Sample records for assessed statistical analyses

  1. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  2. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.
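
    The core mechanics behind such test statistics can be illustrated with a generic parametric-bootstrap adequacy check. The sketch below is not the authors' phylogenetic pipeline; it uses a deliberately simple Poisson-versus-overdispersed example, with all data simulated, to show how a model is rejected when the observed statistic falls in the tail of its distribution under the fitted model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # "Observed" data: overdispersed counts, so a plain Poisson model is wrong.
    observed = rng.negative_binomial(2, 0.2, size=500)

    # Fit the candidate model (the Poisson MLE of the rate is the sample mean).
    lam_hat = observed.mean()

    def dispersion_stat(x):
        """Test statistic: variance-to-mean ratio (about 1 under a Poisson model)."""
        return x.var() / x.mean()

    t_obs = dispersion_stat(observed)

    # Parametric bootstrap: simulate replicates under the fitted model to build
    # the statistic's null distribution, then ask how extreme the data look.
    t_rep = np.array([dispersion_stat(rng.poisson(lam_hat, observed.size))
                      for _ in range(2000)])
    p_value = (t_rep >= t_obs).mean()
    print(f"dispersion = {t_obs:.2f}, bootstrap p = {p_value:.4f}")
    # A tiny p-value flags model inadequacy; the paper's contribution is, in
    # effect, better rejection thresholds for statistics of this kind.
    ```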

  3. Using DEWIS and R for Multi-Staged Statistics e-Assessments

    ERIC Educational Resources Information Center

    Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.

    2016-01-01

    We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…
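
    The mechanism described, seeding a bespoke data set per student and recomputing the reference statistics on the fly, is easy to sketch. The following is a hypothetical Python analogue of the embedded-R idea; the function names and marking scheme are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def bespoke_dataset(student_id: int, n: int = 30):
        """Reproducible per-student data set seeded from the student ID, the
        trick that lets every student receive unique numbers."""
        rng = np.random.default_rng(student_id)
        return rng.normal(50, 8, n), rng.normal(54, 8, n)

    def reference_answers(student_id: int):
        """The marking scheme's 'correct' statistics for that student's data."""
        a, b = bespoke_dataset(student_id)
        t, p = stats.ttest_ind(a, b)   # the SPSS-style output being emulated
        return {"mean_a": round(a.mean(), 2), "mean_b": round(b.mean(), 2),
                "t": round(t, 3), "p": round(p, 3)}

    print(reference_answers(student_id=20231234))
    ```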

  4. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497
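
    For readers unfamiliar with the one-parameter model used here, the sketch below simulates Rasch-type responses and recovers item difficulties by joint maximum likelihood, implemented as an ordinary logistic regression with person and item dummies. This is a simplification of what dedicated IRT software does (no conditional or marginal estimation), and all numbers are simulated.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_students, n_items = 200, 12

    # Simulate Rasch responses: P(correct) = logistic(theta_person - b_item).
    theta = rng.normal(0, 1, n_students)        # person abilities
    b = np.linspace(-1.5, 1.5, n_items)         # item difficulties
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resp = (rng.random((n_students, n_items)) < p).astype(int)

    # Joint ML needs mixed response patterns: drop all-right/all-wrong students.
    keep = (resp.sum(axis=1) > 0) & (resp.sum(axis=1) < n_items)
    resp = resp[keep]

    long = pd.DataFrame({
        "correct": resp.ravel(),
        "student": np.repeat(np.arange(resp.shape[0]), n_items),
        "item": np.tile(np.arange(n_items), resp.shape[0]),
    })

    # Logistic regression with person and item dummies = joint ML estimation.
    fit = smf.logit("correct ~ C(student) + C(item)", data=long).fit(disp=False)

    # Item coefficients estimate -(b_i - b_0): difficulties relative to item 0.
    est = np.array([0.0] + [fit.params[f"C(item)[T.{i}]"]
                            for i in range(1, n_items)])
    print(np.round(-est, 2))   # should track b - b[0] = 0.0 ... 3.0
    ```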

  5. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses and attempts to recreate the three most recent statistical analyses, by Ertel (1994), Tapping et al. (2001), and Yeung (2006), all of which purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, each analysis made arbitrary selections or assumptions, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus of this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls: inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are, unfortunately, not noted often enough in review.
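
    The un-binned approach the abstract alludes to can be sketched as a permutation test: compare sunspot numbers in pandemic years against all other years with a rank-based statistic, and build its null distribution by reshuffling the pandemic labels. Everything below (the series and the pandemic years) is illustrative stand-in data, not the Wolf or Group series used in the paper.

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)

    # Stand-in series: annual sunspot numbers and pandemic years.
    years = np.arange(1700, 2017)
    sunspots = rng.gamma(shape=2.0, scale=40.0, size=years.size)
    pandemic_years = np.array([1729, 1781, 1830, 1889, 1918, 1957, 1968, 2009])
    is_pandemic = np.isin(years, pandemic_years)

    # Un-binned statistic: rank-based comparison of sunspot numbers in pandemic
    # years versus all other years (no arbitrary binning of the series).
    u_obs, _ = mannwhitneyu(sunspots[is_pandemic], sunspots[~is_pandemic])

    # Null distribution: randomly reassign the pandemic labels among years.
    n_perm = 10_000
    u_null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(is_pandemic)
        u_null[k], _ = mannwhitneyu(sunspots[perm], sunspots[~perm])

    p = (np.abs(u_null - u_null.mean()) >= abs(u_obs - u_null.mean())).mean()
    print(f"permutation p = {p:.3f}")
    ```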

  6. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
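
    The two quantities tabulated here are easy to reproduce for a single hypothetical paper. A sketch, assuming independent two-sided tests at alpha = .05 (statsmodels' power module does the t-test calculation):

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power of a two-sample t-test (n = 64 per group) for Cohen's small,
    # medium, and large effects (d = 0.2, 0.5, 0.8).
    for d in (0.2, 0.5, 0.8):
        power = analysis.solve_power(effect_size=d, nobs1=64, alpha=0.05)
        print(f"d = {d}: power = {power:.2f}")

    # Experiment-wise Type I error for k independent tests at alpha = .05:
    # P(at least one false positive) = 1 - (1 - alpha)^k. With k = 15 this
    # already reaches the review's median of about .54.
    alpha, k = 0.05, 15
    print(f"experiment-wise alpha for k = {k}: {1 - (1 - alpha) ** k:.2f}")
    ```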

  7. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    [Search-result excerpt; the fragment is table-of-contents residue from the report:] Summary statistics (N=8) and results of statistical analyses for impact tests performed on the forefoot of unworn footwear; (A-2) summary statistics (N=8) and results of analyses on the forefoot of worn footwear; (B-2) summary statistics (N=4) and results of statistical analyses for impact tests. … used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was…

  8. Statistical approaches to assessing single and multiple outcome measures in dry eye therapy and diagnosis.

    PubMed

    Tomlinson, Alan; Hair, Mario; McFadyen, Angus

    2013-10-01

    Dry eye is a multifactorial disease which would require a broad spectrum of test measures in the monitoring of its treatment and diagnosis. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.

  9. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs, studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
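
    The type I error adjustment whose absence the review criticises is a one-liner in most statistical environments. A sketch with invented p-values for 15 HRQoL sub-dimensions, using statsmodels' multipletests:

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    # Hypothetical unadjusted p-values from testing 15 HRQoL sub-dimensions
    # at a single assessment time (illustrative values only).
    raw_p = np.array([0.003, 0.012, 0.021, 0.034, 0.040, 0.050, 0.11, 0.14,
                      0.22, 0.31, 0.42, 0.55, 0.63, 0.71, 0.88])
    print("unadjusted 'significant' tests:", (raw_p < 0.05).sum())

    for method in ("bonferroni", "holm"):
        reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method=method)
        print(f"{method}: {reject.sum()} of {raw_p.size} remain significant")
    # Five tests look significant unadjusted; after family-wise correction only
    # one survives, which is exactly the false-positive risk the review flags.
    ```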

  10. Longitudinal Assessment of Self-Reported Recent Back Pain and Combat Deployment in the Millennium Cohort Study

    DTIC Science & Technology

    2016-11-15

    participants who were followed for the development of back pain for an average of 3.9 years. Methods: descriptive statistics and longitudinal… Key words: health, military personnel, occupational health, outcome assessment, statistics, survey methodology. Level of Evidence: 3. Spine 2016;41:1754–1763. …based on the National Health and Nutrition Examination Survey. Statistical analysis: descriptive and univariate analyses compared characteristics…

  11. Epidemiology Characteristics, Methodological Assessment and Reporting of Statistical Analysis of Network Meta-Analyses in the Field of Cancer

    PubMed Central

    Ge, Long; Tian, Jin-hui; Li, Xiu-xia; Song, Fujian; Li, Lun; Zhang, Jun; Li, Ge; Pei, Gai-qin; Qiu, Xia; Yang, Ke-hu

    2016-01-01

    Because of their methodological complexity, network meta-analyses (NMAs) may be more vulnerable to methodological risks than conventional pair-wise meta-analyses. Our study aims to investigate the epidemiological characteristics, conduct of the literature search, methodological quality, and reporting of the statistical analysis process of NMAs in the field of cancer, based on the PRISMA extension statement and a modified AMSTAR checklist. We identified and included 102 NMAs in the field of cancer. Sixty-one NMAs were conducted using a Bayesian framework. Of these, more than half did not report an assessment of convergence (60.66%). Inconsistency was assessed in 27.87% of NMAs. Assessment of heterogeneity in traditional meta-analyses was more common (42.62%) than in NMAs (6.56%). Most NMAs did not report an assessment of similarity (86.89%) and did not use the GRADE tool to assess the quality of evidence (95.08%). Forty-three NMAs were adjusted indirect comparisons; the methods used were described in 53.49% of these. Only 4.65% of NMAs described the details of the handling of multi-group trials, and 6.98% described the methods of similarity assessment. The median total AMSTAR score was 8.00 (IQR: 6.00–8.25). Methodological quality and reporting of statistical analysis did not substantially differ by selected general characteristics. Overall, the quality of NMAs in the field of cancer was generally acceptable. PMID:27848997

  12. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills.

  13. Study Quality in SLA: An Assessment of Designs, Analyses, and Reporting Practices in Quantitative L2 Research

    ERIC Educational Resources Information Center

    Plonsky, Luke

    2013-01-01

    This study assesses research and reporting practices in quantitative second language (L2) research. A sample of 606 primary studies, published from 1990 to 2010 in "Language Learning and Studies in Second Language Acquisition," was collected and coded for designs, statistical analyses, reporting practices, and outcomes (i.e., effect…

  14. Examining the Relationship between Gender and Drug-Using Behaviors in Adolescents: The Use of Diagnostic Assessments and Biochemical Analyses of Urine Samples.

    ERIC Educational Resources Information Center

    James, William H.; Moore, David D.

    1999-01-01

    Examines the relationship between gender and drug use among adolescents using diagnostic assessments and biochemical analyses of urine samples. Statistical significance was found in the relationship between gender and marijuana use. The study confirms that more research is needed in this area. (Author/MKA)

  15. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    PubMed

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show, first, that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally assess statistical quality poorly. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  16. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  17. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  18. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
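
    The kind of power simulation described can be sketched in a few lines: generate an autocorrelated series with a known level shift, test for the shift with an intervention dummy, and count detections. This is a univariate toy version under assumed AR(1) noise, not the author's bivariate intervention models.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)

    def simulate_ar1(n, phi, sigma):
        """AR(1) noise, a crude stand-in for a detrended (log) price series."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0, sigma)
        return x

    def shift_detected(n=120, phi=0.6, sigma=1.0, shift=0.5, t0=60, alpha=0.05):
        """Simulate a series with a level shift at t0 and test for it with a
        step dummy in OLS (HAC errors to respect the autocorrelation)."""
        y = simulate_ar1(n, phi, sigma)
        y[t0:] += shift                       # the policy/disturbance shock
        step = (np.arange(n) >= t0).astype(float)
        fit = sm.OLS(y, sm.add_constant(step)).fit(cov_type="HAC",
                                                   cov_kwds={"maxlags": 4})
        return fit.pvalues[1] < alpha

    power = np.mean([shift_detected() for _ in range(500)])
    print(f"estimated power to detect the shift: {power:.2f}")
    ```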

  19. Contour plot assessment of existing meta-analyses confirms robust association of statin use and acute kidney injury risk.

    PubMed

    Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W

    2015-10-01

    Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis, and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the example data considered, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. Supporting this, for some previously inconclusive meta-analyses, a study update might yield a statistically significant kidney injury risk increase associated with higher statin exposure. The illustrated contour approach should become a standard tool for assessing the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.
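
    The underlying computation for such contour plots is straightforward for a fixed-effect pooled estimate: sweep a grid over the effect and standard error a future study might report, update the pooled estimate, and contour the resulting test statistic. The sketch below uses invented effect sizes, not the statin/acute-kidney-injury data.

    ```python
    import numpy as np

    # Existing meta-analysis: log relative risks and standard errors (invented).
    yi = np.array([0.15, 0.22, 0.08, 0.30, 0.18])
    sei = np.array([0.10, 0.12, 0.09, 0.15, 0.11])
    wi = 1 / sei**2
    pooled = (wi * yi).sum() / wi.sum()
    print(f"current pooled log-RR = {pooled:.3f} (SE {np.sqrt(1 / wi.sum()):.3f})")

    # Augmentation grid: effect and standard error a future study might report.
    E, S = np.meshgrid(np.linspace(-0.5, 0.5, 201), np.linspace(0.05, 0.5, 201))
    w_new = 1 / S**2

    # Updated fixed-effect pooled estimate and z statistic after adding it.
    upd = (wi.sum() * pooled + w_new * E) / (wi.sum() + w_new)
    z = upd / np.sqrt(1 / (wi.sum() + w_new))

    # The z = 1.96 contour over (E, S) is the kind of display the paper
    # advocates: it shows which future studies would keep the pooled effect
    # statistically significant.
    print(f"share of candidate studies keeping z > 1.96: {(z > 1.96).mean():.2f}")
    ```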

  20. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    NASA Technical Reports Server (NTRS)

    Davis, B. J.; Feiveson, A. H.

    1975-01-01

    Results of CITARS data processing are presented in raw form. Tables of descriptive statistics are given, along with descriptions and results of inferential analyses. The inferential results are organized by the questions that CITARS was designed to answer.

  1. Effects of Exercise in the Treatment of Overweight and Obese Children and Adolescents: A Systematic Review of Meta-Analyses

    PubMed Central

    Kelley, George A.; Kelley, Kristi S.

    2013-01-01

    Purpose. Conduct a systematic review of previous meta-analyses addressing the effects of exercise in the treatment of overweight and obese children and adolescents. Methods. Previous meta-analyses of randomized controlled exercise trials that assessed adiposity in overweight and obese children and adolescents were included by searching nine electronic databases and cross-referencing from retrieved studies. Methodological quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) Instrument. The alpha level for statistical significance was set at P ≤ 0.05. Results. Of the 308 studies reviewed, two aggregate-data meta-analyses, representing 14 and 17 studies and 481 and 701 boys and girls, met all eligibility criteria. Methodological quality was 64% and 73%. For both studies, statistically significant reductions in percent body fat were observed (P = 0.006 and P < 0.00001). The number-needed-to-treat (NNT) was 4 and 3, with an estimated 24.5 and 31.5 million overweight and obese children in the world potentially benefitting (2.8 and 3.6 million in the US). No other measures of adiposity (BMI-related measures, body weight, and central obesity) were statistically significant. Conclusions. Exercise is efficacious for reducing percent body fat in overweight and obese children and adolescents. Insufficient evidence exists to suggest that exercise reduces other measures of adiposity. PMID:24455215

  2. Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials

    PubMed Central

    Dwan, Kerry; Altman, Douglas G.; Clarke, Mike; Gamble, Carrol; Higgins, Julian P. T.; Sterne, Jonathan A. C.; Williamson, Paula R.; Kirkham, Jamie J.

    2014-01-01

    Background Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs). Methods and Findings A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively. Conclusions Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies. PMID:24959719

  3. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
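
    The standard quantities discussed here are simple to compute from study effects and variances. A minimal sketch with invented effect estimates; the generalised-Q variant replaces the fixed-effect weights, as noted in the final comment.

    ```python
    import numpy as np

    # Illustrative study effects (e.g., log hazard ratios) and their variances.
    yi = np.array([-0.22, -0.05, -0.41, 0.10, -0.30, -0.15])
    vi = np.array([0.04, 0.02, 0.09, 0.05, 0.03, 0.06])
    wi = 1 / vi
    k = yi.size

    # Fixed-effect pooled estimate and Cochran's Q.
    mu_fe = (wi * yi).sum() / wi.sum()
    Q = (wi * (yi - mu_fe) ** 2).sum()

    # I^2: proportion of total variability attributable to heterogeneity.
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100

    # DerSimonian-Laird tau^2 (method of moments), the usual random-effects input.
    tau2 = max(0.0, (Q - (k - 1)) / (wi.sum() - (wi**2).sum() / wi.sum()))

    print(f"Q = {Q:.2f} on {k - 1} df, I^2 = {I2:.0f}%, tau^2 = {tau2:.4f}")
    # The 'generalised' Q the paper reviews replaces the fixed-effect weights
    # with 1/(vi + tau2) and solves for the tau2 that sets Q at its expectation.
    ```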

  4. Formative Assessment in Mathematics for Engineering Students

    ERIC Educational Resources Information Center

    Ní Fhloinn, Eabhnat; Carr, Michael

    2017-01-01

    In this paper, we present a range of formative assessment types for engineering mathematics, including in-class exercises, homework, mock examination questions, table quizzes, presentations, critical analyses of statistical papers, peer-to-peer teaching, online assessments and electronic voting systems. We provide practical tips for the…

  5. Survey of the Methods and Reporting Practices in Published Meta-analyses of Test Performance: 1987 to 2009

    ERIC Educational Resources Information Center

    Dahabreh, Issa J.; Chung, Mei; Kitsios, Georgios D.; Terasawa, Teruhiko; Raman, Gowri; Tatsioni, Athina; Tobar, Annette; Lau, Joseph; Trikalinos, Thomas A.; Schmid, Christopher H.

    2013-01-01

    We performed a survey of meta-analyses of test performance to describe the evolution in their methods and reporting. Studies were identified through MEDLINE (1966-2009), reference lists, and relevant reviews. We extracted information on clinical topics, literature review methods, quality assessment, and statistical analyses. We reviewed 760…

  6. Methodological difficulties of conducting agroecological studies from a statistical perspective

    USDA-ARS?s Scientific Manuscript database

    Statistical methods for analysing agroecological data might not be able to help agroecologists to solve all of the current problems concerning crop and animal husbandry, but such methods could well help agroecologists to assess, tackle, and resolve several agroecological issues in a more reliable an...

  7. Chapter C. The Loma Prieta, California, Earthquake of October 17, 1989 - Building Structures

    USGS Publications Warehouse

    Çelebi, Mehmet

    1998-01-01

    Several approaches are used to assess the performance of the built environment following an earthquake -- preliminary damage surveys conducted by professionals, detailed studies of individual structures, and statistical analyses of groups of structures. Reports of damage that are issued by many organizations immediately following an earthquake play a key role in directing subsequent detailed investigations. Detailed studies of individual structures and statistical analyses of groups of structures may be motivated by particularly good or bad performance during an earthquake. Beyond this, practicing engineers typically perform stress analyses to assess the performance of a particular structure to vibrational levels experienced during an earthquake. The levels may be determined from recorded or estimated ground motions; actual levels usually differ from design levels. If a structure has seismic instrumentation to record response data, the estimated and recorded response and behavior of the structure can be compared.

  8. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    PubMed Central

    Hallgren, Kevin A.

    2012-01-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR. PMID:22833776
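
    The paper's appendices provide SPSS and R syntax; an equivalent computation is easy in other environments. Below is a Python analogue on simulated ratings: Cohen's kappa for two coders via scikit-learn, and a single-measure two-way ICC computed from the classical ANOVA mean squares (Shrout & Fleiss, 1979).

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(3)

    # Cohen's kappa: two coders assigning one of three nominal codes to 50
    # items, each agreeing with a latent "true" code 80% of the time.
    truth = rng.integers(0, 3, size=50)
    coder1 = np.where(rng.random(50) < 0.8, truth, rng.integers(0, 3, size=50))
    coder2 = np.where(rng.random(50) < 0.8, truth, rng.integers(0, 3, size=50))
    print(f"Cohen's kappa = {cohen_kappa_score(coder1, coder2):.2f}")

    # ICC(2,1): n subjects rated on a continuous scale by the same k raters,
    # computed from the two-way ANOVA mean squares.
    n, k = 30, 4
    ratings = rng.normal(50, 10, size=(n, 1)) + rng.normal(0, 5, size=(n, k))

    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    icc_2_1 = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    print(f"ICC(2,1) = {icc_2_1:.2f}")
    ```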

  9. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  10. Using Data Mining to Teach Applied Statistics and Correlation

    ERIC Educational Resources Information Center

    Hartnett, Jessica L.

    2016-01-01

    This article describes two class activities that introduce the concept of data mining and very basic data mining analyses. Assessment data suggest that students learned some of the conceptual basics of data mining, understood some of the ethical concerns related to the practice, and were able to perform correlations via the Statistical Package for…

  11. Are conventional statistical techniques exhaustive for defining metal background concentrations in harbour sediments? A case study: The Coastal Area of Bari (Southeast Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio

    2015-11-01

    Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of the background values of metal and metalloid distribution based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists of addressing the scope and limitations of analysing port sediments through the use of conventional statistical techniques (such as linear regression analysis, construction of cumulative frequency curves, and the iterative 2σ technique) that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit well with statistical equations), it should nevertheless be avoided because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
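
    Of the conventional techniques named, the iterative 2σ technique is the most algorithmic, so a sketch helps fix ideas: repeatedly trim observations outside mean ± 2·SD until the retained subset stabilises, and take its mean as the background estimate. The concentrations below are simulated stand-ins for sediment metal data.

    ```python
    import numpy as np

    def iterative_2sigma(values, max_iter=100):
        """Iterative 2-sigma screen: discard observations outside mean +/- 2*SD,
        recompute, and repeat until no further points are removed."""
        x = np.asarray(values, dtype=float)
        for _ in range(max_iter):
            m, s = x.mean(), x.std(ddof=1)
            kept = x[(x >= m - 2 * s) & (x <= m + 2 * s)]
            if kept.size == x.size:
                break
            x = kept
        return x.mean(), x.std(ddof=1), x.size

    rng = np.random.default_rng(11)
    # Stand-in data: a natural background population plus a contaminated tail,
    # mimicking a metal concentration (mg/kg) in harbour sediments.
    concentrations = np.concatenate([rng.normal(30, 5, 450),
                                     rng.normal(90, 20, 50)])

    mean, sd, n_kept = iterative_2sigma(concentrations)
    print(f"estimated background: {mean:.1f} +/- {sd:.1f} mg/kg (n = {n_kept})")
    # The paper's caution: this purely statistical screen converges happily even
    # when mineralogical or multivariate checks would tell a different story.
    ```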

  12. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
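
    The key numerical step in the RFT approach can be sketched with the standard Euler-characteristic density for a smooth two-dimensional Gaussian field (the form used in SPM-style analyses); the resel and pixel counts below are invented for illustration, and the comparison with Bonferroni mirrors the abstract's point.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def ec_density_2d(z):
        """Expected Euler-characteristic density of a smooth 2D Gaussian field
        (Worsley's rho_2), the quantity RFT uses to set thresholds."""
        return (4 * np.log(2)) / (2 * np.pi) ** 1.5 * z * np.exp(-z**2 / 2)

    # Hypothetical search region: 120 resels (resolution elements implied by
    # the estimated field smoothness, FWHM).
    resels = 120.0
    alpha = 0.05

    # RFT threshold: solve resels * rho_2(z) = alpha (a high-threshold result).
    z_rft = brentq(lambda z: resels * ec_density_2d(z) - alpha, 1.0, 10.0)

    # Bonferroni threshold treating every pixel as independent; assume the
    # 120 resels correspond to, say, 30,000 actual pixels.
    z_bonf = norm.isf(alpha / 30_000)

    print(f"RFT z-threshold = {z_rft:.2f}, Bonferroni = {z_bonf:.2f}")
    # RFT relaxes the threshold because neighbouring pixels are correlated,
    # which is exactly the conservatism of Bonferroni noted in the abstract.
    ```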

  13. The SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Gelman, Mel; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2003-01-01

    Our current confidence in 'observed' climatological winds and temperatures in the middle atmosphere (over altitudes approx. 10-80 km) is assessed by detailed intercomparisons of contemporary and historic data sets. These data sets include global meteorological analyses and assimilations, climatologies derived from research satellite measurements, and historical reference atmosphere circulation statistics. We also include comparisons with historical rocketsonde wind and temperature data, and with more recent lidar temperature measurements. The comparisons focus on a few basic circulation statistics, such as temperature, zonal wind, and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. Assimilated data sets provide the most realistic tropical variability, but substantial differences exist among current schemes.

  14. A Comparison of Self versus Tutor Assessment among Hungarian Undergraduate Business Students

    ERIC Educational Resources Information Center

    Kun, András István

    2016-01-01

    This study analyses the self-assessment behaviour and efficiency of 163 undergraduate business students from Hungary. Using various statistical methods, the results support the hypothesis that high-achieving students are more accurate in their pre- and post-examination self-assessments, and also less likely to overestimate their performance, and,…

  15. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298

  16. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
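
    A compact sketch of the workflow the paper demonstrates: ROC discrimination plus interval (multilevel) diagnostic likelihood ratios for score bands. The data are simulated checklist scores, not the CBCL data; the band cut-offs 8 and 30 simply echo the abstract.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)

    # Simulated checklist scores for youths with and without a mood disorder.
    n = 589
    mood = rng.random(n) < 0.25
    score = np.where(mood, rng.normal(24, 9, n), rng.normal(13, 8, n)).clip(min=0)

    print(f"AUC = {roc_auc_score(mood, score):.2f}")

    def interval_lr(lo, hi):
        """Interval diagnostic likelihood ratio for scores in [lo, hi):
        P(score in band | disorder) / P(score in band | no disorder)."""
        in_band = (score >= lo) & (score < hi)
        return in_band[mood].mean() / in_band[~mood].mean()

    for lo, hi in [(0, 8), (8, 30), (30, np.inf)]:
        print(f"scores in [{lo}, {hi}): DLR = {interval_lr(lo, hi):.2f}")
    # DLRs well below 1 argue against the diagnosis and large DLRs argue for
    # it, which is how ROC output becomes a bedside decision aid.
    ```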

  17. Classroom Assessments of 6000 Teachers: What Do the Results Show about the Effectiveness of Teaching and Learning?

    ERIC Educational Resources Information Center

    Hill, Flo H.; And Others

    This paper presents the results of a series of summary analyses of descriptive statistics concerning 5,720 Louisiana teachers who were assessed with the System for Teaching and Learning Assessment and Review (STAR)--a comprehensive on-the-job statewide teacher assessment system--during the second pilot year (1989-90). Data were collected by about…

  18. A Health Assessment Survey of Veteran Students: Utilizing a Community College-Veterans Affairs Medical Center Partnership.

    PubMed

    Misra-Hebert, Anita D; Santurri, Laura; DeChant, Richard; Watts, Brook; Sehgal, Ashwini R; Aron, David C

    2015-10-01

    To assess health status among student veterans at a community college utilizing a partnership between a Veterans Affairs Medical Center and a community college. Student veterans at Cuyahoga Community College in Cleveland, Ohio, in January to April 2013. A health assessment survey was sent to 978 veteran students. Descriptive analyses to assess prevalence of clinical diagnoses and health behaviors were performed. Logistic regression analyses were performed to assess for independent predictors of functional limitations. 204 students participated in the survey (21% response rate). Self-reported depression and unhealthy behaviors were high. Physical and emotional limitations (45% and 35%, respectively), and pain interfering with work (42%) were reported. Logistic regression analyses confirmed the independent association of self-reported depression with functional limitation (odds ratio [OR] = 3.3, 95% confidence interval [CI] 1.4-7.8, p < 0.05, and C statistic 0.72) and of post-traumatic stress disorder with pain interfering with work (OR 3.9, CI 1.1-13.6, p < 0.05, and C statistic 0.75). A health assessment survey identified priority areas to inform targeted health promotion for student veterans at a community college. A partnership between a Veterans Affairs Medical Center and a community college can be utilized to help understand the health needs of veteran students. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
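
    For readers who want the mechanics of the reported analysis (an odds ratio with 95% CI plus a C statistic), here is a sketch on simulated survey data; the variable names and effect size are invented, chosen only to echo the depression/functional-limitation association.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(9)

    # Simulated stand-in for the survey (n echoes the 204 respondents;
    # the true odds ratio is exp(1.2), about 3.3).
    n = 204
    depression = (rng.random(n) < 0.35).astype(int)
    p_limited = 1 / (1 + np.exp(-(-1.2 + 1.2 * depression)))
    limited = (rng.random(n) < p_limited).astype(int)
    df = pd.DataFrame({"limited": limited, "depression": depression})

    fit = smf.logit("limited ~ depression", data=df).fit(disp=False)

    or_est = np.exp(fit.params["depression"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["depression"])
    c_stat = roc_auc_score(df["limited"], fit.predict(df))   # the C statistic
    print(f"OR = {or_est:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f}), C = {c_stat:.2f}")
    ```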

  19. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3,946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2,364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively; that is, the decline was slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the renal function decline rate with ageing, because its estimation is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g., a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
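
    The cross-sectional/longitudinal contrast can be sketched by separating baseline age (a between-subject gradient, which can include cohort effects) from time in study (the within-subject ageing decline). The random-coefficient model below is in the spirit of the paper's LT analysis; all data are simulated and the numbers are not from the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(21)

    n_subj, n_visits = 400, 4
    base_age = rng.uniform(30, 88, n_subj)
    u = rng.normal(0, 8, n_subj)                # subject-level intercepts
    slope_i = rng.normal(-0.95, 0.3, n_subj)    # within-subject decline per year

    rows = []
    for i in range(n_subj):
        for j in range(n_visits):
            t = 2.0 * j                         # visits two years apart
            ccr = (120 - 1.4 * base_age[i] + u[i] + slope_i[i] * t
                   + rng.normal(0, 4))
            rows.append((i, base_age[i], t, ccr))
    df = pd.DataFrame(rows, columns=["subject", "base_age", "years", "ccr"])

    # Cross-sectional analysis: first visit only, slope across people's ages.
    cs = smf.ols("ccr ~ base_age", data=df[df["years"] == 0]).fit()

    # Longitudinal analysis: random-coefficient (mixed) model, within-subject slope.
    lt = smf.mixedlm("ccr ~ base_age + years", df, groups=df["subject"],
                     re_formula="~years").fit()

    print(f"cross-sectional decline: {-cs.params['base_age']:.2f} ml/min/year")
    print(f"longitudinal decline:    {-lt.params['years']:.2f} ml/min/year")
    # The between-person gradient (about 1.4, inflated here by a built-in cohort
    # effect) is steeper than the true within-person decline (about 0.95),
    # mirroring the paper's CS-versus-LT contrast.
    ```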

  20. Borrowing of strength and study weights in multivariate and network meta-analysis.

    PubMed

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2017-12-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of 'borrowing of strength'. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis).
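
    The paper's formal definitions come from a score-statistic decomposition; the toy below only mirrors the informal idea the abstract starts from, comparing the precision of a univariate pooled estimate with its multivariate counterpart in a bivariate fixed-effect meta-analysis where one outcome is missing in some studies. All numbers, and the within-study correlation, are invented.

    ```python
    import numpy as np

    # Bivariate fixed-effect meta-analysis: two correlated outcomes per study,
    # with outcome 2 missing in two of the four studies.
    y = [np.array([0.30, 0.20]), np.array([0.25, 0.10]),
         np.array([0.40, np.nan]), np.array([0.15, np.nan])]
    se = [np.array([0.10, 0.12]), np.array([0.12, 0.15]),
          np.array([0.08, np.nan]), np.array([0.09, np.nan])]
    rho = 0.7   # assumed within-study correlation between the outcomes

    prec = np.zeros((2, 2))   # accumulated precision matrix
    wy = np.zeros(2)          # accumulated precision-weighted effects
    for yi, si in zip(y, se):
        if not np.isnan(yi).any():
            V = np.array([[si[0]**2, rho * si[0] * si[1]],
                          [rho * si[0] * si[1], si[1]**2]])
            Vinv = np.linalg.inv(V)
            prec += Vinv
            wy += Vinv @ yi
        else:                 # in this toy, only outcome 1 is observed
            prec[0, 0] += 1 / si[0]**2
            wy[0] += yi[0] / si[0]**2
    cov_mv = np.linalg.inv(prec)
    mu_mv = cov_mv @ wy

    # Univariate pooled estimate of outcome 2 uses the complete studies only.
    var_uv = 1 / (1 / se[0][1]**2 + 1 / se[1][1]**2)

    # Borrowing of strength as percentage precision gained for outcome 2.
    bos = 100 * (1 - cov_mv[1, 1] / var_uv)
    print(f"outcome-2 pooled estimate {mu_mv[1]:.3f}; BoS = {bos:.1f}%")
    ```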

  1. Borrowing of strength and study weights in multivariate and network meta-analysis

    PubMed Central

    Jackson, Dan; White, Ian R; Price, Malcolm; Copas, John; Riley, Richard D

    2016-01-01

    Multivariate and network meta-analysis have the potential for the estimated mean of one effect to borrow strength from the data on other effects of interest. The extent of this borrowing of strength is usually assessed informally. We present new mathematical definitions of ‘borrowing of strength’. Our main proposal is based on a decomposition of the score statistic, which we show can be interpreted as comparing the precision of estimates from the multivariate and univariate models. Our definition of borrowing of strength therefore emulates the usual informal assessment. We also derive a method for calculating study weights, which we embed into the same framework as our borrowing of strength statistics, so that percentage study weights can accompany the results from multivariate and network meta-analyses as they do in conventional univariate meta-analyses. Our proposals are illustrated using three meta-analyses involving correlated effects for multiple outcomes, multiple risk factor associations and multiple treatments (network meta-analysis). PMID:26546254

  2. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism.

    PubMed

    Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich

    2011-04-01

    Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication.

  3. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism

    PubMed Central

    Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich

    2011-01-01

    Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication. PMID:21157472

  4. Evaluation of a weighted test in the analysis of ordinal gait scores in an additivity model for five OP pesticides.

    EPA Science Inventory

    Appropriate statistical analyses are critical for evaluating interactions of mixtures with a common mode of action, as is often the case for cumulative risk assessments. Our objective is to develop analyses for use when a response variable is ordinal, and to test for interaction...

  5. Family Early Literacy Practices Questionnaire: A Validation Study for a Spanish-Speaking Population

    ERIC Educational Resources Information Center

    Lewis, Kandia

    2012-01-01

    The purpose of the current study was to evaluate the psychometric validity of a Spanish-translated version of a family involvement questionnaire (the FELP) using a mixed-methods design. Thus, statistical analyses (i.e., factor analysis, reliability analysis, and item analysis) and qualitative analyses (i.e., focus group data) were conducted.…

  6. On Improving the Quality and Interpretation of Environmental Assessments using Statistical Analysis and Geographic Information Systems

    NASA Astrophysics Data System (ADS)

    Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.

    2014-12-01

    An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in the literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is the lack of data, which can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.

  7. Technical Report of the NAEP Mathematics Assessment in Puerto Rico: Focus on Statistical Issues. NCES 2007-462

    ERIC Educational Resources Information Center

    Baxter, G. P.; Ahmed, S.; Sikali, E.; Waits, T.; Sloan, M.; Salvucci, S.

    2007-01-01

    In 2003, a trial National Assessment of Educational Progress (NAEP) mathematics assessment was administered in Spanish to public school students at grades 4 and 8 in Puerto Rico. Based on preliminary analyses of the 2003 data, changes were made in administration and translation procedures for the 2005 NAEP administration in Puerto Rico. This…

  8. Methodological and Reporting Quality of Systematic Reviews and Meta-analyses in Endodontics.

    PubMed

    Nagendrababu, Venkateshbabu; Pulikkotil, Shaju Jacob; Sultan, Omer Sheriff; Jayaraman, Jayakumar; Peters, Ove A

    2018-06-01

    The aim of this systematic review (SR) was to evaluate the quality of SRs and meta-analyses (MAs) in endodontics. A comprehensive literature search was conducted to identify relevant articles in the electronic databases from January 2000 to June 2017. Two reviewers independently assessed the articles for eligibility and data extraction. SRs and MAs on interventional studies with a minimum of 2 therapeutic strategies in endodontics were included in this SR. Methodologic and reporting quality were assessed using A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA), respectively. The interobserver reliability was calculated using the Cohen kappa statistic. Statistical analysis with the level of significance at P < .05 was performed using Kruskal-Wallis tests and simple linear regression analysis. A total of 30 articles were selected for the current SR. Using AMSTAR, the item concerning use of the scientific quality of the included studies when formulating conclusions was adhered to by fewer than 40% of studies. Using PRISMA, 3 items, concerning objectives, protocol registration, and funding, were reported by fewer than 40% of studies. No association was evident between quality and either the number of authors or the country of origin. Statistical significance was observed when quality was compared among journals, with studies published as Cochrane reviews superior to those published in other journals. AMSTAR and PRISMA scores were significantly related. SRs in endodontics showed variability in both methodologic and reporting quality. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  9. P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.

    PubMed

    Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D

    2017-11-01

    P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access and the capability to analyze multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 American Association for Cancer Research.

  10. The use and misuse of statistical analyses. [in geophysics and space physics

    NASA Technical Reports Server (NTRS)

    Reiff, P. H.

    1983-01-01

    The statistical techniques most often used in space physics include Fourier analysis, linear correlation, auto- and cross-correlation, power spectral density, and superposed epoch analysis. Tests are presented which can evaluate the significance of the results obtained through each of these. Data presented without some form of error analysis are frequently useless, since they offer no way of assessing whether a bump on a spectrum or on a superposed epoch analysis is real or merely a statistical fluctuation. Among many of the published linear correlations, for instance, the uncertainty in the intercept and slope is not given, so that the significance of the fitted parameters cannot be assessed.
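
    The point about reporting slope and intercept uncertainties is easy to act on. As a rough illustration (not from the paper), here is a minimal Python sketch using SciPy's linregress on synthetic data; the intercept_stderr attribute assumes SciPy 1.6 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # synthetic data

res = stats.linregress(x, y)
# Reporting the standard errors lets a reader judge whether the fitted
# parameters are significant, which the abstract notes is often omitted.
print(f"slope     = {res.slope:.3f} +/- {res.stderr:.3f}")
print(f"intercept = {res.intercept:.3f} +/- {res.intercept_stderr:.3f}")
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.3g}")
```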

  11. Prevention and anthropology.

    PubMed

    Jopp, Eilin; Scheffler, Christiane; Hermanussen, Michael

    2014-01-01

    Screening is an important issue in medicine and is used to identify unrecognised diseases early in persons who are apparently in good health. Screening strongly relies on the concept of "normal values". Normal values are defined as values that are frequently observed in a population and usually range within certain statistical limits. Screening for obesity should start early, as the prevalence of obesity already consolidates at early school age. Though widely practiced, measuring BMI is not the ultimate solution for detecting obesity. Children with high BMI may be "robust" in skeletal dimensions. Assessing skeletal robustness and, in particular, assessing developmental tempo in adolescents are also important issues in health screening. Yet, in spite of the necessity of screening investigations, appropriate reference values are often missing. Meanwhile, new concepts of growth diagrams have been developed. Stage line diagrams are useful for tracking developmental processes over time. Functional data analyses have been used efficiently for analysing longitudinal growth in height and assessing the tempo of maturation. Convenient low-cost statistics have also been developed for generating synthetic national references.

  12. Tests of Alignment among Assessment, Standards, and Instruction Using Generalized Linear Model Regression

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.; Polikoff, Morgan S.

    2014-01-01

    An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…

  13. Closing the Gap: An Overview. The Achievement Gap: An Overview. Info Brief. Number 44

    ERIC Educational Resources Information Center

    Poliakoff, Anne Rogers

    2006-01-01

    Persistent gaps between the academic achievements of different groups of children are thoroughly documented by the U.S. National Assessment of Educational Progress and other statistical analyses of state assessments, grades, course selection, and dropout rates. Despite improvements in some years, the gap endures as a consistent and disturbing…

  14. Multilevel Motivation and Engagement: Assessing Construct Validity across Students and Schools

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Malmberg, Lars-Erik; Liem, Gregory Arief D.

    2010-01-01

    Statistical biases associated with single-level analyses underscore the importance of partitioning variance/covariance matrices into individual and group levels. From a multilevel perspective based on data from 21,579 students in 58 high schools, the present study assesses the multilevel factor structure of motivation and engagement with a…

  15. Psycho-Motor Needs Assessment of Virginia School Children.

    ERIC Educational Resources Information Center

    Glen Haven Achievement Center, Fort Collins, CO.

    An effort to assess psycho-motor (P-M) needs among Virginia children in K-4 and in special primary classes for the educable mentally retarded is presented. Included are methods for selecting, combining, and developing evaluation measures, which are verified statistically by analyses of data collected from a stratified sample of approximately 4,500…

  16. Adjusting the Adjusted X²/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    ERIC Educational Resources Information Center

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  17. Impact of ontology evolution on functional analyses.

    PubMed

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
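
    For readers unfamiliar with the statistics underlying such functional enrichment analyses, a single GO-term test is typically a hypergeometric (one-sided Fisher) test. The sketch below uses hypothetical counts to show the calculation; ontology or annotation changes of the kind studied here shift the annotated-gene counts and therefore the resulting p-values.

```python
from scipy.stats import hypergeom

# Hypothetical counts: N annotated genes in the genome, K of them carrying
# a given GO term; a study set of n genes contains k genes with that term.
N, K, n, k = 20000, 300, 500, 18

# P(X >= k): chance of observing at least k annotated genes in the study set.
p_enrichment = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_enrichment:.3g}")
```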

  18. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported on data complications such as missing data and violations of statistical assumptions. Strengths of and areas needing improvement for reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
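
    Since the review highlights effect sizes and the rarity of confidence intervals around them, a minimal sketch of how such an interval can be obtained for Cohen's d is given below (using the common large-sample approximation; the data are simulated, not from the surveyed articles).

```python
import numpy as np

def cohens_d_ci(x1, x2, z=1.96):
    """Cohen's d with an approximate 95% confidence interval."""
    n1, n2 = len(x1), len(x2)
    # Pooled standard deviation.
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1)
                  + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(x1) - np.mean(x2)) / sp
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(1)
g1, g2 = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)
d, (lo, hi) = cohens_d_ci(g1, g2)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```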

  19. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system

    PubMed Central

    Glass-Kaastra, Shiona K.; Pearl, David L.; Reid-Smith, Richard J.; McEwen, Beverly; Slavic, Durda; McEwen, Scott A.; Fairles, Jim

    2014-01-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection. PMID:24688133

  20. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system.

    PubMed

    Glass-Kaastra, Shiona K; Pearl, David L; Reid-Smith, Richard J; McEwen, Beverly; Slavic, Durda; McEwen, Scott A; Fairles, Jim

    2014-04-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection.

  1. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the proportion of significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and the involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This research was development research aimed at producing a Statistics textbook model supported with information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for mathematics students at the college level, to improve their abilities in mathematical connection and communication. The research comprised three stages, i.e. define, design, and develop. The textbook consisted of 10 chapters, each containing an introduction, core material, examples, and exercises. The development phase began with the initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which underwent a limited readability test. Revision of draft 2 in turn produced draft 3, which was piloted on a small sample to produce a valid textbook model. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and Portfolio-Based Assessment was valid and fulfilled the criteria of practicality.

  3. Pooling sexes when assessing ground reaction forces during walking: Statistical Parametric Mapping versus traditional approach.

    PubMed

    Castro, Marcelo P; Pataky, Todd C; Sole, Gisela; Vilas-Boas, Joao Paulo

    2015-07-16

    Ground reaction force (GRF) data from men and women are commonly pooled for analyses. However, it may not be justifiable to pool sexes on the basis of discrete parameters extracted from continuous GRF gait waveforms because this can miss continuous effects. Forty healthy participants (20 men and 20 women) walked at a cadence of 100 steps per minute across two force plates, recording GRFs. Two statistical methods were used to test the null hypothesis of no mean GRF differences between sexes: (i) Statistical Parametric Mapping-using the entire three-component GRF waveform; and (ii) traditional approach-using the first and second vertical GRF peaks. Statistical Parametric Mapping results suggested large sex differences, which post-hoc analyses suggested were due predominantly to higher anterior-posterior and vertical GRFs in early stance in women compared to men. The traditional approach yielded statistically significant differences for the first GRF peak and similar values for the second GRF peak. These contrasting results emphasise that different parts of the waveform have different signal strengths, and thus that the traditional approach can amount to choosing arbitrary metrics and drawing arbitrary conclusions. We suggest that researchers and clinicians consider both the entire gait waveforms and sex-specificity when analysing GRF data. Copyright © 2015 Elsevier Ltd. All rights reserved.
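
    A simplified illustration of the contrast between the two approaches is sketched below on simulated waveforms. Note that this pointwise t-test is only a caricature of Statistical Parametric Mapping, which additionally corrects the significance threshold for waveform smoothness and multiple comparisons via random field theory; the data and effect here are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T = 101                      # stance phase sampled at 0-100%
t = np.linspace(0, 1, T)
base = np.sin(np.pi * t)     # toy vertical-GRF-like waveform

# Toy data: one group shows slightly higher early-stance forces.
women = base + 0.15 * np.exp(-((t - 0.2) / 0.1) ** 2) + rng.normal(0, 0.05, (20, T))
men = base + rng.normal(0, 0.05, (20, T))

# (i) Waveform-level comparison: a t-statistic at every time node.
tstat, p = stats.ttest_ind(women, men, axis=0)
print("time nodes with p < 0.05 (uncorrected):", int(np.sum(p < 0.05)))

# (ii) Traditional approach: compare only the first (early-stance) peak.
peak1_w, peak1_m = women[:, :50].max(axis=1), men[:, :50].max(axis=1)
print("first-peak comparison:", stats.ttest_ind(peak1_w, peak1_m))
```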

  4. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
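
    One of the simplest sensitivity analyses in this spirit is leave-one-out re-estimation. A minimal sketch with hypothetical effect sizes (fixed-effect weighting for brevity) is given below; it shows how much any single, possibly nonindependent or outlying, study drives the pooled result.

```python
import numpy as np

def pool_fixed(effects, variances):
    """Inverse-variance fixed-effect pooled estimate."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(effects)) / np.sum(w))

# Hypothetical effect sizes (e.g., Hedges' g) and sampling variances.
effects = np.array([0.30, 0.25, 0.40, 0.90, 0.28])
variances = np.array([0.02, 0.03, 0.02, 0.02, 0.04])

print(f"all studies: {pool_fixed(effects, variances):.3f}")
# Leave-one-out: re-pool with each study removed in turn.
for i in range(len(effects)):
    keep = np.arange(len(effects)) != i
    print(f"without study {i + 1}: {pool_fixed(effects[keep], variances[keep]):.3f}")
```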

  5. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) are in small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined if rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than true differential effect among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated into the design stage to minimize erroneous conclusions.
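
    The core simulation logic is easy to reproduce in miniature. The toy sketch below (invented parameters, not the ADNI meta-database) simulates trials in which carriers truly progress faster on average, and counts how often a small post hoc comparison is significant in the wrong direction; the rate falls as the per-group sample size grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def reversal_rate(n_per_group, n_trials=4000):
    """Share of simulated trials whose post hoc test is significant in the
    WRONG direction (non-carriers appearing to progress faster), when
    carriers truly progress faster on average (toy ADAS-cog change scores)."""
    hits = 0
    for _ in range(n_trials):
        carriers = rng.normal(4.0, 6.0, n_per_group)
        noncarriers = rng.normal(3.0, 6.0, n_per_group)
        t, p = stats.ttest_ind(noncarriers, carriers)
        hits += (p < 0.05) and (t > 0)   # significant reversal
    return hits / n_trials

for n in (10, 25, 100):
    print(f"n = {n:3d} per group: {reversal_rate(n):.1%} spurious reversals")
```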

  6. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
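
    On the pooling-model recommendation, the two conventional estimators differ only in their weights. A compact sketch of inverse-variance fixed-effect pooling versus DerSimonian-Laird random-effects pooling, with hypothetical inputs, is given below.

```python
import numpy as np

def pool(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects pooled estimates."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    # DerSimonian-Laird estimate of the between-study variance tau^2.
    q = np.sum(w * (y - fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1.0 / (v + tau2)
    random_eff = np.sum(w_re * y) / np.sum(w_re)
    return fixed, random_eff, tau2

y = [0.12, 0.30, -0.05, 0.45, 0.20]    # hypothetical log risk ratios
v = [0.01, 0.02, 0.015, 0.03, 0.01]
fixed, random_eff, tau2 = pool(y, v)
print(f"fixed = {fixed:.3f}, random = {random_eff:.3f}, tau^2 = {tau2:.4f}")
```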

  7. Assessing the effect of land use change on catchment runoff by combined use of statistical tests and hydrological modelling: Case studies from Zimbabwe

    NASA Astrophysics Data System (ADS)

    Lørup, Jens Kristian; Refsgaard, Jens Christian; Mazvimavi, Dominic

    1998-03-01

    The purpose of this study was to identify and assess long-term impacts of land use change on catchment runoff in semi-arid Zimbabwe, based on analyses of long hydrological time series (25-50 years) from six medium-sized (200-1000 km²) non-experimental rural catchments. A methodology combining common statistical methods with hydrological modelling was adopted in order to distinguish between the effects of climate variability and the effects of land use change. The hydrological model (NAM) was in general able to simulate the observed hydrographs very well during the reference period, thus providing a means to account for the effects of climate variability and hence strengthening the power of the subsequent statistical tests. In the test period the validated model was used to provide the runoff record which would have occurred in the absence of land use change. The analyses indicated a decrease in the annual runoff for most of the six catchments, with the largest changes occurring for catchments located within communal land, where large increases in population and agricultural intensity have taken place. However, the decrease was only statistically significant at the 5% level for one of the catchments.

  8. Sensitivity Analyses of the Change in FVC in a Phase 3 Trial of Pirfenidone for Idiopathic Pulmonary Fibrosis.

    PubMed

    Lederer, David J; Bradford, Williamson Z; Fagan, Elizabeth A; Glaspole, Ian; Glassberg, Marilyn K; Glasscock, Kenneth F; Kardatzke, David; King, Talmadge E; Lancaster, Lisa H; Nathan, Steven D; Pereira, Carlos A; Sahn, Steven A; Swigris, Jeffrey J; Noble, Paul W

    2015-07-01

    FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov.

  9. Mediation analysis in nursing research: a methodological review.

    PubMed

    Liu, Jianghong; Ulrich, Connie

    2016-12-01

    Mediation statistical models help clarify the relationship between independent predictor variables and dependent outcomes of interest by assessing the impact of third variables. This type of statistical analysis is applicable for many clinical nursing research questions, yet its use within nursing remains low. Indeed, mediational analyses may help nurse researchers develop more effective and accurate prevention and treatment programs as well as help bridge the gap between scientific knowledge and clinical practice. In addition, this statistical approach allows nurse researchers to ask - and answer - more meaningful and nuanced questions that extend beyond merely determining whether an outcome occurs. Therefore, the goal of this paper is to provide a brief tutorial on the use of mediational analyses in clinical nursing research by briefly introducing the technique and, through selected empirical examples from the nursing literature, demonstrating its applicability in advancing nursing science.

  10. Formative assessment in mathematics for engineering students

    NASA Astrophysics Data System (ADS)

    Ní Fhloinn, Eabhnat; Carr, Michael

    2017-07-01

    In this paper, we present a range of formative assessment types for engineering mathematics, including in-class exercises, homework, mock examination questions, table quizzes, presentations, critical analyses of statistical papers, peer-to-peer teaching, online assessments and electronic voting systems. We provide practical tips for the implementation of such assessments, with a particular focus on time or resource constraints and large class sizes, as well as effective methods of feedback. In addition, we consider the benefits of such formative assessments for students and staff.

  11. A probabilistic analysis of electrical equipment vulnerability to carbon fibers

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

    The statistical problems of airborne carbon fibers falling onto electrical circuits were idealized and analyzed. The probability of making contact between randomly oriented finite-length fibers and sets of parallel conductors with various spacings and lengths was developed theoretically. The probability of multiple fibers joining to bridge a single gap between conductors, or forming continuous networks, is included. From these theoretical considerations, practical statistical analyses to assess the likelihood of causing electrical malfunctions were produced. The statistics obtained were confirmed by comparison with results of controlled experiments.
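
    The geometry can be mimicked with a Monte Carlo sketch. The toy model below (idealized line conductors of negligible width, arbitrary dimensions of my own choosing) estimates the probability that a randomly placed and oriented fiber spans two adjacent conductors; it is an illustration of the setup, not the report's theoretical derivation.

```python
import numpy as np

rng = np.random.default_rng(4)

def p_bridge(fiber_len, spacing, n=200_000):
    """Monte Carlo estimate of the probability that a randomly placed,
    randomly oriented fiber contacts two adjacent parallel conductors
    (idealized as lines y = k * spacing)."""
    y0 = rng.uniform(0, spacing, n)            # fiber centre within one period
    theta = rng.uniform(0, np.pi, n)           # random orientation
    half = 0.5 * fiber_len * np.abs(np.sin(theta))
    lo, hi = y0 - half, y0 + half
    # The fiber bridges if its vertical extent covers >= 2 conductor lines.
    n_lines = np.floor(hi / spacing) - np.ceil(lo / spacing) + 1
    return float(np.mean(n_lines >= 2))

print(p_bridge(fiber_len=3.0, spacing=2.0))
```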

  12. A software platform for statistical evaluation of patient respiratory patterns in radiation therapy.

    PubMed

    Dunn, Leon; Kenny, John

    2017-10-01

    The aim of this work was to design and evaluate a software tool for analysis of a patient's respiration, with the goal of optimizing the effectiveness of motion management techniques during radiotherapy imaging and treatment. A software tool was developed to analyse patient respiratory data files (.vxp files) created by the Varian Real-Time Position Management (RPM) system. The software, called RespAnalysis, was created in MATLAB and provides four modules, one each for determining respiration characteristics, providing breathing coaching (biofeedback training), comparing pre- and post-training characteristics, and performing a fraction-by-fraction assessment. The modules analyse respiratory traces to determine signal characteristics and specifically use a Sample Entropy algorithm as the key means to quantify breathing irregularity. Simulated respiratory signals, as well as 91 patient RPM traces, were analysed with RespAnalysis to test the viability of using Sample Entropy for predicting breathing regularity. Retrospective assessment of patient data demonstrated that the Sample Entropy metric was a predictor of periodic irregularity in respiration data; however, it was found to be insensitive to amplitude variation. Additional waveform statistics assessing the distribution of signal amplitudes over time, coupled with the Sample Entropy method, were found to be useful in assessing breathing regularity. The RespAnalysis software tool presented in this work uses the Sample Entropy method to analyse patient respiratory data recorded for motion management purposes in radiation therapy. This is applicable during treatment simulation and during subsequent treatment fractions, providing a way to quantify breathing irregularity, as well as assess the need for breathing coaching. It was demonstrated that the Sample Entropy metric was correlated to the irregularity of the patient's respiratory motion in terms of periodicity, whilst other metrics, such as the percentage deviation of inhale/exhale peak positions, provided insight into respiratory amplitude regularity. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
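
    For readers unfamiliar with the Sample Entropy algorithm at the heart of the tool, a compact Python re-implementation is sketched below (the original software is MATLAB; this is a common simplified formulation, not the authors' code). Lower values indicate more regular signals, consistent with the paper's use of the metric.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample Entropy of a 1-D signal: -ln(A/B), where B and A count
    template matches of length m and m+1 within tolerance r (in units
    of the signal's standard deviation), self-matches excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

rng = np.random.default_rng(5)
t = np.arange(1500) / 25.0                       # ~60 s sampled at 25 Hz
regular = np.sin(2 * np.pi * 0.25 * t)           # steady 15 breaths/min
irregular = regular + rng.normal(0, 0.4, t.size)
print(f"regular:   {sample_entropy(regular):.3f}")
print(f"irregular: {sample_entropy(irregular):.3f}")
```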

  13. Novel public health risk assessment process developed to support syndromic surveillance for the 2012 Olympic and Paralympic Games.

    PubMed

    Smith, Gillian E; Elliot, Alex J; Ibbotson, Sue; Morbey, Roger; Edeghere, Obaghe; Hawker, Jeremy; Catchpole, Mike; Endericks, Tina; Fisher, Paul; McCloskey, Brian

    2017-09-01

    Syndromic surveillance aims to provide early warning and real-time estimates of the extent of incidents, and reassurance about the lack of impact of mass gatherings. We describe a novel public health risk assessment process to ensure those leading the response to the 2012 Olympic Games were alerted to unusual activity that was of potential public health importance, and not inundated with multiple statistical 'alarms'. Statistical alarms were assessed to identify those which needed to result in 'alerts' as reliably as possible. There was no previously developed method for this. We identified factors that increased our concern about an alarm, suggesting that an 'alert' should be made. Between 2 July and 12 September 2012, 350 674 signals were analysed, resulting in 4118 statistical alarms. Using the risk assessment process, 122 'alerts' were communicated to Olympic incident directors. Use of a novel risk assessment process enabled the interpretation of a large number of statistical alarms in a manageable way for the period of a sustained mass gathering. This risk assessment process guided the prioritization of alarms and could be readily adapted to other surveillance systems. The process, which is novel to our knowledge, continues as a legacy of the Games. © Crown copyright 2016.
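
    The alarm stage that feeds such a risk assessment can be as simple as an exceedance test against a recent baseline. The sketch below is a generic, hypothetical aberration detector in the style of the CDC's C2 algorithm, with illustrative thresholds; it is not the actual algorithm used for the Games surveillance.

```python
import numpy as np

def daily_alarms(counts, baseline_days=28, k=3.0):
    """Flag days whose syndromic count exceeds the mean of the preceding
    baseline window by more than k standard deviations (a simplified
    aberration detector; parameters here are illustrative)."""
    counts = np.asarray(counts, dtype=float)
    alarms = []
    for day in range(baseline_days, len(counts)):
        base = counts[day - baseline_days:day]
        mu, sd = base.mean(), base.std(ddof=1)
        if sd > 0 and counts[day] > mu + k * sd:
            alarms.append(day)
    return alarms

rng = np.random.default_rng(6)
series = rng.poisson(20, 90).astype(float)
series[60] += 25                       # injected outbreak signal
print("alarm days:", daily_alarms(series))
```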

  14. Emotional and cognitive effects of peer tutoring among secondary school mathematics students

    NASA Astrophysics Data System (ADS)

    Alegre Ansuategui, Francisco José; Moliner Miravet, Lidón

    2017-11-01

    This paper describes an experience of same-age peer tutoring conducted with 19 eighth-grade mathematics students in a secondary school in Castellon de la Plana (Spain). Three constructs were analysed before and after launching the program: academic performance, mathematics self-concept and attitude of solidarity. Students' perceptions of the method were also analysed. The quantitative data was gathered by means of a mathematics self-concept questionnaire, an attitude of solidarity questionnaire and the students' numerical ratings. A statistical analysis was performed using Student's t-test. The qualitative information was gathered by means of discussion groups and a field diary. This information was analysed using descriptive analysis and by categorizing the information. Results show statistically significant improvements in all the variables and the positive assessment of the experience and the interactions that took place between the students.

  15. P-MartCancer–Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.

    P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html), the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be utilized directly from an R package available on GitHub (https://github.com/pmartR).

  16. Recent evaluations of crack-opening-area in circumferentially cracked pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, S.; Brust, F.; Ghadiali, N.

    1997-04-01

    Leak-before-break (LBB) analyses for circumferentially cracked pipes are currently being conducted in the nuclear industry to justify elimination of pipe whip restraints and jet shields which are present because of the expected dynamic effects from pipe rupture. The application of the LBB methodology frequently requires calculation of leak rates. The leak rates depend on the crack-opening area of the through-wall crack in the pipe. In addition to LBB analyses which assume a hypothetical flaw size, there is also interest in the integrity of actual leaking cracks corresponding to current leakage detection requirements in NRC Regulatory Guide 1.45, or for assessing temporary repair of Class 2 and 3 pipes that have leaks as are being evaluated in ASME Section XI. The objectives of this study were to review, evaluate, and refine current predictive models for performing crack-opening-area analyses of circumferentially cracked pipes. The results from twenty-five full-scale pipe fracture experiments, conducted in the Degraded Piping Program, the International Piping Integrity Research Group Program, and the Short Cracks in Piping and Piping Welds Program, were used to verify the analytical models. Standard statistical analyses were performed to assess quantitatively the accuracy of the predictive models. The evaluation also involved finite element analyses for determining the crack-opening profile often needed to perform leak-rate calculations.

  17. Quantitative cancer risk assessment based on NIOSH and UCC epidemiological data for workers exposed to ethylene oxide.

    PubMed

    Valdez-Flores, Ciriaco; Sielken, Robert L; Teta, M Jane

    2010-04-01

    The most recent epidemiological data on individual workers in the NIOSH and updated UCC occupational studies have been used to characterize the potential excess cancer risks of environmental exposure to ethylene oxide (EO). In addition to refined analyses of the separate cohorts, power has been increased by analyzing the combined cohorts. In previous SMR analyses of the separate studies and the present analyses of the updated and pooled studies of over 19,000 workers, none of the SMRs for any combination of the 12 cancer endpoints and six sub-cohorts analyzed were statistically significantly greater than one, including those of greatest previous interest: leukemia, lymphohematopoietic tissue, lymphoid tumors, NHL, and breast cancer. In our study, no evidence of a positive cumulative exposure-response relationship was found. Fitted Cox proportional hazards models with cumulative EO exposure do not have statistically significant positive slopes. The lack of increasing trends was corroborated by categorical analyses. Cox model estimates of the concentrations corresponding to a 1-in-a-million extra environmental cancer risk are all greater than approximately 1 ppb and are more than 1500-fold greater than the 0.4 ppt estimate in the 2006 EPA draft IRIS risk assessment. The reasons for this difference are identified and discussed. Copyright 2009 Elsevier Inc. All rights reserved.

  18. School Effects on Educational Achievement in Mathematics and Science: 1985-86. National Assessment of Educational Progress. Research and Development Report.

    ERIC Educational Resources Information Center

    Arnold, Carolyn L.; Kaufman, Phillip D.

    This report examines the effects of both student and school characteristics on mathematics and science achievement levels in the third, seventh, and eleventh grades using data from the 1985-86 National Assessment of Educational Progress (NAEP). Analyses feature hierarchical linear models (HLM), a regression-like statistical technique that…

  19. ASSESSING THE IMPACT OF LANDUSE/LANDCOVER ON STREAM CHEMISTRY IN MARYLAND

    EPA Science Inventory

    Spatial and statistical analyses were conducted to investigate the relationships between stream chemistry (nitrate, sulfate, dissolved organic carbon, etc.), habitat and satellite-derived landuse maps for the state of Maryland. Hydrologic Unit Code (HUC) watershed boundaries (8-...

  20. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. Highlights: • A new method of clinker characterization. • Combination of the electron probe technique with cluster analysis. • Simultaneous assessment of phase abundance, composition, and bulk chemistry. • Experimental validation performed on industrial clinkers.

  1. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.

  2. An evaluation of the periapical status of teeth with necrotic pulps using periapical radiography and cone-beam computed tomography.

    PubMed

    Abella, F; Patel, S; Durán-Sindreu, F; Mercadé, M; Bueno, R; Roig, M

    2014-04-01

    To evaluate the presence or absence of periapical (PA) radiolucencies on individual roots of teeth with necrotic pulps, as assessed with digital PA radiographs and cone-beam computed tomography (CBCT). Digital PA radiographs and CBCT scans were taken from 161 endodontically untreated teeth (from 155 patients) diagnosed with non-vital pulps (pulp necrosis with normal PA tissue, symptomatic apical periodontitis, asymptomatic apical periodontitis, acute apical abscess and chronic apical abscess). Images were assessed by two calibrated endodontists to analyse the radiographic PA status of the teeth. A consensus was reached in the event of any disagreement. The data were analysed using McNemar's test, and significance was set at P ≤ 0.05. Three hundred and forty paired images of roots were assessed with both digital PA radiographs and CBCT images. Fifteen additional roots were identified with CBCT. PA radiolucencies were present in 132 (38.8%) roots when assessed with PA radiographs, and in 196 (57.6%) roots when assessed with CBCT. This difference was statistically significant (P < 0.05). In teeth diagnosed with pulp necrosis, symptomatic apical periodontitis or acute apical abscess, CBCT images revealed a statistically significantly larger number of PA radiolucencies than did PA radiographs (P < 0.05). No statistical differences were observed between PA radiographs and CBCT in teeth classified with asymptomatic apical periodontitis (P = 0.31) or chronic apical abscess (P = 1). Compared with PA radiographs, CBCT revealed a higher prevalence of PA radiolucencies when endodontically untreated teeth with non-vital pulps were examined. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  3. Performance of Between-Study Heterogeneity Measures in the Cochrane Library.

    PubMed

    Ma, Xiaoyue; Lin, Lifeng; Qu, Zhiyong; Zhu, Motao; Chu, Haitao

    2018-05-29

    The growth in comparative effectiveness research and evidence-based medicine has increased attention to systematic reviews and meta-analyses. Meta-analysis synthesizes and contrasts evidence from multiple independent studies to improve statistical efficiency and reduce bias. Assessing heterogeneity is critical for performing a meta-analysis and interpreting results. As a widely used heterogeneity measure, the I² statistic quantifies the proportion of total variation across studies that is due to real differences in effect size. The presence of outlying studies can seriously exaggerate the I² statistic. Two alternative heterogeneity measures, the Ir² and Im², have been recently proposed to reduce the impact of outlying studies. To evaluate these measures' performance empirically, we applied them to 20,599 meta-analyses in the Cochrane Library. We found that the Ir² and Im² have strong agreement with the I², while they are more robust than the I² when outlying studies appear.
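
    As a reminder of the underlying computation, I² is derived from Cochran's Q. The sketch below, with hypothetical effect sizes, shows how a single outlying study can inflate I² (the robust Ir² and Im² variants evaluated in the paper are not reproduced here).

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q and the I^2 statistic (as a percentage)."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v
    pooled = np.sum(w * y) / np.sum(w)
    q = float(np.sum(w * (y - pooled) ** 2))
    df = len(y) - 1
    i2 = 0.0 if q == 0 else max(0.0, (q - df) / q) * 100.0
    return q, i2

y = [0.10, 0.12, 0.15, 0.11, 0.95]   # the last study is an outlier
v = [0.01] * 5
q, i2 = q_and_i2(y, v)
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")   # the single outlier inflates I^2
```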

  4. Mediation analysis in nursing research: a methodological review

    PubMed Central

    Liu, Jianghong; Ulrich, Connie

    2017-01-01

    Mediation statistical models help clarify the relationship between independent predictor variables and dependent outcomes of interest by assessing the impact of third variables. This type of statistical analysis is applicable for many clinical nursing research questions, yet its use within nursing remains low. Indeed, mediational analyses may help nurse researchers develop more effective and accurate prevention and treatment programs as well as help bridge the gap between scientific knowledge and clinical practice. In addition, this statistical approach allows nurse researchers to ask – and answer – more meaningful and nuanced questions that extend beyond merely determining whether an outcome occurs. Therefore, the goal of this paper is to provide a brief tutorial on the use of mediational analyses in clinical nursing research by briefly introducing the technique and, through selected empirical examples from the nursing literature, demonstrating its applicability in advancing nursing science. PMID:26176804

  5. Statistical Model of Dynamic Markers of the Alzheimer's Pathological Cascade.

    PubMed

    Balsis, Steve; Geraci, Lisa; Benge, Jared; Lowe, Deborah A; Choudhury, Tabina K; Tirso, Robert; Doody, Rachelle S

    2018-05-05

    Alzheimer's disease (AD) is a progressive disease reflected in markers across assessment modalities, including neuroimaging, cognitive testing, and evaluation of adaptive function. Identifying a single continuum of decline across assessment modalities in a single sample is statistically challenging because of the multivariate nature of the data. To address this challenge, we implemented advanced statistical analyses designed specifically to model complex data across a single continuum. We analyzed data from the Alzheimer's Disease Neuroimaging Initiative (ADNI; N = 1,056), focusing on indicators from the assessments of magnetic resonance imaging (MRI) volume, fluorodeoxyglucose positron emission tomography (FDG-PET) metabolic activity, cognitive performance, and adaptive function. Item response theory was used to identify the continuum of decline. Then, through a process of statistical scaling, indicators across all modalities were linked to that continuum and analyzed. Findings revealed that measures of MRI volume, FDG-PET metabolic activity, and adaptive function added measurement precision beyond that provided by cognitive measures, particularly in the relatively mild range of disease severity. More specifically, MRI volume and FDG-PET metabolic activity become compromised in the very mild range of severity, followed by cognitive performance and finally adaptive function. Our statistically derived models of the AD pathological cascade are consistent with existing theoretical models.

  6. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms

    PubMed Central

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-01-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in field testing. PMID:24567836

  7. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    PubMed

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed for the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches such as analysis of variance (ANOVA) are appropriate, for discontinuous data (counts) only generalized linear models (GLMs) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation; in particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. The combination of decision trees and a checklist for field trials, which are provided, will help in interpreting the statistical analyses of field trials and in assessing whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will help in both the setting up of field testing and the interpretation and analysis of the data obtained in such testing.
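
    The recommendation to use GLMs for count data can be made concrete with a Poisson regression that includes the block as a factor. A hedged sketch using statsmodels; the simulated data and column names (count, treatment, block) are invented for illustration, and a negative binomial family would be the natural next step if the counts are overdispersed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated NTO counts from a randomized block design (hypothetical data).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "treatment": np.tile(["GM", "control"], 40),
    "block": np.repeat([f"b{i}" for i in range(8)], 10),
})
df["count"] = rng.poisson(lam=np.where(df["treatment"] == "GM", 4.0, 5.0))

# Poisson GLM with block as a covariate, rather than ANOVA on raw counts.
fit = smf.glm("count ~ treatment + block", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
```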

  8. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.

  9. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160
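
    The core of the automated procedure (fit candidate VAR models, rank them by information criteria, test Granger causality) can be sketched with statsmodels; AutoVAR itself is a separate web application, and the simulated diary data and variable names below are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated EMA diary: two daily variables where 'activity' leads 'mood'.
rng = np.random.default_rng(7)
n = 90
activity = rng.normal(size=n)
mood = np.r_[0.0, 0.6 * activity[:-1]] + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"activity": activity, "mood": mood})

model = VAR(data)
print(model.select_order(maxlags=5).summary())   # AIC/BIC/HQIC per lag order

res = model.fit(maxlags=5, ic="aic")             # best model by AIC
print(res.test_causality("mood", ["activity"], kind="f").summary())
```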

  10. Night shift work and breast cancer risk: what do the meta-analyses tell us?

    PubMed

    Pahwa, Manisha; Labrèche, France; Demers, Paul A

    2018-05-22

    Objectives This paper aims to compare results, assess the quality, and discuss the implications of recently published meta-analyses of night shift work and breast cancer risk. Methods A comprehensive search was conducted for meta-analyses published from 2007 to 2017 that included at least one pooled effect size (ES) for breast cancer associated with any night shift work exposure metric and were accompanied by a systematic literature review. Pooled ES from each meta-analysis were ascertained with a focus on ever/never exposure associations. Assessments of heterogeneity and publication bias were also extracted. The AMSTAR 2 checklist was used to evaluate quality. Results Seven meta-analyses, published from 2013 to 2016, collectively included 30 cohort and case-control studies spanning 1996-2016. Five meta-analyses reported pooled ES for ever/never night shift work exposure; these ranged from 0.99 (95% confidence interval [CI] 0.95-1.03, N=10 cohort studies) to 1.40 (95% CI 1.13-1.73, N=9 high-quality studies). Estimates for duration, frequency, and cumulative night shift work exposure were scant and mostly not statistically significant. Meta-analyses of cohort, Asian, and more fully adjusted studies generally resulted in lower pooled ES than case-control, European, American, or minimally adjusted studies. Most reported statistically significant between-study heterogeneity. Publication bias was not evident in any of the meta-analyses. Only one meta-analysis was strong in critical quality domains. Conclusions Fairly consistent elevated pooled ES were found for ever/never night shift work and breast cancer risk, but results for other shift work exposure metrics were inconclusive. Future evaluations of shift work should incorporate high-quality meta-analyses that better appraise individual study quality.
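
    Pooled ever/never effect sizes of this kind are typically produced by a random-effects model. A minimal DerSimonian-Laird sketch on log relative risks shows the mechanics; the study values below are invented, not taken from the reviewed meta-analyses.

```python
import numpy as np

def dersimonian_laird(es, se):
    """Random-effects pooling of effect sizes (log scale) with standard errors."""
    es, se = np.asarray(es), np.asarray(se)
    w = 1.0 / se**2                          # fixed-effect weights
    q = np.sum(w * (es - np.average(es, weights=w))**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c) # between-study variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.average(es, weights=w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se_pooled, tau2

# Hypothetical log relative risks from five studies.
log_rr = np.log([1.10, 0.95, 1.30, 1.05, 1.20])
se = np.array([0.08, 0.05, 0.15, 0.07, 0.12])
pooled, se_p, tau2 = dersimonian_laird(log_rr, se)
lo, hi = np.exp(pooled - 1.96 * se_p), np.exp(pooled + 1.96 * se_p)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.4f}")
```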

  11. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

    The aim of the study was to assess the quality of a computer-assisted sperm analysis (CASA) system in comparison with the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value of less than 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients in which all analyses were performed twice on the same sample, we found no significant differences between the two assessments of the same sample, whether analyzed manually or with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for wider application in clinical practice.

  12. Probabilistic dietary exposure assessment taking into account variability in both amount and frequency of consumption.

    PubMed

    Slob, Wout

    2006-07-01

    Probabilistic dietary exposure assessments that are fully based on Monte Carlo sampling from the raw intake data may not be appropriate. This paper shows that the data should first be analysed by using a statistical model that is able to take the various dimensions of food consumption patterns into account. A (parametric) model is discussed that takes into account the interindividual variation in (daily) consumption frequencies, as well as in amounts consumed. Further, the model can be used to include covariates, such as age, sex, or other individual attributes. Some illustrative examples show how this model may be used to estimate the probability of exceeding an (acute or chronic) exposure limit. These results are compared with the results based on directly counting the fraction of observed intakes exceeding the limit value. This comparison shows that the latter method is not adequate, in particular for the acute exposure situation. A two-step approach for probabilistic (acute) exposure assessment is proposed: first analyse the consumption data by a (parametric) statistical model as discussed in this paper, and then use Monte Carlo techniques for combining the variation in concentrations with the variation in consumption (by sampling from the statistical model). This approach results in an estimate of the fraction of the population as a function of the fraction of days at which the exposure limit is exceeded by the individual.
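
    The proposed two-step approach can be sketched as follows: first posit a parametric model for interindividual variation in consumption frequency and amount, then combine it with concentration variation by Monte Carlo sampling. All distributions and parameter values below are invented placeholders rather than the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_individuals, n_days = 10_000, 365
limit = 50.0                                  # exposure limit (ug/day), hypothetical

# Step 1: parametric consumption model with interindividual variation.
freq = rng.beta(2, 8, size=n_individuals)                          # daily consumption probability
mean_amount = rng.lognormal(np.log(100), 0.4, size=n_individuals)  # g/day when consuming

# Step 2: Monte Carlo over days, combining consumption with concentration variation.
exceed_days = np.zeros(n_individuals)
for _ in range(n_days):
    consumes = rng.random(n_individuals) < freq
    amount = rng.lognormal(np.log(mean_amount), 0.3) * consumes
    conc = rng.lognormal(np.log(0.4), 0.5, size=n_individuals)     # ug per g
    exceed_days += (amount * conc > limit)

# Fraction of the population exceeding the limit on more than a given fraction of days.
frac_days = exceed_days / n_days
for f in (0.01, 0.05, 0.10):
    print(f"exceed on >{f:.0%} of days: {np.mean(frac_days > f):.3f} of population")
```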

  13. Sensitivity Analyses of the Change in FVC in a Phase 3 Trial of Pirfenidone for Idiopathic Pulmonary Fibrosis

    PubMed Central

    Bradford, Williamson Z.; Fagan, Elizabeth A.; Glaspole, Ian; Glassberg, Marilyn K.; Glasscock, Kenneth F.; King, Talmadge E.; Lancaster, Lisa H.; Nathan, Steven D.; Pereira, Carlos A.; Sahn, Steven A.; Swigris, Jeffrey J.; Noble, Paul W.

    2015-01-01

    BACKGROUND: FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. METHODS: Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. RESULTS: The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. CONCLUSIONS: Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. TRIAL REGISTRY: ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov PMID:25856121

  14. Technical Report of the NAEP Mathematics Assessment in Puerto Rico: Focus on Statistical Issues (NCES 2007-462rev)

    ERIC Educational Resources Information Center

    Baxter, G. P.; Ahmed, S.; Sikali, E.; Waits, T.; Sloan, M.; Salvucci, S.

    2007-01-01

    The Nation's Report Card[TM] informs the public about the academic achievement of elementary and secondary students in the United States and its jurisdictions, including Puerto Rico. In 2003, a trial NAEP mathematics assessment was administered in Spanish to public school students at grades 4 and 8 in Puerto Rico. Based on preliminary analyses of…

  15. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
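
    The binomial analysis amounts to asking, family by family, whether the count of medicinal species is surprising given the flora-wide proportion. A short sketch with scipy; the family counts are invented for illustration.

```python
from scipy.stats import binomtest

# (family, species in flora, medicinal species) -- hypothetical counts.
flora = [("Asteraceae", 320, 45), ("Piperaceae", 150, 38), ("Poaceae", 280, 12)]
total_species = sum(n for _, n, _ in flora)
total_medicinal = sum(k for _, _, k in flora)
p0 = total_medicinal / total_species     # null: equal proportions in all families

for family, n, k in flora:
    res = binomtest(k, n, p0)            # exact two-sided binomial test
    direction = "over" if k / n > p0 else "under"
    print(f"{family:12s} {k}/{n} medicinal, p = {res.pvalue:.4f} ({direction}-represented)")
```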

  16. Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study

    PubMed Central

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-01-01

    Objectives To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births. To explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305

  17. Accounting for multiple births in neonatal and perinatal trials: systematic review and case study.

    PubMed

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-02-01

    To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births and to explore the sensitivity of an actual trial to several analytic approaches to multiples. A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The Nitric Oxide to Prevent Chronic Lung Disease (NO CLD) trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using nonclustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. In the systematic review, most studies did not describe the random assignment of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (P < .01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. Copyright 2010 Mosby, Inc. All rights reserved.
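
    The contrast between a naive analysis and one accounting for clustering can be sketched with statsmodels, fitting an ordinary logistic regression and then a GEE with exchangeable correlation within sibling sets; the simulated data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical trial data: sibling_id groups infants from the same birth.
rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "sibling_id": np.repeat(np.arange(n // 2), 2),  # twin pairs share a cluster
})
df["outcome"] = rng.binomial(1, 0.3 + 0.1 * df["treated"])

# Naive analysis: assumes all infants are independent observations.
naive = sm.Logit(df["outcome"], sm.add_constant(df["treated"])).fit(disp=0)

# Clustered analysis: GEE with exchangeable correlation within sibling sets.
gee = sm.GEE.from_formula("outcome ~ treated", groups="sibling_id", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
print("naive SE:", float(naive.bse.iloc[1]))
print("GEE SE:  ", float(gee.bse["treated"]))
```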

  18. Critical Thinking in the Business Classroom

    ERIC Educational Resources Information Center

    Reid, Joanne R.; Anderson, Phyllis R.

    2012-01-01

    A minicourse in critical thinking was implemented to improve student outcomes in two sessions of a senior-level business course at a Midwestern university. Statistical analyses of two quantitative assessments revealed significant improvements in critical thinking skills. Improvements in student outcomes in case studies and computerized business…

  19. Cognitive and Behavioral Treatments of Agoraphobia: Clinical, Behavioral, and Psychophysiological Outcomes.

    ERIC Educational Resources Information Center

    Michelson, Larry; And Others

    1985-01-01

    Agoraphobics (N=37) were randomly assigned to one of three cognitive-behavioral treatments: paradoxical intention, graduated exposure, or progressive deep muscle relaxation training. Results of follow-up analyses revealed statistically significant differences across treatments, tripartite response systems, and assessment phases. (Author/BL)

  20. Statistics for Radiology Research.

    PubMed

    Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua

    2017-02-01

    Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.

  1. Subjective global assessment of nutritional status in children.

    PubMed

    Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool

    2010-10-01

    This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, power, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. The accuracy, positive power and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
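
    All of the reported agreement and screening statistics derive from a single 2x2 table of SGA versus the objective classification. A short sketch; the table counts below are invented, not those of the study.

```python
import numpy as np

# 2x2 table: rows = SGA (undernourished / well-nourished), columns = objective (+ / -).
table = np.array([[60, 39],    # SGA undernourished (hypothetical counts)
                  [ 8, 33]])   # SGA well-nourished

tp, fp = table[0]
fn, tn = table[1]
n = table.sum()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)

# Cohen's kappa: observed agreement corrected for chance agreement.
po = (tp + tn) / n
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (po - pe) / (1 - pe)

print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}, "
      f"PPV {ppv:.3f}, NPV {npv:.3f}, kappa {kappa:.3f}")
```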

  2. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  3. Applications of Stochastic Analyses for Collaborative Learning and Cognitive Assessment

    DTIC Science & Technology

    2007-04-01

    models (Visser, Maartje, Raijmakers, & Molenaar, 2002). The second part of this paper illustrates two applications of the methods described in the... clustering three-way data sets. Computational Statistics and Data Analysis, 51(11), 5368–5376. Visser, I., Maartje, E., Raijmakers, E. J., & Molenaar

  4. Assessment and evaluations of I-80 truck loads and their load effects : final report.

    DOT National Transportation Integrated Search

    2016-12-01

    The research objective is to examine the safety of Wyoming bridges on the I-80 corridor considering the actual truck traffic on the interstate based upon weigh-in-motion (WIM) data. This was accomplished by performing statistical analyses to determ...

  5. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
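
    The online stopping rule (halt when the variance of a performance indicator over recent generations falls below a threshold, or when its overall trend stagnates) can be sketched generically; the indicator stream below is a synthetic stand-in for hypervolume or a similar set-based measure, and the tolerances are arbitrary.

```python
import numpy as np

def online_convergence(indicator_stream, window=10, var_tol=1e-6, trend_tol=1e-4):
    """Consume per-generation indicator values; return the stopping generation.

    Stops when, over the last `window` generations, the indicator variance
    drops below var_tol or the fitted linear trend is flatter than trend_tol.
    """
    history = []
    for gen, value in enumerate(indicator_stream):
        history.append(value)
        if len(history) >= window:
            recent = np.array(history[-window:])
            slope = np.polyfit(np.arange(window), recent, 1)[0]
            if recent.var() < var_tol or abs(slope) < trend_tol:
                return gen
    return None

# Synthetic indicator: rapid improvement, then stagnation plus noise.
rng = np.random.default_rng(0)
stream = 1.0 - np.exp(-0.15 * np.arange(200)) + rng.normal(0, 1e-4, 200)
print("stop at generation:", online_convergence(stream))
```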

  6. Visualizing statistical significance of disease clusters using cartograms.

    PubMed

    Kronenfeld, Barry J; Wong, David W S

    2017-05-15

    Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, no guidelines exist for the visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and nonsignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of the statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.

  7. The Relationships between the Iowa Test of Basic Skills and the Washington Assessment of Student Learning in the State of Washington. Technical Report.

    ERIC Educational Resources Information Center

    Joireman, Jeff; Abbott, Martin L.

    This report examines the overlap between student test results on the Iowa Test of Basic Skills (ITBS) and the Washington Assessment of Student Learning (WASL). The two tests were compared and contrasted in terms of content and measurement philosophy, and analyses studied the statistical relationship between the ITBS and the WASL. The ITBS assesses…

  8. Statistical Analyses for Probabilistic Assessments of the Reactor Pressure Vessel Structural Integrity: Building a Master Curve on an Extract of the 'Euro' Fracture Toughness Dataset, Controlling Statistical Uncertainty for Both Mono-Temperature and Multi-Temperature Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick

    2006-07-01

    Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCC-M and ASME Code lower-bound K_IC based on the indexing parameter RT_NDT. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K_JC data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. Therefore we focused exclusively on the base metal of French reactor vessels, of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T_0. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty, and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
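
    For reference, the master curve postulated by Wallin and standardized in ASTM E1921 describes the median fracture toughness of 1T specimens as a fixed function of the distance from the reference temperature, so estimating T_0 amounts to locating this curve on the temperature axis:

```latex
% Median fracture toughness along the master curve (ASTM E1921 form):
\[
  K_{Jc(\mathrm{med})}(T) \;=\; 30 \;+\; 70\,\exp\!\bigl[0.019\,(T - T_0)\bigr]
  \qquad [\mathrm{MPa\,\sqrt{m}}]
\]
% T_0 is the temperature at which the median toughness of a 1T specimen
% equals 100 MPa*sqrt(m), since 30 + 70 = 100 at T = T_0.
```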

  9. Analysis and interpretation of cost data in randomised controlled trials: review of published studies

    PubMed Central

    Barber, Julie A; Thompson, Simon G

    1998-01-01

    Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice.
    Key messages:
    - Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials.
    - A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted.
    - Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses.
    - In at least two thirds of the papers, the main conclusions regarding costs were not justified.
    - The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving.
    PMID:9794854

  10. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic.

    PubMed

    Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno

    2016-01-22

    Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, the diagnosis of such an imbalance is essential so that the statistical analysis can be adjusted if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small as compared with the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for preselection of the covariates to be included in the PS model to enhance imbalance detection. The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
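
    The diagnostic reduces to fitting a PS model for arm membership on baseline covariates and reading off its c-statistic (the ROC AUC): values near 0.5 indicate balance, values well above 0.5 indicate imbalance. A sketch with scikit-learn on simulated data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 5))                               # baseline covariates
arm = rng.binomial(1, 1 / (1 + np.exp(-0.4 * X[:, 0])))   # covariate 0 is unbalanced

# Propensity score model: probability of arm membership given covariates.
ps_model = LogisticRegression(max_iter=1000).fit(X, arm)
ps = ps_model.predict_proba(X)[:, 1]

# c-statistic of the PS model; ~0.5 means covariates carry no arm information.
print(f"c-statistic = {roc_auc_score(arm, ps):.3f}")
```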

  11. A Meta-Analysis and Multisite Time-Series Analysis of the Differential Toxicity of Major Fine Particulate Matter Constituents

    PubMed Central

    Levy, Jonathan I.; Diez, David; Dou, Yiping; Barr, Christopher D.; Dominici, Francesca

    2012-01-01

    Health risk assessments of particulate matter less than 2.5 μm in diameter (PM2.5) often assume that all constituents of PM2.5 are equally toxic. While investigators in previous epidemiologic studies have evaluated health risks from various PM2.5 constituents, few have conducted the analyses needed to directly inform risk assessments. In this study, the authors performed a literature review and conducted a multisite time-series analysis of hospital admissions and exposure to PM2.5 constituents (elemental carbon, organic carbon matter, sulfate, and nitrate) in a population of 12 million US Medicare enrollees for the period 2000–2008. The literature review illustrated a general lack of multiconstituent models or insight about probabilities of differential impacts per unit of concentration change. Consistent with previous results, the multisite time-series analysis found statistically significant associations between short-term changes in elemental carbon and cardiovascular hospital admissions. Posterior probabilities from multiconstituent models provided evidence that some individual constituents were more toxic than others, and posterior parameter estimates coupled with correlations among these estimates provided necessary information for risk assessment. Ratios of constituent toxicities, commonly used in risk assessment to describe differential toxicity, were extremely uncertain for all comparisons. These analyses emphasize the subtlety of the statistical techniques and epidemiologic studies necessary to inform risk assessments of particle constituents. PMID:22510275

  12. Identifying and characterizing hepatitis C virus hotspots in Massachusetts: a spatial epidemiological approach.

    PubMed

    Stopka, Thomas J; Goulart, Michael A; Meyers, David J; Hutcheson, Marga; Barton, Kerri; Onofrey, Shauna; Church, Daniel; Donahue, Ashley; Chui, Kenneth K H

    2017-04-20

    Hepatitis C virus (HCV) infections have increased during the past decade but little is known about geographic clustering patterns. We used a unique analytical approach, combining geographic information systems (GIS), spatial epidemiology, and statistical modeling to identify and characterize HCV hotspots, statistically significant clusters of census tracts with elevated HCV counts and rates. We compiled sociodemographic and HCV surveillance data (n = 99,780 cases) for Massachusetts census tracts (n = 1464) from 2002 to 2013. We used a five-step spatial epidemiological approach, calculating incremental spatial autocorrelations and Getis-Ord Gi* statistics to identify clusters. We conducted logistic regression analyses to determine factors associated with the HCV hotspots. We identified nine HCV clusters, with the largest in Boston, New Bedford/Fall River, Worcester, and Springfield (p < 0.05). In multivariable analyses, we found that HCV hotspots were independently and positively associated with the percent of the population that was Hispanic (adjusted odds ratio [AOR]: 1.07; 95% confidence interval [CI]: 1.04, 1.09) and the percent of households receiving food stamps (AOR: 1.83; 95% CI: 1.22, 2.74). HCV hotspots were independently and negatively associated with the percent of the population that were high school graduates or higher (AOR: 0.91; 95% CI: 0.89, 0.93) and the percent of the population in the "other" race/ethnicity category (AOR: 0.88; 95% CI: 0.85, 0.91). We identified locations where HCV clusters were a concern, and where enhanced HCV prevention, treatment, and care can help combat the HCV epidemic in Massachusetts. GIS, spatial epidemiological and statistical analyses provided a rigorous approach to identify hotspot clusters of disease, which can inform public health policy and intervention targeting. Further studies that incorporate spatiotemporal cluster analyses, Bayesian spatial and geostatistical models, spatially weighted regression analyses, and assessment of associations between HCV clustering and the built environment are needed to expand upon our combined spatial epidemiological and statistical methods.
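
    The hotspot step rests on the Getis-Ord Gi* statistic, a z-score comparing the spatially weighted sum of counts around each tract with its expectation under spatial randomness. A compact numpy version for a binary contiguity matrix with self-weights follows (the toy data are invented; a dedicated spatial library such as PySAL would be used in practice):

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Gi* z-scores for values x and spatial weights w (self-weights included)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar, s = x.mean(), x.std()          # global mean and std (population form)
    wx = w @ x                           # weighted local sums
    w_sum = w.sum(axis=1)
    w_sq = (w**2).sum(axis=1)
    num = wx - xbar * w_sum
    den = s * np.sqrt((n * w_sq - w_sum**2) / (n - 1))
    return num / den

# Toy example: 5 areas on a line, contiguity weights with self-neighbours.
w = np.eye(5) + np.eye(5, k=1) + np.eye(5, k=-1)
counts = np.array([2.0, 3.0, 20.0, 22.0, 1.0])    # hypothetical case counts
print(np.round(getis_ord_gi_star(counts, w), 2))  # large positive z => hotspot
```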

  13. Evaluation and Assessment of a Biomechanics Computer-Aided Instruction.

    ERIC Educational Resources Information Center

    Washington, N.; Parnianpour, M.; Fraser, J. M.

    1999-01-01

    Describes the Biomechanics Tutorial, a computer-aided instructional tool that was developed at Ohio State University to expedite the transition from lecture to application for undergraduate students. Reports evaluation results that used statistical analyses and student questionnaires to show improved performance on posttests as well as positive…

  14. Judgmental and Statistical DIF Analyses of the PISA-2003 Mathematics Literacy Items

    ERIC Educational Resources Information Center

    Yildirim, Huseyin Husnu; Berberoglu, Giray

    2009-01-01

    Comparisons of human characteristics across different language groups and cultures become more important in today's educational assessment practices as evidenced by the increasing interest in international comparative studies. Within this context, the fairness of the results across different language and cultural groups draws the attention of…

  15. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  16. Misclassification bias in areal estimates

    Treesearch

    Raymond L. Czaplewski

    1992-01-01

    In addition to thematic maps, remote sensing provides estimates of area in different thematic categories. Areal estimates are frequently used for resource inventories, management planning, and assessment analyses. Misclassification causes bias in these statistical areal estimates. For example, if a small percentage of a common cover type is misclassified as a rare...
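
    The mechanism is easy to reproduce: applying the classifier's confusion probabilities to the true areas skews pixel-count totals through leakage between classes, and, if the error matrix is known from reference data, the distortion can be inverted. A small sketch with an invented two-class example:

```python
import numpy as np

# P[i, j] = probability that a pixel whose true class is j is mapped as class i.
P = np.array([[0.95, 0.30],    # 30% of the rare class is mapped as common
              [0.05, 0.70]])   # 5% of the common class is mapped as rare

true_area = np.array([900.0, 100.0])     # hectares, hypothetical
mapped_area = P @ true_area              # what naive pixel counting reports
print("mapped   :", mapped_area)         # [885, 115]: both classes biased

# Inverse calibration: recover the areas from the mapped totals, assuming
# the error matrix is known without error (it rarely is in practice).
corrected = np.linalg.solve(P, mapped_area)
print("corrected:", corrected)           # [900, 100]
```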

  17. Assessing groundwater vulnerability to agrichemical contamination in the Midwest US

    USGS Publications Warehouse

    Burkart, M.R.; Kolpin, D.W.; James, D.E.

    1999-01-01

    Agrichemicals (herbicides and nitrate) are significant sources of diffuse pollution to groundwater. Indirect methods are needed to assess the potential for groundwater contamination by diffuse sources because groundwater monitoring is too costly to adequately define the geographic extent of contamination at a regional or national scale. This paper presents examples of the application of statistical, overlay and index, and process-based modeling methods for groundwater vulnerability assessments to a variety of data from the Midwest U.S. The principles for vulnerability assessment include both intrinsic (pedologic, climatologic, and hydrogeologic factors) and specific (contaminant and other anthropogenic factors) vulnerability of a location. Statistical methods use the frequency of contaminant occurrence, contaminant concentration, or contamination probability as a response variable. Statistical assessments are useful for defining the relations among explanatory and response variables whether they define intrinsic or specific vulnerability. Multivariate statistical analyses are useful for ranking variables critical to estimating water quality responses of interest. Overlay and index methods involve intersecting maps of intrinsic and specific vulnerability properties and indexing the variables by applying appropriate weights. Deterministic models use process-based equations to simulate contaminant transport and are distinguished from the other methods in their potential to predict contaminant transport in both space and time. An example of a one-dimensional leaching model linked to a geographic information system (GIS) to define a regional metamodel for contamination in the Midwest is included.

  18. Randomized trials are frequently fragmented in multiple secondary publications.

    PubMed

    Ebrahim, Shanil; Montoya, Luis; Kamal El Din, Mostafa; Sohani, Zahra N; Agarwal, Arnav; Bance, Sheena; Saquib, Juliann; Saquib, Nazmus; Ioannidis, John P A

    2016-11-01

    To assess the frequency and features of secondary publications of randomized controlled trials (RCTs). For 191 RCTs published in high-impact journals in 2009, we searched for secondary publications coauthored by at least one author of the primary trial publication. We evaluated the probability of having secondary publications, characteristics of the primary trial publication that predict having secondary publications, types of secondary analyses conducted, and the statistical significance of those analyses. Of 191 primary trials, 88 (46%) had a total of 475 secondary publications by February 2014. Eight trials had >10 (up to 51) secondary publications each. In multivariable modeling, the risk of having subsequent secondary publications increased 1.32-fold (95% CI 1.05-1.68) per 10-fold increase in sample size, and 1.71-fold (95% CI 1.19-2.45) in the presence of a design article. In a sample of 197 secondary publications examined in depth, 193 tested different hypotheses than the primary publication. Of the 193, 43 tested differences between subgroups, 85 assessed predictive factors associated with an outcome of interest, 118 evaluated different outcomes than the original article, 71 had differences in eligibility criteria, and 21 assessed different durations of follow-up; 176 (91%) presented at least one analysis with statistically significant results. Approximately half of randomized trials in high-impact journals have secondary publications, with a few trials followed by numerous secondary publications. Almost all of these publications report some statistically significant results. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    PubMed

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  20. Incorporating GIS building data and census housing statistics for sub-block-level population estimation

    USGS Publications Warehouse

    Wu, S.-S.; Wang, L.; Qiu, X.

    2008-01-01

    This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
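
    In its simplest form, the deterministic model distributes each block's census population across sub-block zones in proportion to residential building volume (the housing-statistics adjustments described in the article are omitted from this sketch):

```python
import numpy as np

def subblock_population(block_pop, volumes):
    """Allocate a block's population to sub-areas proportional to building volume."""
    volumes = np.asarray(volumes, dtype=float)
    return block_pop * volumes / volumes.sum()

# One census block of 250 people split into three sub-areas (volumes in m^3).
print(subblock_population(250, [12_000, 5_000, 8_000]))
# -> [120.  50.  80.]
```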

  1. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of the data used were not performed. However, computing the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.

  2. Analysis and meta-analysis of single-case designs: an introduction.

    PubMed

    Shadish, William R

    2014-04-01

    The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  3. Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions.

    PubMed

    Coronado-Montoya, Stephanie; Levis, Alexander W; Kwakkenbos, Linda; Steele, Russell J; Turner, Erick H; Thombs, Brett D

    2016-01-01

    A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power. The objective of the present study was to characterize the reporting of "positive" results in randomized controlled trials of mindfulness-based therapy. We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that might be expected if mindfulness-based therapy were similarly effective compared to individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS were also searched for mindfulness-based therapy systematic reviews and meta-analyses. Of 124 published trials, 108 (87%) reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective, 1.6 times the number of positive trials expected under an assumed effect size of d = 0.55 (65.7 expected positive trials). Of 21 trial registrations, 13 (62%) remained unpublished 30 months after trial completion. No trial registration adequately specified a single primary outcome measure with its time of assessment. None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases. The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.
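
    The expected number of positive trials is obtained by summing each trial's statistical power under the assumed effect size. A sketch with statsmodels; the per-arm sample sizes below are randomly generated placeholders, not those of the 124 reviewed trials:

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(11)
n_per_arm = rng.integers(15, 120, size=124)   # hypothetical per-arm sample sizes

# Power of a two-sided two-sample t-test at alpha = 0.05 for each trial.
power = TTestIndPower()
powers = [power.power(effect_size=0.55, nobs1=int(n), alpha=0.05,
                      ratio=1.0, alternative="two-sided") for n in n_per_arm]

expected_positive = sum(powers)     # expected count if d = 0.55 everywhere
print(f"expected positive trials: {expected_positive:.1f} of {len(powers)}")
```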

  4. Psychometric properties of the Danish student well-being questionnaire assessed in >250,000 student responders.

    PubMed

    Niclasen, Janni; Keilow, Maria; Obel, Carsten

    2018-05-01

    Well-being is considered a prerequisite for learning. The Danish Ministry of Education initiated the development of a new 40-item student well-being questionnaire in 2014 to monitor well-being among all Danish public school students on a yearly basis. The aim of this study was to investigate the basic psychometric properties of this questionnaire. We used the data from the 2015 Danish student well-being survey for 268,357 students in grades 4-9 (about 85% of the study population). Descriptive statistics, exploratory factor analyses, confirmatory factor analyses and Cronbach's α reliability measures were used in the analyses. The factor analyses did not unambiguously support one particular factor structure. However, based on the basic descriptive statistics, exploratory factor analyses, confirmatory factor analyses, the semantics of the individual items and Cronbach's α, we propose a four-factor structure including 27 of the 40 items originally proposed. The four scales measure school connectedness, learning self-efficacy, learning environment and classroom management. Two bullying items and two psychosomatic items should be considered separately, leaving 31 items in the questionnaire. The proposed four-factor structure addresses central aspects of well-being, which, if used constructively, may support public schools' work to increase levels of student well-being.
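
    Of the psychometrics reported, Cronbach's α is the simplest to reproduce: it compares the sum of item variances with the variance of the scale total. A minimal implementation on simulated responses for one hypothetical five-item scale:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 5-item scale on a 1-5 response format driven by one latent factor.
rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 1))
responses = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (500, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```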

  5. Assessing the effects of habitat patches ensuring propagule supply and different costs inclusion in marine spatial planning through multivariate analyses.

    PubMed

    Appolloni, L; Sandulli, R; Vetrano, G; Russo, G F

    2018-05-15

    Marine Protected Areas are considered key tools for the conservation of coastal ecosystems. However, many reserves suffer from inadequate zoning that fails to protect areas of high biodiversity and propagule supply while, at the same time, precluding zones of economic importance to local communities. The Gulf of Naples is used here as a study area to assess the effects of including different conservation features and costs in the reserve design process. In particular, eight scenarios are developed, using graph theory to identify propagule source patches and treating fishing and exploitation activities as costs-in-use for the local population. Scenarios elaborated with MARXAN, software commonly used for marine conservation planning, are compared using multivariate analyses (MDS, PERMANOVA and PERMDISP) in order to assess which input data have the greatest effects on the selection of protected areas. MARXAN is heuristic software that returns a number of different feasible solutions, all of them close to the best solution. Its outputs show that the most important areas to protect, in order to ensure long-term habitat persistence and adequate propagule supply, are mainly located around the Gulf islands. The statistical analyses also showed that different choices of conservation features lead to statistically different scenarios. The presence of propagule supply patches forces MARXAN to select almost the same areas for protection, decreasing the variability among MARXAN solutions and, thus, among candidate reserve selections. The multivariate analyses applied here to marine spatial planning proved very helpful in identifying i) how different scenario input data affect MARXAN outputs and ii) which features must be taken into account in study areas characterized by particular biological and economic interests.

  6. Statistics of Statisticians: Critical Mass of Statistics and Operational Research Groups

    NASA Astrophysics Data System (ADS)

    Kenna, Ralph; Berche, Bertrand

    Using a recently developed model, inspired by mean field theory in statistical physics, and data from the UK's Research Assessment Exercise, we analyse the relationship between the qualities of statistics and operational research groups and the quantities of researchers in them. Similar to other academic disciplines, we provide evidence for a linear dependency of quality on quantity up to an upper critical mass, which is interpreted as the average maximum number of colleagues with whom a researcher can communicate meaningfully within a research group. The model also predicts a lower critical mass, which research groups should strive to achieve to avoid extinction. For statistics and operational research, the lower critical mass is estimated to be 9 ± 3. The upper critical mass, beyond which research quality does not significantly depend on group size, is 17 ± 6.
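
    The quality-quantity relationship described above, linear up to an upper critical mass and flat beyond it, can be cast as a broken-stick regression. The sketch below fits such a model with scipy; the group sizes, quality scores, and starting values are simulated assumptions, not the Research Assessment Exercise data.

        # Sketch: broken-stick fit of group quality vs. group size, where the
        # slope is zero beyond an upper critical mass n_c. Data are simulated.
        import numpy as np
        from scipy.optimize import curve_fit

        def broken_stick(n, a, b, n_c):
            # linear in n below the breakpoint n_c, flat above it
            return a + b * np.minimum(n, n_c)

        rng = np.random.default_rng(1)
        size = rng.uniform(2, 40, 120)
        quality = broken_stick(size, 1.0, 0.12, 17.0) + rng.normal(0, 0.2, size.size)

        params, _ = curve_fit(broken_stick, size, quality, p0=(1.0, 0.1, 15.0))
        print(f"estimated upper critical mass: {params[2]:.1f}")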

  7. Studies in interactive communication. I - The effects of four communication modes on the behavior of teams during cooperative problem-solving.

    NASA Technical Reports Server (NTRS)

    Chapanis, A.; Ochsman, R. B.; Parrish, R. N.; Weeks, G. D.

    1972-01-01

    Two-man teams solved credible, 'real-world' problems for which computer assistance has been or could be useful. Conversations were carried on in one of four modes of communication: (1) typewriting, (2) handwriting, (3) voice, and (4) natural, unrestricted communication. Two groups of subjects (experienced and inexperienced typists) were tested in the typewriting mode. Performance was assessed on three classes of dependent measures: time to solution, behavioral measures of activity, and linguistic measures. Significant and meaningful differences among the communication modes were found in each of the three classes of dependent variable. This paper is concerned mainly with the results of the activity analyses. Behavior was recorded in 15 different categories. The analyses of variance yielded 34 statistically significant terms of which 27 were judged to be practically significant as well. When the data were transformed to eliminate heterogeneity, the analyses of variance yielded 35 statistically significant terms of which 26 were judged to be practically significant.

  8. Accuracy of medical subject heading indexing of dental survival analyses.

    PubMed

    Layton, Danielle M; Clarke, Michael

    2014-01-01

    To assess the Medical Subject Headings (MeSH) indexing of articles that employed time-to-event analyses to report outcomes of dental treatment in patients. Articles published in 2008 in 50 dental journals with the highest impact factors were hand searched to identify articles reporting dental treatment outcomes over time in human subjects with time-to-event statistics (included, n = 95), without time-to-event statistics (active controls, n = 91), and all other articles (passive controls, n = 6,769). The search was systematic (kappa 0.92 for screening, 0.86 for eligibility). Outcome-, statistic- and time-related MeSH were identified, and differences in allocation between groups were analyzed with chi-square and Fisher exact statistics. The most frequently allocated MeSH for included and active control articles were "dental restoration failure" (77% and 52%, respectively) and "treatment outcome" (54% and 48%, respectively). Outcome MeSH was similar between these groups (86% and 77%, respectively) and significantly greater than passive controls (10%, P < .001). Significantly more statistical MeSH were allocated to the included articles than to the active or passive controls (67%, 15%, and 1%, respectively, P < .001). Sixty-nine included articles specifically used Kaplan-Meier or life table analyses, but only 42% (n = 29) were indexed as such. Significantly more time-related MeSH were allocated to the included than the active controls (92% and 79%, respectively, P = .02), or to the passive controls (22%, P < .001). MeSH allocation within MEDLINE to time-to-event dental articles was inaccurate and inconsistent. Statistical MeSH were omitted from 30% of the included articles and incorrectly allocated to 15% of active controls. Such errors adversely impact search accuracy.
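
    A minimal sketch of the group comparison reported here: a chi-square test on a 2×2 allocation table, with a Fisher exact fallback for sparse cells. The counts below are hypothetical, not those of the study.

        # Sketch: comparing MeSH allocation rates between two article groups with
        # a chi-square test, falling back to Fisher's exact test when any
        # expected cell count is small. The 2x2 counts are hypothetical.
        from scipy import stats

        #                 indexed  not indexed
        included = [64, 31]      # e.g. statistical MeSH present / absent
        controls = [14, 77]
        table = [included, controls]

        chi2, p, dof, expected = stats.chi2_contingency(table)
        if (expected < 5).any():
            odds, p = stats.fisher_exact(table)
        print(f"p = {p:.3g}")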

  9. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

    Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
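
    Step (2) can be illustrated with a normal-approximation sketch: the Bayes factor is the likelihood of the observed result under the null divided by its likelihood under the effect hypothesised in the sample size calculation. The observed effect, design effect, and standard error below are hypothetical values.

        # Sketch: a normal-approximation Bayes factor for step (2), comparing how
        # compatible an observed z statistic is with a 'null' effect versus with
        # the effect assumed in the sample size calculation. Inputs are hypothetical.
        from scipy import stats

        observed_effect = 4.0     # e.g. mean difference seen in the trial
        design_effect = 5.0       # effect hypothesised in the sample size calculation
        se = 2.0                  # standard error of the effect estimate

        z_obs = observed_effect / se
        z_design = design_effect / se

        bayes_factor = stats.norm.pdf(z_obs, loc=0) / stats.norm.pdf(z_obs, loc=z_design)
        # a small BF (e.g. < 0.1) is sometimes read as support for the design
        # effect over the null
        print(f"Bayes factor = {bayes_factor:.3f}")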

  10. Analysis of vocal and swallowing functions after horizontal glottectomy.

    PubMed

    Topaloğlu, İlhan; Bal, Muhlis; Salturk, Ziya; Berkiten, Güler; Atar, Yavuz

    2016-08-01

    We conducted a cross-sectional study to assess vocal and swallowing functions after horizontal glottectomy. Our study population was made up of 22 men aged 45 to 72 years (mean: 58.3) who underwent horizontal glottectomy and completed at least 1 year of follow-up. To compare postoperative results, 20 similarly aged men were included as a control group; all glottectomy patients and all controls were smokers. We used three methods to assess vocal function objectively, perceptually, and subjectively, respectively: acoustic and aerodynamic voice analyses; the GRBAS (grade, roughness, breathiness, asthenicity, and strain) scale; and the voice handicap index-30 (VHI-30). We also assessed swallowing function objectively by fiberoptic endoscopic evaluation of swallowing (FEES) and subjectively with the M.D. Anderson dysphagia inventory (MDADI). The 22 patients were also subcategorized into three groups according to the extent of their arytenoid cartilage resection, and their outcomes were compared. Acoustic and aerodynamic analyses showed that the mean maximum phonation time was significantly shorter and the fundamental frequency was significantly lower in the glottectomy group than in the controls (p = 0.001 for both), and that the mean jitter and shimmer values and the mean harmonics-to-noise ratio were all significantly higher (p = 0.001 for all); there were no significant differences among the three arytenoid subgroups. Self-assessments revealed that there were no statistically significant differences among the three subgroups in GRBAS scale scores except for the breathiness score (p = 0.045), which was lower in the arytenoid preservation subgroup than in the total resection subgroup; there were no statistically significant differences among the three subgroups in VHI-30 scores. Finally, swallow testing found no statistically significant differences in FEES scores or MDADI scores. We conclude that horizontal glottectomy caused a deterioration in vocal function, but swallowing function was satisfactory.

  11. Bootstrapping in Applied Linguistics: Assessing Its Potential Using Shared Data

    ERIC Educational Resources Information Center

    Plonsky, Luke; Egbert, Jesse; Laflair, Geoffrey T.

    2015-01-01

    Parametric analyses such as t tests and ANOVAs are the norm--if not the default--statistical tests found in quantitative applied linguistics research (Gass 2009). Applied statisticians and one applied linguist (Larson-Hall 2010, 2012; Larson-Hall and Herrington 2010), however, have argued that this approach may not be appropriate for small samples…
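
    As a sketch of the alternative the authors discuss, the following computes a percentile bootstrap confidence interval for a mean difference between two small groups; the simulated samples stand in for real applied linguistics data.

        # Sketch: a percentile bootstrap CI for a mean difference between two
        # small groups, a nonparametric alternative to the two-sample t test.
        # Data are simulated stand-ins.
        import numpy as np

        rng = np.random.default_rng(2)
        group_a = rng.normal(0.4, 1.0, 15)
        group_b = rng.normal(0.0, 1.0, 15)

        boot = np.array([
            rng.choice(group_a, group_a.size).mean() - rng.choice(group_b, group_b.size).mean()
            for _ in range(10_000)
        ])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"95% bootstrap CI for the mean difference: [{lo:.2f}, {hi:.2f}]")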

  12. Multivariate geomorphic analysis of forest streams: Implications for assessment of land use impacts on channel condition

    Treesearch

    Richard. D. Wood-Smith; John M. Buffington

    1996-01-01

    Multivariate statistical analyses of geomorphic variables from 23 forest stream reaches in southeast Alaska result in successful discrimination between pristine streams and those disturbed by land management, specifically timber harvesting and associated road building. Results of discriminant function analysis indicate that a three-variable model discriminates 10...

  13. Using Rasch Analysis to Identify Uncharacteristic Responses to Undergraduate Assessments

    ERIC Educational Resources Information Center

    Edwards, Antony; Alcock, Lara

    2010-01-01

    Rasch Analysis is a statistical technique that is commonly used to analyse both test data and Likert survey data, to construct and evaluate question item banks, and to evaluate change in longitudinal studies. In this article, we introduce the dichotomous Rasch model, briefly discussing its assumptions. Then, using data collected in an…
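
    The dichotomous Rasch model referred to here gives the probability of a correct response as a logistic function of person ability θ minus item difficulty b; a minimal sketch, with hypothetical parameter values:

        # Sketch: the dichotomous Rasch model, where the probability of a correct
        # response depends only on person ability theta and item difficulty b.
        import numpy as np

        def rasch_prob(theta, b):
            """P(correct | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        # A response is 'uncharacteristic' when its probability under the fitted
        # model is very low, e.g. a strong student missing an easy item:
        print(rasch_prob(theta=2.0, b=-1.0))   # ~0.95 expected success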

  14. Publication Bias in Meta-Analyses of the Efficacy of Psychotherapeutic Interventions for Depression

    ERIC Educational Resources Information Center

    Niemeyer, Helen; Musch, Jochen; Pietrowsky, Reinhard

    2013-01-01

    Objective: The aim of this study was to assess whether systematic reviews investigating psychotherapeutic interventions for depression are affected by publication bias. Only homogeneous data sets were included, as heterogeneous data sets can distort statistical tests of publication bias. Method: We applied Begg and Mazumdar's adjusted rank…
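
    Begg and Mazumdar's adjusted rank correlation test, truncated in the abstract above, correlates standardised effect sizes with their variances using Kendall's tau; a significant correlation suggests publication bias. The sketch below uses simulated effects and variances, not the reviewed data sets, and follows the usual standardisation against the pooled estimate.

        # Sketch: Begg and Mazumdar's adjusted rank correlation test, i.e.
        # Kendall's tau between standardised effect sizes and their variances.
        # Effect sizes and variances are simulated stand-ins.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        variances = rng.uniform(0.01, 0.2, 25)
        effects = rng.normal(0.3, np.sqrt(variances))

        # standardise each effect against the pooled (inverse-variance) mean
        weights = 1 / variances
        pooled = np.sum(weights * effects) / weights.sum()
        standardised = (effects - pooled) / np.sqrt(variances - 1 / weights.sum())

        tau, p = stats.kendalltau(standardised, variances)
        print(f"tau = {tau:.2f}, p = {p:.3f}")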

  15. Knowledge about Hepatitis B and Predictors of Hepatitis B Vaccination among Vietnamese American College Students

    ERIC Educational Resources Information Center

    Hwang, Jessica P.; Huang, Chih-Hsun; Yi, Jenny K.

    2008-01-01

    Asian American college students are at high risk for hepatitis B virus (HBV). Participants and Methods: Vietnamese American students completed a questionnaire assessing HBV knowledge and attitudes. The authors performed statistical analyses to examine the relationship between HBV knowledge and participant characteristics. They also performed…

  16. Provision of Pre-Primary Education as a Basic Right in Tanzania: Reflections from Policy Documents

    ERIC Educational Resources Information Center

    Mtahabwa, Lyabwene

    2010-01-01

    This study sought to assess provision of pre-primary education in Tanzania as a basic right through analyses of relevant policy documents. Documents which were published over the past decade were considered, including educational policies, action plans, national papers, the "Basic Education Statistics in Tanzania" documents, strategy…

  17. Cross-sectional associations between air pollution and chronic bronchitis: an ESCAPE meta-analysis across five cohorts.

    PubMed

    Cai, Yutong; Schikowski, Tamara; Adam, Martin; Buschka, Anna; Carsin, Anne-Elie; Jacquemin, Benedicte; Marcon, Alessandro; Sanchez, Margaux; Vierkötter, Andrea; Al-Kanaani, Zaina; Beelen, Rob; Birk, Matthias; Brunekreef, Bert; Cirach, Marta; Clavel-Chapelon, Françoise; Declercq, Christophe; de Hoogh, Kees; de Nazelle, Audrey; Ducret-Stich, Regina E; Valeria Ferretti, Virginia; Forsberg, Bertil; Gerbase, Margaret W; Hardy, Rebecca; Heinrich, Joachim; Hoek, Gerard; Jarvis, Debbie; Keidel, Dirk; Kuh, Diana; Nieuwenhuijsen, Mark J; Ragettli, Martina S; Ranzi, Andrea; Rochat, Thierry; Schindler, Christian; Sugiri, Dorothea; Temam, Sofia; Tsai, Ming-Yi; Varraso, Raphaëlle; Kauffmann, Francine; Krämer, Ursula; Sunyer, Jordi; Künzli, Nino; Probst-Hensch, Nicole; Hansell, Anna L

    2014-11-01

    This study aimed to assess associations of outdoor air pollution with the prevalence of chronic bronchitis symptoms in adults in five cohort studies (Asthma-E3N, ECRHS, NSHD, SALIA, SAPALDIA) participating in the European Study of Cohorts for Air Pollution Effects (ESCAPE) project. Annual average particulate matter (PM10, PM2.5, PM2.5 absorbance, PMcoarse), NO2, nitrogen oxides (NOx) and road traffic measures modelled from ESCAPE measurement campaigns 2008-2011 were assigned to the home address at the most recent assessments (1998-2011). Symptoms examined were chronic bronchitis (cough and phlegm for ≥3 months of the year for ≥2 years), chronic cough (with/without phlegm) and chronic phlegm (with/without cough). Cohort-specific cross-sectional multivariable logistic regression analyses were conducted using common confounder sets (age, sex, smoking, interview season, education), followed by meta-analysis. 15 279 and 10 537 participants respectively were included in the main NO2 and PM analyses at assessments in 1998-2011. Overall, there were no statistically significant associations with any air pollutant or traffic exposure. Sensitivity analyses restricted to asthmatics only or females only, or using back-extrapolated NO2 and PM10 for assessments in 1985-2002 (ECRHS, NSHD, SALIA, SAPALDIA), did not alter the conclusions. In never-smokers, all associations were positive, but reached statistical significance only for chronic phlegm with PMcoarse (OR 1.31, 95% CI 1.05 to 1.64, per 5 µg/m³ increase) and for PM10 with a similar effect size. Sensitivity analyses of older cohorts showed an increased risk of chronic cough with PM2.5 absorbance (black carbon) exposure. Results do not show consistent associations between chronic bronchitis symptoms and current traffic-related air pollution in adult European populations.
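
    The final pooling step of a two-stage analysis like this one can be sketched as inverse-variance weighting of the cohort-specific estimates; the per-cohort log odds ratios and standard errors below are hypothetical.

        # Sketch: fixed-effect (inverse-variance) pooling of cohort-specific log
        # odds ratios, the meta-analysis step of a two-stage analysis.
        # The per-cohort estimates are hypothetical.
        import numpy as np

        log_or = np.array([0.20, 0.35, 0.10, 0.45, 0.25])   # one estimate per cohort
        se = np.array([0.15, 0.20, 0.25, 0.30, 0.18])

        w = 1 / se**2
        pooled = np.sum(w * log_or) / w.sum()
        pooled_se = np.sqrt(1 / w.sum())
        ci = pooled + np.array([-1, 1]) * 1.96 * pooled_se

        print(f"pooled OR = {np.exp(pooled):.2f} "
              f"(95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f})")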

  18. Total mercury in infant food, occurrence and exposure assessment in Portugal.

    PubMed

    Martins, Carla; Vasco, Elsa; Paixão, Eleonora; Alvito, Paula

    2013-01-01

    Commercial infant food labelled as being of organic or conventional origin (n = 87) was analysed for total mercury content using a direct mercury analyser (DMA). Median contents of mercury were 0.50, 0.50 and 0.40 μg kg⁻¹ for processed cereal-based food, infant formulae and baby foods, respectively, with a maximum value of 19.56 μg kg⁻¹ in a baby food containing fish. Processed cereal-based food samples showed statistically significant differences in mercury content between organic and conventional products. Consumption of the commercial infant food analysed did not pose a risk to infants when the provisionally tolerable weekly intake (PTWI) for food other than fish and shellfish was considered. On the contrary, a risk to some infants could not be excluded when using the PTWI for fish and shellfish. This is the first study reporting contents of total mercury in commercial infant food from both farming systems and the first on exposure assessment of children to mercury in Portugal.

  19. Mathematics authentic assessment on statistics learning: the case for student mini projects

    NASA Astrophysics Data System (ADS)

    Fauziah, D.; Mardiyana; Saputro, D. R. S.

    2018-03-01

    Mathematics authentic assessment is a form of meaningful measurement of student learning outcomes in the domains of attitude, skill and knowledge in mathematics. Attitude, skill and knowledge are constructed through the fulfilment of tasks that involve an active and creative role for the students. One type of authentic assessment is the student mini project, which proceeds from planning through data collection, organization, processing and analysis to presentation of the data. The purpose of this research is to examine how teachers use authentic assessment in statistics learning, and specifically to discuss the use of mini projects to improve students' learning in schools in Surakarta. This research is an action research study in which data were collected through assessment rubrics for student mini projects. The analysis shows that the average rubric score for the student mini projects is 82, with 96% classical completeness. This study shows that the application of authentic assessment can improve students' mathematics learning outcomes. Findings showed that teachers and students participate actively during the teaching and learning process, both inside and outside the school. Student mini projects also provide opportunities to interact with other people in a real context while collecting information and giving presentations to the community. Additionally, students are able to achieve more in statistics learning using authentic assessment.

  20. An Assessment of Phylogenetic Tools for Analyzing the Interplay Between Interspecific Interactions and Phenotypic Evolution.

    PubMed

    Drury, J P; Grether, G F; Garland, T; Morlon, H

    2018-05-01

    Much ecological and evolutionary theory predicts that interspecific interactions often drive phenotypic diversification and that species phenotypes in turn influence species interactions. Several phylogenetic comparative methods have been developed to assess the importance of such processes in nature; however, the statistical properties of these methods have gone largely untested. Focusing mainly on scenarios of competition between closely-related species, we assess the performance of available comparative approaches for analyzing the interplay between interspecific interactions and species phenotypes. We find that many currently used statistical methods often fail to detect the impact of interspecific interactions on trait evolution, that sister-taxa analyses are particularly unreliable in general, and that recently developed process-based models have more satisfactory statistical properties. Methods for detecting predictors of species interactions are generally more reliable than methods for detecting character displacement. In weighing the strengths and weaknesses of different approaches, we hope to provide a clear guide for empiricists testing hypotheses about the reciprocal effect of interspecific interactions and species phenotypes and to inspire further development of process-based models.

  1. A construct-driven investigation of gender differences in a leadership-role assessment center.

    PubMed

    Anderson, Neil; Lievens, Filip; van Dam, Karen; Born, Marise

    2006-05-01

    This study examined gender differences in a large-scale assessment center for officer entry in the British Army. Subgroup differences were investigated for a sample of 1,857 candidates: 1,594 men and 263 women. A construct-driven approach was chosen (a) by examining gender differences at the construct level, (b) by formulating a priori hypotheses about which constructs would be susceptible to gender effects, and (c) by using both effect size statistics and latent mean analyses to investigate gender differences in assessment center ratings. Results showed that female candidates were rated notably higher on constructs reflecting an interpersonally oriented leadership style (i.e., oral communication and interaction) and on drive and determination. These results are discussed in light of role congruity theory and of the advantages of using latent mean analyses.

  2. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  3. A decade of individual participant data meta-analyses: A review of current practice.

    PubMed

    Simmonds, Mark; Stewart, Gavin; Stewart, Lesley

    2015-11-01

    Individual participant data (IPD) systematic reviews and meta-analyses are often considered to be the gold standard for meta-analysis. In the ten years since the first review into the methodology and reporting practice of IPD reviews was published, much has changed in the field. This paper investigates current reporting and statistical practice in IPD systematic reviews. A systematic review was performed to identify systematic reviews that collected and analysed IPD. Data were extracted from each included publication on a variety of issues related to the reporting of the IPD review process, and the statistical methods used. There has been considerable growth in the use of "one-stage" methods to perform IPD meta-analyses. The majority of reviews consider at least one covariate other than the primary intervention, either using subgroup analysis or including covariates in one-stage regression models. Random-effects analyses, however, are not often used. Reporting of review methods was often limited, with few reviews presenting a risk-of-bias assessment. Details on issues specific to the use of IPD were little reported, including how IPD were obtained; how data were managed and checked for consistency and errors; and for how many studies and participants IPD were sought and obtained. While the last ten years have seen substantial changes in how IPD meta-analyses are performed, there remains considerable scope for improving the quality of reporting for both the process of IPD systematic reviews, and the statistical methods employed in them. It is to be hoped that the publication of the PRISMA-IPD guidelines specific to IPD reviews will improve reporting in this area.

  4. Reliable mortality statistics for Turkey: Are we there yet?

    PubMed

    Özdemir, Raziye; Rao, Chalapati; Öcek, Zeliha; Dinç Horasan, Gönül

    2015-06-10

    The Turkish government has implemented several reforms to improve the Turkish Statistical Institute Death Reporting System (TURKSTAT-DRS) since 2009. However, there has been no assessment to evaluate the impact of these reforms on causes of death statistics. This study attempted to analyse the impact of these reforms on the TURKSTAT-DRS for Turkey, and in the case of Izmir, one of the most developed provinces in Turkey. The evaluation framework comprised three main components each with specific criteria. Firstly, data from TURKSTAT for Turkey and Izmir for the periods 2001-2008 and 2009-2013 were assessed in terms of the following dimensions that represent quality of mortality statistics (a. completeness of death registration, b. trends in proportions of deaths with ill-defined causes). Secondly, the quality of information recorded on individual death certificates from Izmir in 2010 was analysed for a. missing information, b. timeliness of death notifications and c. characteristics of deaths with ill-defined causes. Finally, TURKSTAT data were analysed to estimate life tables and summary mortality indicators for Turkey and Izmir, as well as the leading causes-of-death in Turkey in 2013. Registration of adult deaths in Izmir as well as at the national level for Turkey has considerably improved since the introduction of reforms in 2009, along with marked decline in the proportions of deaths assigned ill-defined causes. Death certificates from Izmir indicated significant gaps in recorded information for demographic as well as epidemiological variables, particularly for infant deaths, and in the detailed recording of causes of death. Life expectancy at birth estimated from local data is 3-4 years higher than similar estimates for Turkey from international studies, and this requires further investigation and confirmation. The TURKSTAT-DRS is now an improved source of mortality and cause of death statistics for Turkey. The reliability and validity of TURKSTAT data needs to be established through a detailed research program to evaluate completeness of death registration and validity of registered causes of death. Similar evaluation and data analysis of mortality indicators is required at regular intervals at national and sub-national level, to increase confidence in their utility as primary data for epidemiology and health policy.

  5. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
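
    The power divergence family and the empirical, resampling-based P values mentioned here can be sketched as follows; the allele counts and null frequencies are hypothetical, and the resampling is a parametric (multinomial) bootstrap under the null.

        # Sketch: a power-divergence test on pooled allele counts, with an
        # empirical P value from resampling under the null instead of the
        # asymptotic chi-square distribution. Counts are hypothetical.
        import numpy as np
        from scipy import stats

        observed = np.array([18, 7, 5])          # allele counts in one pool
        expected_p = np.array([0.5, 0.3, 0.2])   # frequencies under no association
        n = observed.sum()

        stat, _ = stats.power_divergence(observed, n * expected_p,
                                         lambda_=2/3)   # Cressie-Read statistic

        rng = np.random.default_rng(4)
        boot = np.array([
            stats.power_divergence(rng.multinomial(n, expected_p), n * expected_p,
                                   lambda_=2/3)[0]
            for _ in range(5000)
        ])
        p_emp = np.mean(boot >= stat)
        print(f"statistic = {stat:.2f}, empirical P = {p_emp:.3f}")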

  6. Economic evaluation of factorial randomised controlled trials: challenges, methods and recommendations

    PubMed Central

    Gray, Alastair

    2017-01-01

    Increasing numbers of economic evaluations are conducted alongside randomised controlled trials. Such studies include factorial trials, which randomise patients to different levels of two or more factors and can therefore evaluate the effect of multiple treatments alone and in combination. Factorial trials can provide increased statistical power or assess interactions between treatments, but raise additional challenges for trial‐based economic evaluations: interactions may occur more commonly for costs and quality‐adjusted life‐years (QALYs) than for clinical endpoints; economic endpoints raise challenges for transformation and regression analysis; and both factors must be considered simultaneously to assess which treatment combination represents best value for money. This article aims to examine issues associated with factorial trials that include assessment of costs and/or cost‐effectiveness, describe the methods that can be used to analyse such studies and make recommendations for health economists, statisticians and trialists. A hypothetical worked example is used to illustrate the challenges and demonstrate ways in which economic evaluations of factorial trials may be conducted, and how these methods affect the results and conclusions. Ignoring interactions introduces bias that could result in adopting a treatment that does not make best use of healthcare resources, while considering all interactions avoids bias but reduces statistical power. We also introduce the concept of the opportunity cost of ignoring interactions as a measure of the bias introduced by not taking account of all interactions. We conclude by offering recommendations for planning, analysing and reporting economic evaluations based on factorial trials, taking increased analysis costs into account. PMID:28470760
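
    The core analytical point, that both factors and their interaction should be modelled for costs, can be sketched with a simple interaction regression on simulated 2×2 factorial data; the cost-generating model below is an assumption for illustration.

        # Sketch: a regression on costs from a hypothetical 2x2 factorial trial,
        # with an interaction term so the effect of each factor can differ in
        # combination with the other.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 400
        a = rng.integers(0, 2, n)                 # factor A allocation
        b = rng.integers(0, 2, n)                 # factor B allocation
        cost = 1000 + 150 * a + 120 * b + 80 * a * b + rng.normal(0, 200, n)

        df = pd.DataFrame({"cost": cost, "A": a, "B": b})
        model = smf.ols("cost ~ A * B", data=df).fit()   # expands to A + B + A:B
        print(model.params)                              # A:B estimates the interaction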

  7. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement; however, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported.
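
    Although the study used designed simulations, the basic idea of checking loading stability can be sketched by refitting a discriminant analysis on bootstrap resamples and examining the spread of the coefficients; the two simulated groups below are stand-ins for real ecological data.

        # Sketch: assessing sampling stability of discriminant coefficients by
        # refitting an LDA on bootstrap resamples and inspecting their spread.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(6)
        n, p = 60, 3                                   # ~20x as many samples as variables
        X = np.vstack([rng.normal(0.0, 1, (n, p)), rng.normal(0.8, 1, (n, p))])
        y = np.repeat([0, 1], n)

        coefs = []
        for _ in range(500):
            idx = rng.integers(0, y.size, y.size)      # bootstrap resample
            lda = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            coefs.append(lda.coef_.ravel())

        print(np.std(coefs, axis=0))   # coefficient variability across resamples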

  8. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
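
    A first-order moment method of this kind can be sketched for a generic scalar model: finite-difference sensitivity derivatives give an approximate output variance, which can then be checked against Monte Carlo. The function and input uncertainties below are stand-ins for a CFD output and its inputs.

        # Sketch: first-order moment propagation through a generic model f(x),
        # using finite-difference sensitivity derivatives, checked against
        # Monte Carlo. f is a stand-in for a CFD output functional.
        import numpy as np

        def f(x):                       # hypothetical scalar output
            return x[0] ** 2 + np.sin(x[1]) + x[0] * x[2]

        mean = np.array([1.0, 0.5, 2.0])
        sigma = np.array([0.05, 0.02, 0.1])     # independent normal inputs

        # first-order approximation: var(f) ~= sum_i (df/dx_i)^2 sigma_i^2
        h = 1e-6
        grad = np.array([
            (f(mean + h * np.eye(3)[i]) - f(mean - h * np.eye(3)[i])) / (2 * h)
            for i in range(3)
        ])
        sigma_f_moment = np.sqrt(np.sum((grad * sigma) ** 2))

        rng = np.random.default_rng(7)
        samples = rng.normal(mean, sigma, size=(20_000, 3))
        sigma_f_mc = np.std([f(s) for s in samples])

        print(f"moment method: {sigma_f_moment:.4f}, Monte Carlo: {sigma_f_mc:.4f}")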

  9. The Chinese version of the Myocardial Infarction Dimensional Assessment Scale (MIDAS): Mokken scaling

    PubMed Central

    2012-01-01

    Background Hierarchical scales are very useful in clinical practice due to their ability to discriminate precisely between individuals, and the original English version of the Myocardial Infarction Dimensional Assessment Scale has been shown to contain a hierarchy of items. The purpose of this study was to analyse a Mandarin Chinese translation of the Myocardial Infarction Dimensional Assessment Scale for a hierarchy of items according to the criteria of Mokken scaling. Data from 180 Chinese participants who completed the Chinese translation of the Myocardial Infarction Dimensional Assessment Scale were analysed using the Mokken Scaling Procedure and the 'R' statistical programme using the diagnostics available in these programmes. Correlation between Mandarin Chinese items and a Chinese translation of the Short Form (36) Health Survey was also analysed. Findings Fifteen items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale were retained in a strong and reliable Mokken scale; invariant item ordering was not evident and the Mokken scaled items of the Chinese Myocardial Infarction Dimensional Assessment Scale correlated with the Short Form (36) Health Survey. Conclusions Items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale form a Mokken scale and this offers further insight into how the items of the Myocardial Infarction Dimensional Assessment Scale relate to the measurement of health-related quality of life in people with a myocardial infarction. PMID:22221696

  10. msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.

    PubMed

    Pérez-Figueroa, A

    2013-05-01

    In this study msap, an R package which analyses methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data, is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, with differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines if each fragment is susceptible to methylation (representative of epigenetic variation) or if there is no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support to the analyses. A comprehensive report of the analyses and several useful plots could help researchers to assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for those people unfamiliar with the R command line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses.

  11. Assessment of Students' Scientific and Alternative Conceptions of Energy and Momentum Using Concentration Analysis

    ERIC Educational Resources Information Center

    Dega, Bekele Gashe; Govender, Nadaraj

    2016-01-01

    This study compares the scientific and alternative conceptions of energy and momentum of university first-year science students in Ethiopia and the US. Written data were collected using the Energy and Momentum Conceptual Survey developed by Singh and Rosengrant. The Concentration Analysis statistical method was used for analysing the Ethiopian…

  12. Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…

  13. The Influence of Mathematics Professional Development, School-Level, and Teacher-Level Variables on Primary Students' Mathematics Achievement

    ERIC Educational Resources Information Center

    Polly, Drew; Wang, Chuang; Martin, Christie; Lambert, Richard; Pugalee, David; Middleton, Catherina

    2018-01-01

    This study examined the influence of a professional development project about an internet-based mathematics formative assessment tool and related pedagogies on primary teachers' instruction and student achievement. Teachers participated in 72 h of professional development during the year. Descriptive statistics and multivariate analyses of…

  14. Differential Neonatal and Postneonatal Infant Mortality Rates across US Counties: The Role of Socioeconomic Conditions and Rurality

    ERIC Educational Resources Information Center

    Sparks, P. Johnelle; McLaughlin, Diane K.; Stokes, C. Shannon

    2009-01-01

    Purpose: To examine differences in correlates of neonatal and postneonatal infant mortality rates, across counties, by degree of rurality. Methods: Neonatal and postneonatal mortality rates were calculated from the 1998 to 2002 Compressed Mortality Files from the National Center for Health Statistics. Bivariate analyses assessed the relationship…

  15. Evaluating a measure of social health derived from two mental health recovery measures: the California Quality of Life (CA-QOL) and Mental Health Statistics Improvement Program Consumer Survey (MHSIP).

    PubMed

    Carlson, Jordan A; Sarkin, Andrew J; Levack, Ashley E; Sklar, Marisa; Tally, Steven R; Gilmer, Todd P; Groessl, Erik J

    2011-08-01

    Social health is important to measure when assessing outcomes in community mental health. Our objective was to validate social health scales using items from two broader, commonly used measures that assess mental health outcomes. Participants were 609 adults receiving psychological treatment services. Items were identified from the California Quality of Life (CA-QOL) and Mental Health Statistics Improvement Program (MHSIP) outcome measures by their conceptual correspondence with social health and compared to the Social Functioning Questionnaire (SFQ) using correlational analyses. Pearson correlations for the identified CA-QOL and MHSIP items with the SFQ ranged from .42 to .62, and the identified scale scores produced Pearson correlation coefficients of .56, .70, and .70 with the SFQ. Concurrent validity with social health was supported for the identified scales. The inclusion of these assessment tools allows community mental health programs to include social health in their assessments.

  16. Water levels and groundwater and surface-water exchanges in lakes of the northeast Twin Cities Metropolitan Area, Minnesota, 2002 through 2015

    USGS Publications Warehouse

    Jones, Perry M.; Trost, Jared J.; Erickson, Melinda L.

    2016-10-19

    This study assessed lake-water levels and regional and local groundwater and surface-water exchanges near northeast Twin Cities Metropolitan Area lakes by applying three approaches: statistical analysis, field study, and groundwater-flow modeling. Statistical analyses of lake levels were completed to assess the effect of physical setting and climate on lake-level fluctuations of selected lakes. A field study of groundwater and surface-water interactions in selected lakes was completed to (1) estimate potential percentages of surface-water contributions to well water across the northeast Twin Cities Metropolitan Area, (2) estimate general ages for waters extracted from the wells, and (3) assess groundwater inflow to lakes and lake-water outflow to aquifers downgradient from White Bear Lake. Groundwater flow was simulated using a steady-state groundwater-flow model to assess regional groundwater and surface-water exchanges and the effects of groundwater withdrawals, climate, and other factors on water levels of northeast Twin Cities Metropolitan Area lakes.

  17. Safety Assessment of Food and Feed from GM Crops in Europe: Evaluating EFSA's Alternative Framework for the Rat 90-day Feeding Study.

    PubMed

    Hong, Bonnie; Du, Yingzhou; Mukerji, Pushkor; Roper, Jason M; Appenzeller, Laura M

    2017-07-12

    Regulatory-compliant rodent subchronic feeding studies are compulsory, regardless of whether there is a hypothesis to test, according to recent EU legislation for the safety assessment of whole food/feed produced from genetically modified (GM) crops containing a single genetic transformation event (European Union Commission Implementing Regulation No. 503/2013). The Implementing Regulation refers to guidelines set forth by the European Food Safety Authority (EFSA) for the design, conduct, and analysis of rodent subchronic feeding studies. The set of EFSA recommendations was rigorously applied to a 90-day feeding study in Sprague-Dawley rats. After study completion, the appropriateness and applicability of these recommendations were assessed using a battery of statistical analysis approaches including both retrospective and prospective statistical power analyses as well as variance-covariance decomposition. In the interest of animal welfare considerations, alternative experimental designs were investigated and evaluated in the context of informing the health risk assessment of food/feed from GM crops.

  18. Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data

    PubMed Central

    An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.

    2005-01-01

    Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165

  19. Assessing signal-to-noise in quantitative proteomics: multivariate statistical analysis in DIGE experiments.

    PubMed

    Friedman, David B

    2012-01-01

    All quantitative proteomics experiments measure variation between samples. When performing large-scale experiments that involve multiple conditions or treatments, the experimental design should include the appropriate number of individual biological replicates from each condition to enable the distinction between a relevant biological signal from technical noise. Multivariate statistical analyses, such as principal component analysis (PCA), provide a global perspective on experimental variation, thereby enabling the assessment of whether the variation describes the expected biological signal or the unanticipated technical/biological noise inherent in the system. Examples will be shown from high-resolution multivariable DIGE experiments where PCA was instrumental in demonstrating biologically significant variation as well as sample outliers, fouled samples, and overriding technical variation that would not be readily observed using standard univariate tests.
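
    A minimal sketch of the PCA check described here, applied to a simulated (samples × protein abundances) matrix with a planted group effect; in a real DIGE experiment the matrix would hold normalised spot volumes rather than the simulated values below.

        # Sketch: PCA as a first-pass check that sample-to-sample variation in a
        # quantitative proteomics matrix (samples x features) tracks the
        # biological groups rather than technical noise. Data are simulated.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        control = rng.normal(0.0, 1.0, (6, 200))          # 6 replicates, 200 features
        treated = rng.normal(0.0, 1.0, (6, 200))
        treated[:, :20] += 1.5                            # a planted biological signal

        X = np.vstack([control, treated])
        scores = PCA(n_components=2).fit_transform(X)
        # replicates should separate by group on PC1 if signal exceeds noise;
        # a lone outlying point would instead suggest a fouled sample
        print(scores)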

  20. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normal probability distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is liable for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
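
    Capability analysis of the kind described can be sketched for one critical quality attribute as follows; the specification limits and the simulated tablet weights are hypothetical.

        # Sketch: process capability indices and an approximate sigma level for
        # one critical quality attribute, assuming normally distributed
        # measurements. Specification limits and data are hypothetical.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        weights = rng.normal(250.0, 2.5, 200)     # tablet weights, mg
        lsl, usl = 242.5, 257.5                   # specification limits

        mu, s = weights.mean(), weights.std(ddof=1)
        cp = (usl - lsl) / (6 * s)
        cpk = min(usl - mu, mu - lsl) / (3 * s)

        # out-of-specification fraction and its one-sided normal equivalent,
        # ignoring the conventional 1.5-sigma shift
        p_oos = stats.norm.cdf(lsl, mu, s) + stats.norm.sf(usl, mu, s)
        sigma_level = stats.norm.isf(p_oos)
        print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, sigma level ~ {sigma_level:.1f}")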

  1. Proliferative Changes in the Bronchial Epithelium of Former Smokers Treated With Retinoids

    PubMed Central

    Hittelman, Walter N.; Liu, Diane D.; Kurie, Jonathan M.; Lotan, Reuben; Lee, Jin Soo; Khuri, Fadlo; Ibarguen, Heladio; Morice, Rodolfo C.; Walsh, Garrett; Roth, Jack A.; Minna, John; Ro, Jae Y.; Broxson, Anita; Hong, Waun Ki; Lee, J. Jack

    2012-01-01

    Background Retinoids have shown antiproliferative and chemopreventive activity. We analyzed data from a randomized, placebo-controlled chemoprevention trial to determine whether a 3-month treatment with either 9-cis-retinoic acid (RA) or 13-cis-RA and α-tocopherol reduced Ki-67, a proliferation biomarker, in the bronchial epithelium. Methods Former smokers (n = 225) were randomly assigned to receive 3 months of daily oral 9-cis-RA (100 mg), 13-cis-RA (1 mg/kg) and α-tocopherol (1200 IU), or placebo. Bronchoscopic biopsy specimens obtained before and after treatment were immunohistochemically assessed for changes in the Ki-67 proliferative index (i.e., percentage of cells with Ki-67–positive nuclear staining) in the basal and parabasal layers of the bronchial epithelium. Per-subject and per–biopsy site analyses were conducted. Multicovariable analyses, including a mixed-effects model and a generalized estimating equations model, were used to investigate the treatment effect (Ki-67 labeling index and percentage of bronchial epithelial biopsy sites with a Ki-67 index ≥ 5%) with adjustment for multiple covariates, such as smoking history and metaplasia. Coefficient estimates and 95% confidence intervals (CIs) were obtained from the models. All statistical tests were two-sided. Results In per-subject analyses, Ki-67 labeling in the basal layer was not changed by any treatment; the percentage of subjects with a high Ki-67 labeling in the parabasal layer dropped statistically significantly after treatment with 13-cis-RA and α-tocopherol (P = .04) compared with placebo, but the drop was not statistically significant after 9-cis-RA treatment (P = .17). A similar effect was observed in the parabasal layer in a per-site analysis; the percentage of sites with high Ki-67 labeling dropped statistically significantly after 9-cis-RA treatment (coefficient estimate = −0.72, 95% CI = −1.24 to −0.20; P = .007) compared with placebo, and after 13-cis-RA and α-tocopherol treatment (coefficient estimate = −0.66, 95% CI = −1.15 to −0.17; P = .008). Conclusions In per-subject analyses, treatment with 13-cis-RA and α-tocopherol, compared with placebo, was statistically significantly associated with reduced bronchial epithelial cell proliferation; treatment with 9-cis-RA was not. In per-site analyses, statistically significant associations were obtained with both treatments. PMID:17971525

  2. Application of multivariate statistical techniques in microbial ecology.

    PubMed

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. Progress has been particularly noticeable in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure.

  3. Stepping inside the niche: microclimate data are critical for accurate assessment of species' vulnerability to climate change

    PubMed Central

    Storlie, Collin; Merino-Viteri, Andres; Phillips, Ben; VanDerWal, Jeremy; Welbergen, Justin; Williams, Stephen

    2014-01-01

    To assess a species' vulnerability to climate change, we commonly use mapped environmental data that are coarsely resolved in time and space. Coarsely resolved temperature data are typically inaccurate at predicting temperatures in microhabitats used by an organism and may also exhibit spatial bias in topographically complex areas. One consequence of these inaccuracies is that coarsely resolved layers may predict thermal regimes at a site that exceed species' known thermal limits. In this study, we use statistical downscaling to account for environmental factors and develop high-resolution estimates of daily maximum temperatures for a 36 000 km² study area over a 38-year period. We then demonstrate that this statistical downscaling provides temperature estimates that consistently place focal species within their fundamental thermal niche, whereas coarsely resolved layers do not. Our results highlight the need for incorporation of fine-scale weather data into species' vulnerability analyses and demonstrate that a statistical downscaling approach can yield biologically relevant estimates of thermal regimes. PMID:25252835

  4. [Reevaluation of the methodological quality in meta-analyses of accelerated rehabilitation on recovery after surgery for colorectal cancer].

    PubMed

    Ding, S N; Pan, H Y; Zhang, J G

    2017-03-14

    Objective: To evaluate the methodological quality and impacts on outcomes of systematic reviews (SRs) of accelerated rehabilitation versus traditional care for colorectal surgery. Methods: We comprehensively searched six databases and additional websites to collect SRs or meta-analyses published from inception to July 2016. The Overview Quality Assessment Questionnaire (OQAQ) was applied for quality assessment of the included studies, the tools recommended by the Cochrane Collaboration were applied for quality assessment of RCTs and CCTs, and the Newcastle-Ottawa Scale (NOS) was applied to assess observational studies. The relative risks (RRs) and 95% confidence intervals (CIs) were integrated using Review Manager 5.3 software. Results: Fourteen meta-analyses were included in total. The mean OQAQ score was 3.8 (95% CI: 3.2 to 4.3). Only three meta-analyses were assessed as good quality. Two studies misused statistical models. A total of 42 primary studies referenced by the meta-analyses were included, of which 25 RCTs were rated grade B and 1 CCT was rated grade C. The mean NOS score of the 16 observational studies was 6.75 out of a possible 9 (95% CI: 6.4 to 7.1); 10 studies scoring ≥7 were of high quality, and 6 studies scoring 6 were of moderate quality. Conclusions: Currently, the overall quality of meta-analyses comparing the effectiveness and safety of accelerated rehabilitation with traditional care for colorectal surgery is fairly poor and the level of evidence is low. Health providers should apply the evidence with caution in clinical practice.

  6. The use of belief-based probabilistic methods in volcanology: Scientists' views and implications for risk assessments

    NASA Astrophysics Data System (ADS)

    Donovan, Amy; Oppenheimer, Clive; Bravo, Michael

    2012-12-01

    This paper constitutes a philosophical and social scientific study of expert elicitation in the assessment and management of volcanic risk on Montserrat during the 1995-present volcanic activity. It outlines the broader context of subjective probabilistic methods and then uses a mixed-method approach to analyse the use of these methods in volcanic crises. Data from a global survey of volcanologists regarding the use of statistical methods in hazard assessment are presented. Detailed qualitative data from Montserrat are then discussed, particularly concerning the expert elicitation procedure that was pioneered during the eruptions. These data are analysed and conclusions about the use of these methods in volcanology are drawn. The paper finds that while many volcanologists are open to the use of these methods, there are still some concerns, which are similar to the concerns encountered in the literature on probabilistic and deterministic approaches to seismic hazard analysis.

  7. Progressive statistics for studies in sports medicine and exercise science.

    PubMed

    Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri

    2009-01-01

    Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
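
    The article's recommendation to show SD rather than SEM is easy to demonstrate numerically: SD describes the spread among individuals and is stable, whereas SEM shrinks with sample size and so understates variability. A minimal sketch with simulated scores:

        import numpy as np

        rng = np.random.default_rng(1)
        for n in (10, 100, 1000):
            x = rng.normal(loc=50, scale=8, size=n)  # simulated scores, true SD = 8
            sd = x.std(ddof=1)
            sem = sd / np.sqrt(n)
            print(f"n={n:4d}  SD={sd:5.2f}  SEM={sem:5.2f}")
        # SD stays near 8 at every n; SEM keeps shrinking, which is why SEM error
        # bars communicate precision of the mean, not variability among subjects.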

  8. Does Anxiety Modify the Risk for, or Severity of, Conduct Problems Among Children With Co-Occurring ADHD: Categorical and Dimensional Analyses.

    PubMed

    Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F

    2017-08-01

    The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.

  9. Advanced Statistics for Exotic Animal Practitioners.

    PubMed

    Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G

    2017-09-01

    Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.
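
    A compact sketch of the progression the article reviews — correlation for strength and direction, then simple linear regression for a best-fit line and prediction — on simulated data (all variable names hypothetical):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        body_mass = rng.uniform(0.3, 2.0, 40)                     # kg, hypothetical
        heart_rate = 220 - 60*body_mass + rng.normal(0, 8, 40)    # beats/min

        r, p = stats.pearsonr(body_mass, heart_rate)
        print(f"correlation r = {r:.2f} (p = {p:.3g})")  # strength and direction only

        fit = stats.linregress(body_mass, heart_rate)    # best-fit line
        print(f"predicted heart rate at 1.0 kg: {fit.intercept + fit.slope*1.0:.1f}")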

  10. Statistical analysis of an inter-laboratory comparison of small-scale safety and thermal testing of RDX

    DOE PAGES

    Brown, Geoffrey W.; Sandstrom, Mary M.; Preston, Daniel N.; ...

    2014-11-17

    In this study, the Integrated Data Collection Analysis (IDCA) program has conducted a proficiency test for small-scale safety and thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results from this test for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Class 5 Type II standard. The material was tested as a well-characterized standard several times during the proficiency test to assess differences among participants and the range of results that may arise for well-behaved explosive materials.

  11. The Effect of Scale Dependent Discretization on the Progressive Failure of Composite Materials Using Multiscale Analyses

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.

    2013-01-01

    A multiscale modeling methodology, which incorporates a statistical distribution of fiber strengths into coupled micromechanics/finite element analyses, is applied to unidirectional polymer matrix composites (PMCs) to analyze the effect of mesh discretization both at the micro- and macroscales on the predicted ultimate tensile strength (UTS) and failure behavior. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a PMC tensile specimen that initiates at the repeating unit cell (RUC) level. Three different finite element mesh densities were employed, each coupled with an appropriate RUC. Multiple simulations were performed in order to assess the effect of a statistical distribution of fiber strengths on the bulk composite failure and predicted strength. The coupled effects of both the micro- and macroscale discretizations were found to have a noticeable effect on the predicted UTS and computational efficiency of the simulations.
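
    The abstract does not name the statistical distribution used for fiber strengths; a two-parameter Weibull model is a common assumption for brittle fibers, and the sketch below shows how per-realization strength draws of that kind would seed repeated micromechanics analyses. All parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # Assumed two-parameter Weibull model for fiber strength (hypothetical
        # modulus and scale; the paper's actual distribution is not stated here).
        shape, scale_mpa = 5.0, 4500.0
        n_fibers_per_ruc, n_realizations = 64, 10

        for k in range(n_realizations):
            strengths = scale_mpa * rng.weibull(shape, n_fibers_per_ruc)
            # Each realization would seed one RUC; the weakest fibers govern where
            # damage initiates, so realizations differ in predicted strength.
            print(f"realization {k}: min = {strengths.min():7.1f} MPa, "
                  f"mean = {strengths.mean():7.1f} MPa")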

  12. Association between sleep difficulties as well as duration and hypertension: is BMI a mediator?

    PubMed

    Carrillo-Larco, R M; Bernabe-Ortiz, A; Sacksteder, K A; Diez-Canseco, F; Cárdenas, M K; Gilman, R H; Miranda, J J

    2017-01-01

    Sleep difficulties and short sleep duration have been associated with hypertension. Though body mass index (BMI) may be a mediator variable, the mediation effect has not been defined. We aimed to assess the association between sleep duration and sleep difficulties with hypertension, to determine if BMI is a mediator variable, and to quantify the mediation effect. We conducted a mediation analysis and calculated prevalence ratios with 95% confidence intervals. The exposure variables were sleep duration and sleep difficulties, and the outcome was hypertension. Sleep difficulties were statistically significantly associated with a 43% higher prevalence of hypertension in multivariable analyses; results were not statistically significant for sleep duration. In these analyses, and in sex-specific subgroup analyses, we found no strong evidence that BMI mediated the association between sleep indices and risk of hypertension. Our findings suggest that BMI does not appear to mediate the association between sleep patterns and hypertension. These results highlight the need to further study the mechanisms underlying the relationship between sleep patterns and cardiovascular risk factors.
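
    One standard way to obtain prevalence ratios of the kind reported here is Poisson regression with robust standard errors, and the simplest mediation check is to compare the exposure coefficient with and without the candidate mediator; the sketch below illustrates that logic on simulated data and is not the authors' actual model.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 2000
        df = pd.DataFrame({"sleep_difficulty": rng.integers(0, 2, n),
                           "bmi": rng.normal(27, 4, n)})
        logit = -2.5 + 0.35*df.sleep_difficulty + 0.05*(df.bmi - 27)
        df["hypertension"] = (rng.random(n) < 1/(1 + np.exp(-logit))).astype(int)

        # Poisson regression with robust (HC0) errors estimates prevalence ratios.
        crude = smf.glm("hypertension ~ sleep_difficulty", df,
                        family=sm.families.Poisson()).fit(cov_type="HC0")
        adjusted = smf.glm("hypertension ~ sleep_difficulty + bmi", df,
                           family=sm.families.Poisson()).fit(cov_type="HC0")

        # Difference-method mediation check: if BMI mediated the association, the
        # adjusted PR would move materially toward 1 relative to the crude PR.
        print("crude PR   :", round(float(np.exp(crude.params["sleep_difficulty"])), 2))
        print("adjusted PR:", round(float(np.exp(adjusted.params["sleep_difficulty"])), 2))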

  13. Post Hoc Analyses of the Effect of Crisaborole Topical Ointment, 2% on Atopic Dermatitis-Associated Pruritus from Phase 1 and 2 Clinical Studies.

    PubMed

    Draelos, Zoe Diana; Stein Gold, Linda F; Murrell, Dedee F; Hughes, Matilda H; Zane, Lee T

    2016-02-01

    Two post hoc analyses assessed the antipruritic activity of crisaborole topical ointment, 2% (crisaborole; Anacor Pharmaceuticals, Inc., Palo Alto, CA), a first-in-class boron-based phosphodiesterase-4 inhibitor in development for treatment of mild to moderate atopic dermatitis (AD). Two pooled analyses included data from 4 studies evaluating crisaborole in AD (study 1, phase 1b, systemic exposure, safety, and pharmacokinetics [PK] under maximal-use conditions in children and adolescents; study 2, phase 2a, safety and PK in adolescents; study 3, phase 2a, efficacy and safety in adults; study 4, phase 2, efficacy and safety in adolescents). Pooled data from studies 1 and 2 included whole body assessments; studies 3 and 4 included target lesion assessments. Pruritus severity was evaluated using a 4-point rating scale (0=none to 3=severe). Efficacy assessments included percent change from baseline in pruritus severity scores at days 8 (first pooled assessment), 15, 22, and 29 (whole body assessments) or days 15 (first pooled assessment), 22, and 29 (target lesions). Paired t-tests comparing change from baseline against zero were used to calculate P values. Categorical shifts in pruritus severity were also assessed (no to mild pruritus, 0-1.5; moderate to severe pruritus, 2-3). In the pooled analysis of studies 1 and 2 (N=57), the percent changes from baseline in pruritus severity scores were 63.0% and 64.9% at days 8 and 29, respectively (P<0.001 for each). Similar results were observed in the pooled analysis of studies 3 and 4 (N=67). In both analyses, most patients had mild to no pruritus from the first time point assessed through the remainder of treatment. Treatment with crisaborole topical ointment, 2% resulted in statistically significant reductions in pruritus severity at the first time point evaluated in both analyses. These findings provide preliminary evidence of the antipruritic activity of crisaborole topical ointment, 2%.
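
    The test the authors describe — a paired t-test of change from baseline against zero — reduces to a one-sample t-test on the within-patient changes; a sketch on fabricated scores (not trial data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        baseline = rng.integers(2, 4, 57).astype(float)   # pruritus severity, 0-3 scale
        day29 = np.clip(baseline - rng.uniform(0.5, 2.0, 57), 0, 3)
        pct_change = 100 * (day29 - baseline) / baseline

        # One-sample t-test of the paired percent changes against zero.
        t, p = stats.ttest_1samp(pct_change, popmean=0.0)
        print(f"mean change = {pct_change.mean():.1f}%  t = {t:.2f}  p = {p:.2g}")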

  14. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  15. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
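
    The PFA statistical structure itself is documented in these volumes; purely as a generic illustration of the underlying idea — propagate parameter uncertainty through an engineering failure model to a failure-probability estimate — a toy stress-strength Monte Carlo might look like this (all distributions invented):

        import numpy as np

        rng = np.random.default_rng(6)
        n_draws = 100_000

        # Toy stand-in for an engineering analysis model: the component fails
        # when applied stress exceeds material strength. The lognormal parameters
        # encode (hypothetical) uncertainty about the analysis inputs.
        stress = rng.lognormal(np.log(300.0), 0.10, n_draws)    # MPa
        strength = rng.lognormal(np.log(420.0), 0.15, n_draws)  # MPa

        p_fail = np.mean(stress > strength)
        print(f"estimated failure probability per demand: {p_fail:.4f}")
        # In PFA, such a prior estimate would then be updated with test and
        # flight experience via the methodology's prescribed statistical procedures.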

  16. Analysis of Texas Achievement Data for Elementary African American and Latino Females

    ERIC Educational Resources Information Center

    Larke, Patricia J.; Webb-Hasan, Gwendolyn; Jimarez, Teresa; Li, Yeping

    2014-01-01

    This study provides a critical look at the achievement of African American (AA) and Latino (L) females in third and fifth grades on the Texas Assessment of Knowledge and Skills (TAKS) in reading, mathematics and science. Descriptive statistics were used to analyze the 2007 and 2011 TAKS raw data. Data analyses indicate that AAL females had the lowest…

  17. Triangulating Evidence to Investigate the Validity of Measures: Evidence from Discussion during Instruction, Cognitive Interviews, and Written Assessments

    ERIC Educational Resources Information Center

    Burmester, Kristen O'Rourke

    2011-01-01

    Classrooms are a primary site of evidence about learning. Yet classroom proceedings often occur behind closed doors and hence evidence of student learning is observable only to the classroom teacher. The informal and undocumented nature of this information means that it is rarely included in statistical models or quantifiable analyses. This…

  18. Statistical uncertainty of eddy flux-based estimates of gross ecosystem carbon exchange at Howland Forest, Maine

    Treesearch

    S.C. Hagen; B.H. Braswell; E. Linder; S. Frolking; A.D. Richardson; D.Y. Hollinger

    2006-01-01

    We present an uncertainty analysis of gross ecosystem carbon exchange (GEE) estimates derived from 7 years of continuous eddy covariance measurements of forest atmosphere CO2 fluxes at Howland Forest, Maine, USA. These data, which have high temporal resolution, can be used to validate process modeling analyses, remote sensing assessments, and field surveys. However,...

  19. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements…

  20. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
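
    Both ratios the authors define are simple proportions once the component quantities are known; with invented numbers (not the paper's results):

        # Hypothetical sample sizes required to detect the same effect size.
        n_sequential = 6200      # maximum sample size of the sequential design
        n_nonsequential = 9800   # fixed sample size of the non-sequential study
        print(f"sample size savings ratio: {1 - n_sequential/n_nonsequential:.2f}")

        # Analogous ratio on the calendar-time scale: years until each design's
        # required sample accrues under an assumed uptake/utilization model.
        years_sequential, years_nonsequential = 11.0, 17.0
        print(f"analytic calendar time savings ratio: "
              f"{1 - years_sequential/years_nonsequential:.2f}")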

  1. Does income inequality get under the skin? A multilevel analysis of depression, anxiety and mental disorders in Sao Paulo, Brazil.

    PubMed

    Chiavegatto Filho, Alexandre Dias Porto; Kawachi, Ichiro; Wang, Yuan Pang; Viana, Maria Carmen; Andrade, Laura Helena Silveira Guerra

    2013-11-01

    To test the original income inequality theory by analysing its association with depression, anxiety and any mental disorders. We analysed a sample of 3542 individuals aged 18 years and older selected through a stratified, multistage area probability sample of households from the São Paulo Metropolitan Area. Mental disorder symptoms were assessed using the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria. Bayesian multilevel logistic models were fitted. Living in areas with medium and high income inequality was statistically associated with increased risk of depression, relative to low-inequality areas (OR 1.76; 95% CI 1.21 to 2.55, and OR 1.53; 95% CI 1.07 to 2.19, respectively). The same was not true for anxiety (OR 1.25; 95% CI 0.90 to 1.73, and OR 1.07; 95% CI 0.79 to 1.46). In the case of any mental disorder, results were mixed. In general, our findings were consistent with the income inequality theory, that is, people living in places with higher income inequality had overall higher odds of mental disorders, albeit not always statistically significant. The fact that depression, but not anxiety, was statistically significant could indicate a pathway by which inequality influences health.

  2. Estimating mortality using data from civil registration: a cross-sectional study in India

    PubMed Central

    Rao, Chalapati; Lakshmi, PVM; Prinja, Shankar; Kumar, Rajesh

    2016-01-01

    Objective To analyse the design and operational status of India’s civil registration and vital statistics system and facilitate the system’s development into an accurate and reliable source of mortality data. Methods We assessed the national civil registration and vital statistics system’s legal framework, administrative structure and design through document review. We did a cross-sectional study for the year 2013 at national level and in Punjab state to assess the quality of the system’s mortality data through analyses of life tables and investigation of the completeness of death registration and the proportion of deaths assigned ill-defined causes. We interviewed registrars, medical officers and coders in Punjab state to assess their knowledge and practice. Findings Although we found the legal framework and system design to be appropriate, data collection was based on complex intersectoral collaborations at state and local level and the collected data were found to be of poor quality. The registration data were inadequate for a robust estimate of mortality at national level. A medically certified cause of death was only recorded for 965 992 (16.8%) of the 5 735 082 deaths registered. Conclusion The data recorded by India’s civil registration and vital statistics system in 2011 were incomplete. If improved, the system could be used to reliably estimate mortality. We recommend improving political support and intersectoral coordination, capacity building, computerization and state-level initiatives to ensure that every death is registered and that reliable causes of death are recorded – at least within an adequate sample of registration units within each state. PMID:26769992

  3. Fighting bias with statistics: Detecting gender differences in responses to items on a preschool science assessment

    NASA Astrophysics Data System (ADS)

    Greenberg, Ariela Caren

    Differential item functioning (DIF) and differential distractor functioning (DDF) are methods used to screen for item bias (Camilli & Shepard, 1994; Penfield, 2008). Using an applied empirical example, this mixed-methods study examined the congruency and relationship of DIF and DDF methods in screening multiple-choice items. Data for Study I were drawn from item responses of 271 female and 236 male low-income children on a preschool science assessment. Item analyses employed a common statistical approach of the Mantel-Haenszel log-odds ratio (MH-LOR) to detect DIF in dichotomously scored items (Holland & Thayer, 1988), and extended the approach to identify DDF (Penfield, 2008). Findings demonstrated that using MH-LOR to detect DIF and DDF supported the theoretical relationship that the magnitude and form of DIF are dependent on the DDF effects, and demonstrated the advantages of studying DIF and DDF together in multiple-choice items. A total of 4 items with DIF and DDF and 5 items with only DDF were detected. Study II incorporated an item content review, an important but often overlooked and under-published step of DIF and DDF studies (Camilli & Shepard, 1994). Interviews with 25 female and 22 male low-income preschool children and an expert review helped to interpret the DIF and DDF results and their comparison, and determined that a content review process of studied items can reveal reasons for potential item bias that are often congruent with the statistical results. Patterns emerged and are discussed in detail. The quantitative and qualitative analyses were conducted in an applied framework of examining the validity of the preschool science assessment scores for evaluating science programs serving low-income children; however, the techniques can be generalized for use with measures across various disciplines of research.
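
    The core statistic in Study I, the Mantel-Haenszel common log-odds ratio across matching strata, is short to compute; a sketch with invented stratum tables (rows: females/males, columns: correct/incorrect):

        import math

        # Hypothetical 2x2 tables, one per matching stratum (total-score level).
        strata = [((30, 20), (35, 15)),
                  ((45, 15), (50, 10)),
                  ((25,  5), (28,  2))]

        num = den = 0.0
        for (a, b), (c, d) in strata:      # a, b: females; c, d: males
            total = a + b + c + d
            num += a * d / total
            den += b * c / total

        mh_lor = math.log(num / den)       # Mantel-Haenszel log-odds ratio
        print(f"MH-LOR = {mh_lor:.3f}")    # near 0 suggests no DIF for the item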

  4. Diagnostic Accuracy of Computed Tomography Angiography and Magnetic Resonance Angiography in the Stenosis Detection of Autologous Hemodialysis Access: A Meta-Analysis

    PubMed Central

    Liu, Shiyuan

    2013-01-01

    Purpose To compare the diagnostic performances of computed tomography angiography (CTA) and magnetic resonance angiography (MRA) for detection and assessment of stenosis in patients with autologous hemodialysis access. Materials and Methods Search of the PubMed, MEDLINE, EMBASE and Cochrane Library databases from January 1984 to May 2013 for studies comparing CTA or MRA with DSA or surgery for autologous hemodialysis access. Eligible studies were in English, aimed to detect more than 50% stenosis or occlusion of autologous vascular access in hemodialysis patients with CTA and MRA technology, and provided sufficient data about diagnostic performance. Methodological quality was assessed by the Quality Assessment of Diagnostic Studies (QUADAS) instrument. Sensitivities (SEN), specificities (SPE), positive likelihood ratios (PLR), negative likelihood ratios (NLR), diagnostic odds ratios (DOR) and areas under the receiver operator characteristic curve (AUC) were pooled statistically. Potential threshold effects, heterogeneity and publication bias were evaluated. The clinical utility of CTA and MRA in detection of stenosis was also investigated. Results Sixteen eligible studies were included, with a total of 500 patients. Both CTA and MRA were accurate modalities (sensitivity, 96.2% and 95.4%, respectively; specificity, 97.1% and 96.1%, respectively; diagnostic odds ratio [DOR], 393.69 and 211.47, respectively) for hemodialysis vascular access. No significant difference was detected between the diagnostic performance of CTA (AUC, 0.988) and MRA (AUC, 0.982). Meta-regression analyses and subgroup analyses revealed no statistical difference. The Deeks' funnel plot suggested a publication bias. Conclusion The diagnostic performance of CTA and MRA for detecting stenosis of hemodialysis vascular access showed no statistical difference. Both techniques may function as an alternative or an important complement to conventional digital subtraction angiography (DSA) and may be able to help guide medical management. PMID:24194928
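
    The diagnostic odds ratio pooled in such meta-analyses derives from each study's 2x2 table against the reference standard; as a worked sketch with invented counts:

        def diagnostic_odds_ratio(tp, fp, fn, tn):
            """DOR = (sens/(1-sens)) / ((1-spec)/spec), i.e. (TP*TN)/(FP*FN)."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return (sens / (1 - sens)) / ((1 - spec) / spec)

        # Hypothetical counts for one study, with DSA as the reference standard.
        print(f"DOR = {diagnostic_odds_ratio(tp=48, fp=2, fn=2, tn=58):.0f}")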

  5. MO-G-12A-01: Quantitative Imaging Metrology: What Should Be Assessed and How?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giger, M; Petrick, N; Obuchowski, N

    The first two symposia in the Quantitative Imaging Track focused on 1) the introduction of quantitative imaging (QI) challenges and opportunities, and QI efforts of agencies and organizations such as the RSNA, NCI, FDA, and NIST, and 2) the techniques, applications, and challenges of QI, with specific examples from CT, PET/CT, and MR. This third symposium in the QI Track will focus on metrology and its importance in successfully advancing the QI field. While the specific focus will be on QI, many of the concepts presented are more broadly applicable to many areas of medical physics research and applications. As such, the topics discussed should be of interest to medical physicists involved in imaging as well as therapy. The first talk of the session will focus on the introduction to metrology and why it is critically important in QI. The second talk will focus on appropriate methods for technical performance assessment. The third talk will address statistically valid methods for algorithm comparison, a common problem not only in QI but also in other areas of medical physics. The final talk in the session will address strategies for publication of results that will allow statistically valid meta-analyses, which is critical for combining results of individual studies with typically small sample sizes in a manner that can best inform decisions and advance the field. Learning Objectives: Understand the importance of metrology in the QI efforts. Understand appropriate methods for technical performance assessment. Understand methods for comparing algorithms with or without reference data (i.e., “ground truth”). Understand the challenges and importance of reporting results in a manner that allows for statistically valid meta-analyses.

  6. Neuropsychological study of IQ scores in offspring of parents with bipolar I disorder.

    PubMed

    Sharma, Aditya; Camilleri, Nigel; Grunze, Heinz; Barron, Evelyn; Le Couteur, James; Close, Andrew; Rushton, Steven; Kelly, Thomas; Ferrier, Ian Nicol; Le Couteur, Ann

    2017-01-01

    Studies comparing IQ in Offspring of Bipolar Parents (OBP) with Offspring of Healthy Controls (OHC) have reported conflicting findings. They have included OBP with mental health/neurodevelopmental disorders and/or pharmacological treatment, which could affect results. This UK study aimed to assess IQ in OBP with no mental health/neurodevelopmental disorder and to assess the relationship of sociodemographic variables with IQ. IQ data obtained using the Wechsler Abbreviated Scale of Intelligence (WASI) from 24 OBP and 34 OHC from the North East of England were analysed using mixed-effects modelling. All participants had IQ in the average range. OBP differed statistically significantly from OHC on Full Scale IQ (p = .001), Performance IQ (PIQ) (p = .003) and Verbal IQ (VIQ) (p = .001) but not on the PIQ-VIQ split. The OBP and OHC groups did not differ on socio-economic status (SES) or gender. SES made a statistically significant contribution to the variance of IQ scores (p = .001). Using a robust statistical model of analysis, the OBP with no current/past history of mental health/neurodevelopmental disorders had lower IQ scores compared to OHC. This finding should be borne in mind when assessing and recommending interventions for OBP.

  7. Cumulative risk assessment for combined health effects from chemical and nonchemical stressors.

    PubMed

    Sexton, Ken; Linder, Stephen H

    2011-12-01

    Cumulative risk assessment is a science policy tool for organizing and analyzing information to examine, characterize, and possibly quantify combined threats from multiple environmental stressors. We briefly survey the state of the art regarding cumulative risk assessment, emphasizing challenges and complexities of moving beyond the current focus on chemical mixtures to incorporate nonchemical stressors, such as poverty and discrimination, into the assessment paradigm. Theoretical frameworks for integrating nonchemical stressors into cumulative risk assessments are discussed, the impact of geospatial issues on interpreting results of statistical analyses is described, and four assessment methods are used to illustrate the diversity of current approaches. Prospects for future progress depend on adequate research support as well as development and verification of appropriate analytic frameworks.

  8. Cumulative Risk Assessment for Combined Health Effects From Chemical and Nonchemical Stressors

    PubMed Central

    Linder, Stephen H.

    2011-01-01

    Cumulative risk assessment is a science policy tool for organizing and analyzing information to examine, characterize, and possibly quantify combined threats from multiple environmental stressors. We briefly survey the state of the art regarding cumulative risk assessment, emphasizing challenges and complexities of moving beyond the current focus on chemical mixtures to incorporate nonchemical stressors, such as poverty and discrimination, into the assessment paradigm. Theoretical frameworks for integrating nonchemical stressors into cumulative risk assessments are discussed, the impact of geospatial issues on interpreting results of statistical analyses is described, and four assessment methods are used to illustrate the diversity of current approaches. Prospects for future progress depend on adequate research support as well as development and verification of appropriate analytic frameworks. PMID:21551386

  9. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
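
    In the spirit of the abstract's pipeline — fit a logistic model, then use bootstrapping to check for overfitting of the C-index — a compressed sketch on simulated surveillance-like data (all predictors and effect sizes invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        n = 5000
        X = np.column_stack([rng.normal(size=n),        # e.g. scaled operative duration
                             rng.integers(0, 2, n),     # e.g. wound class indicator
                             rng.integers(0, 2, n)])    # e.g. emergency procedure flag
        p = 1 / (1 + np.exp(-(-2.8 + 0.6*X[:, 0] + 0.8*X[:, 1])))
        y = (rng.random(n) < p).astype(int)             # binary SSI outcome

        model = LogisticRegression().fit(X, y)
        apparent_c = roc_auc_score(y, model.predict_proba(X)[:, 1])

        # Standard bootstrap estimate of optimism in the C-index.
        optimism = []
        for _ in range(200):
            idx = rng.integers(0, n, n)
            m = LogisticRegression().fit(X[idx], y[idx])
            c_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
            c_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(c_boot - c_orig)

        print(f"apparent C = {apparent_c:.3f}, "
              f"optimism-corrected C = {apparent_c - np.mean(optimism):.3f}")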

  10. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves.

    PubMed

    Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J

    2012-02-01

    The results of randomized controlled trials (RCTs) on time-to-event outcomes that are usually reported are the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from the published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and the numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analyses and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and the total number of events alongside KM curves.
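
    The heart of the algorithm is inverting the Kaplan-Meier product-limit relation S(t_i) = S(t_{i-1}) * (1 - d_i / n_i) for the event counts d_i. A deliberately simplified sketch — no censoring within intervals, which the published algorithm does handle using numbers at risk and total events — with invented curve coordinates:

        import numpy as np

        # Digitised KM coordinates: event times and survival probabilities.
        times = np.array([0.0, 2.0, 5.0, 9.0, 14.0])
        surv = np.array([1.00, 0.92, 0.81, 0.66, 0.50])
        n = 100                                          # number at risk at time zero

        for i in range(1, len(times)):
            d = round(n * (1 - surv[i] / surv[i - 1]))   # implied events at t_i
            print(f"t = {times[i]:5.1f}: ~{d} events, {n - d} still at risk")
            n -= d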

  11. Association of Personality Traits with Elder Self-Neglect in a Community Dwelling Population

    PubMed Central

    Dong, XinQi; Simon, Melissa; Wilson, Robert; Beck, Todd; McKinell, Kelly; Evans, Denis

    2010-01-01

    Objective Elder self-neglect is an important public health issue. However, little is known about the association between personality traits and risk of elder self-neglect among community-dwelling populations. The objectives of this study are: 1) to examine the association of personality traits with elder self-neglect and 2) to examine the association of personality traits with elder self-neglect severity. Methods Population-based study conducted from 1993–2005 of community-dwelling older adults (N=9,056) participating in the Chicago Health Aging Project (CHAP). Subsets of the CHAP participants (N=1,820) were identified for suspected self-neglect by social services agency, which assessed the severity. Personality traits assessed included neuroticism, extraversion, rigidity and information processing. Logistic and linear regressions were used to assess these associations. Results In the bivariate analyses, personality traits (neuroticism, extraversion, information processing, and rigidity) were significantly associated with increased risk of elder self-neglect. However, after adjusting for potential confounders, the above associations were no longer statistically significant. In addition, personality traits were not associated with increased risk of greater self-neglect severity. Furthermore, interaction term analyses of personality traits with health and psychosocial factors were not statistically significant with elder self-neglect outcomes. Conclusion Neuroticism, extraversion, rigidity and information processing were not associated with significantly increased risk of elder self-neglect after consideration of potential confounders. PMID:21788924

  12. Cognitive predictors of balance in Parkinson's disease.

    PubMed

    Fernandes, Ângela; Mendes, Andreia; Rocha, Nuno; Tavares, João Manuel R S

    2016-06-01

    Postural instability is one of the most incapacitating symptoms of Parkinson's disease (PD) and appears to be related to cognitive deficits. This study aims to determine the cognitive factors that can predict deficits in static and dynamic balance in individuals with PD. A sociodemographic questionnaire characterized the 52 individuals with PD studied in this work. The Trail Making Test, Rule Shift Cards Test, and Digit Span Test assessed the executive functions. Static balance was assessed using a plantar pressure platform, and dynamic balance was based on the Timed Up and Go Test. The results were statistically analysed using SPSS Statistics software through linear regression analysis. The results show that a statistically significant model based on cognitive outcomes was able to explain the variance of the motor variables. The explanatory value of the model tended to increase with the addition of individual and clinical variables, although the resulting model was not statistically significant. The model explained 25-29% of the variability of the Timed Up and Go Test, while for anteroposterior displacement it was 23-34%, and for mediolateral displacement it was 24-39%. From these findings, we conclude that cognitive performance, especially the executive functions, is a predictor of balance deficit in individuals with PD.

  13. Statistical Primer on Biosimilar Clinical Development.

    PubMed

    Isakov, Leah; Jin, Bo; Jacobs, Ira Allen

    A biosimilar is highly similar to a licensed (reference or originator) biological product, has no clinically meaningful differences from the reference product in terms of safety, purity, and potency, and is approved under specific regulatory approval processes. Because both the originator and the potential biosimilar are large and structurally complex proteins, biosimilars are not generic equivalents of the originator. Thus, the regulatory approach for a small-molecule generic is not appropriate for a potential biosimilar. As a result, different study designs and statistical approaches are used in the assessment of a potential biosimilar. This review covers concepts and terminology used in statistical analyses in the clinical development of biosimilars so that clinicians can understand how similarity is evaluated. This should allow the clinician to understand the statistical considerations in biosimilar clinical trials and make informed prescribing decisions when an approved biosimilar is available.

  14. Weighting of the data and analytical approaches may account for differences in overcoming the inadequate representativeness of the respondents to the third wave of a cohort study.

    PubMed

    Taylor, Anne W; Dal Grande, Eleonora; Grant, Janet; Appleton, Sarah; Gill, Tiffany K; Shi, Zumin; Adams, Robert J

    2013-04-01

    Attrition in cohort studies can cause the data to be nonreflective of the original population. Although this is of little concern when intragroup comparisons are made or cause and effect is assessed, bias was assessed in this study so that intergroup or descriptive analyses could be undertaken. The North West Adelaide Health Study is a chronic disease and risk factor cohort study undertaken in Adelaide, South Australia. In the original wave (1999), clinical and self-report data were collected from 4,056 adults. In the third wave (2008-2010), 2,710 adults were still actively involved. Comparisons were made against two other data sources: the Australian Bureau of Statistics Estimated Residential Population and a regularly conducted chronic disease and risk factor surveillance system. Comparisons of demographics (age, sex, area, education, work status, and income) showed statistically significant differences. In addition, smoking status, body mass index, and general health status differed statistically significantly from the comparison group. No statistically significant differences were found for alcohol risk. Although the third wave of this cohort study is not representative of the broader population on the variables assessed, weighting of the data and analytical approaches can account for the differences. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Development of a Self-Report Physical Function Instrument for Disability Assessment: Item Pool Construction and Factor Analysis

    PubMed Central

    McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M.; Rasch, Elizabeth K.

    2014-01-01

    Objectives To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Design Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. Setting In-person and semi-structured interviews; internet and telephone surveys. Participants A sample of 1,017 SSA claimants, and a normative sample of 999 adults from the US general population. Interventions Not Applicable. Main Outcome Measure Model fit statistics. Results The final item pool consisted of 139 items. Within the claimant sample 58.7% were white; 31.8% were black; 46.6% were female; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution which included more items and allowed separate characterization of: 1) Changing and Maintaining Body Position, 2) Whole Body Mobility, 3) Upper Body Function and 4) Upper Extremity Fine Motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples respectively were: Comparative Fit Index = 0.93 and 0.98; Tucker-Lewis Index = 0.92 and 0.98; Root Mean Square Error Approximation = 0.05 and 0.04. Conclusions The factor structure of the Physical Function item pool closely resembled the hypothesized content model. The four scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. PMID:23542402
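
    As a minimal stand-in for the exploratory step (the project's actual analyses, including the confirmatory models, were far more involved), the sketch below fits a 4-factor EFA with the factor_analyzer package to simulated item responses built to have a clean 4-factor structure:

        import numpy as np
        import pandas as pd
        from factor_analyzer import FactorAnalyzer

        rng = np.random.default_rng(8)
        n_resp, n_items = 500, 12

        # Simulate 12 items driven by 4 latent factors (3 items per factor).
        latent = rng.normal(size=(n_resp, 4))
        loadings = np.zeros((n_items, 4))
        for f in range(4):
            loadings[3*f:3*f + 3, f] = 0.8
        items = latent @ loadings.T + rng.normal(0, 0.5, (n_resp, n_items))

        fa = FactorAnalyzer(n_factors=4, rotation="promax")
        fa.fit(pd.DataFrame(items))
        print(np.round(fa.loadings_, 2))  # each item should load on its own factor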

  16. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) for the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
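
    The quoted formula is direct arithmetic on the two pixel areas returned by the Image J measurements; as a sketch (areas invented):

        def perforation_percentage(perforation_px2: float, membrane_px2: float) -> float:
            """P/T x 100 per cent, where P is the perforation area and T the total
            tympanic membrane area, both in squared pixels."""
            return perforation_px2 / membrane_px2 * 100.0

        # Hypothetical areas traced with the Image J geometrical analysis tools.
        print(f"perforation: {perforation_percentage(15_400, 182_000):.1f}% of the membrane")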

  17. Development of a self-report physical function instrument for disability assessment: item pool construction and factor analysis.

    PubMed

    McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M; Rasch, Elizabeth K

    2013-09-01

    To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. In-person and semistructured interviews and Internet and telephone surveys. Sample of SSA claimants (n=1017) and a normative sample of adults from the U.S. general population (n=999). Not applicable. Model fit statistics. The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white; 31.8% were black; 46.6% were women; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution, which included more items and allowed separate characterization of: (1) changing and maintaining body position, (2) whole body mobility, (3) upper body function, and (4) upper extremity fine motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples, respectively, were: Comparative Fit Index=.93 and .98; Tucker-Lewis Index=.92 and .98; and root mean square error approximation=.05 and .04. The factor structure of the physical function item pool closely resembled the hypothesized content model. The 4 scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  18. Grey literature in meta-analyses.

    PubMed

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  19. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  20. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.
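
    As one concrete reading of recommendation (2), the sketch below runs a global (Sobol) sensitivity analysis of a toy one-compartment oral-dosing model with the SALib package; the model, bounds and parameter names are all illustrative, not the workshop's.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy one-compartment model: plasma concentration 2 h after a 100 mg oral dose.
        problem = {"num_vars": 3,
                   "names": ["clearance", "volume", "ka"],   # L/h, L, 1/h
                   "bounds": [[2.0, 10.0], [20.0, 60.0], [0.6, 2.0]]}

        def conc_2h(params, dose=100.0, t=2.0):
            cl, v, ka = params
            ke = cl / v
            return dose * ka / (v * (ka - ke)) * (np.exp(-ke*t) - np.exp(-ka*t))

        X = saltelli.sample(problem, 1024)
        Y = np.apply_along_axis(conc_2h, 1, X)
        Si = sobol.analyze(problem, Y)
        # First-order Sobol indices: share of output variance from each input alone.
        print(dict(zip(problem["names"], np.round(Si["S1"], 2))))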

  1. Assessment of metals bioavailability to vegetables under field conditions using DGT, single extractions and multivariate statistics

    PubMed Central

    2012-01-01

    Background The metals bioavailability in soils is commonly assessed by chemical extractions; however, a generally accepted method is not yet established. In this study, the effectiveness of the Diffusive Gradients in Thin-films (DGT) technique and single extractions in the assessment of metals bioaccumulation in vegetables, and the influence of soil parameters on phytoavailability, were evaluated using multivariate statistics. Soil and plants grown in vegetable gardens from mining-affected rural areas, NW Romania, were collected and analysed. Results Pseudo-total metal content of Cu, Zn and Cd in soil ranged between 17.3-146 mg kg⁻¹, 141–833 mg kg⁻¹ and 0.15-2.05 mg kg⁻¹, respectively, showing enriched contents of these elements. High degrees of metals extractability in 1M HCl and even in 1M NH4Cl were observed. Despite the relatively high total metal concentrations in soil, those found in vegetables were comparable to values typically reported for agricultural crops, probably due to the low concentrations of metals in soil solution (Csoln) and low effective concentrations (CE), assessed by the DGT technique. Among the analysed vegetables, the highest metal concentrations were found in carrot roots. By applying multivariate statistics, it was found that CE, Csoln and extraction in 1M NH4Cl were better predictors of metals bioavailability than the acid extractions applied in this study. Copper transfer to vegetables was strongly influenced by soil organic carbon (OC) and cation exchange capacity (CEC), while pH had a higher influence on Cd transfer from soil to plants. Conclusions The results showed that DGT can be used for general evaluation of the risks associated with soil contamination with Cu, Zn and Cd in field conditions, although it did not yield quantitative information on metal transfer from soil to vegetables. PMID:23079133

  2. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazards (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazards assumption of the model. This paper presents the results of a simulation study comparing five test statistics for checking the proportional hazards assumption of Cox's model. The test statistics were evaluated under proportional hazards and under the following types of departure from the proportional hazards assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards; and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detecting non-proportionality across the varieties of non-proportional hazards studied. When applied to illustrative data from the literature, these test statistics performed similarly.
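
    Two of the best-performing tests here, the time-dependent covariate test and the weighted (scaled Schoenfeld) residuals score test, are implemented in standard survival libraries. A minimal sketch using the Python lifelines package and its bundled Rossi recidivism data, both our illustrative choices rather than anything used in the paper:

    ```python
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi
    from lifelines.statistics import proportional_hazard_test

    df = load_rossi()
    cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

    # Scaled Schoenfeld residual score test of the PH assumption,
    # one test per covariate; 'rank' transforms the failure times
    results = proportional_hazard_test(cph, df, time_transform="rank")
    results.print_summary()
    ```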

  3. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
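
    The heart of the approximate statistical moment method is first-order mean/variance propagation through the code, validated against Monte Carlo. A toy version with a cheap nonlinear function standing in for the Euler solver (the function and input statistics are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x1, x2):
        """Cheap nonlinear stand-in for the CFD output functional."""
        return x1**2 * np.sin(x2) + x2

    mu, sigma = np.array([1.0, 0.5]), np.array([0.05, 0.02])

    # First-order moment matching: mean ~ f(mu), var ~ sum (df/dx_i)^2 s_i^2
    h = 1e-6
    g1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
    g2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)
    var_lin = (g1 * sigma[0])**2 + (g2 * sigma[1])**2
    print(f"linearized: mean={f(*mu):.5f}  sd={np.sqrt(var_lin):.5f}")

    # Monte Carlo reference, mirroring the paper's validity check
    x = rng.normal(mu, sigma, size=(200_000, 2))
    y = f(x[:, 0], x[:, 1])
    print(f"MC:         mean={y.mean():.5f}  sd={y.std():.5f}")
    ```

    In the paper the derivatives come from the solver's sensitivity equations rather than finite differences, which is what makes the approach affordable inside a robust optimization loop.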

  4. Physiotherapy triage assessment of patients referred for orthopaedic consultation - Long-term follow-up of health-related quality of life, pain-related disability and sick leave.

    PubMed

    Samsson, Karin S; Larsson, Maria E H

    2015-02-01

    The literature indicates that physiotherapy triage assessment can be efficient for patients referred for orthopaedic consultation, however long-term follow up of patient reported outcome measures are not available. To report a long-term evaluation of patient-reported health-related quality of life, pain-related disability, and sick leave after a physiotherapy triage assessment of patients referred for orthopaedic consultation compared with standard practice. Patients referred for orthopaedic consultation (n = 208) were randomised to physiotherapy triage assessment or standard practice. The randomised cohort was analysed on an intention-to-treat (ITT) basis. The patient reported outcome measures EuroQol VAS (self-reported health-state), EuroQol 5D-3L (EQ-5D) and Pain Disability Index (PDI) were assessed at baseline and after 3, 6 and 12 months. EQ VAS was analysed using a repeated measure ANOVA. PDI and EQ-5D were analysed using a marginal logistic regression model. Sick leave was analysed for the 12 months following consultation using a Mann-Whitney U-test. The patients rated a significantly better health-state at 3 after physiotherapy triage assessment [mean difference -5.7 (95% CI -11.1; -0.2); p = 0.04]. There were no other statistically significant differences in perceived health-related quality of life or pain related disability between the groups at any of the follow-ups, or sick leave. This study reports that the long-term follow up of the patient related outcome measures health-related quality of life, pain-related disability and sick leave after physiotherapy triage assessment did not differ from standard practice, indicating the possible benefits of implementation of this model of care. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Overweight and Obesity: Prevalence and Correlates in a Large Clinical Sample of Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Zuckerman, Katharine E.; Hill, Alison P.; Guion, Kimberly; Voltolina, Lisa; Fombonne, Eric

    2014-01-01

    Autism Spectrum Disorders (ASDs) and childhood obesity (OBY) are rising public health concerns. This study aimed to evaluate the prevalence of overweight (OWT) and OBY in a sample of 376 Oregon children with ASD, and to assess correlates of OWT and OBY in this sample. We used descriptive statistics, bivariate, and focused multivariate analyses to…

  6. Superheavy-element spectroscopy: Correlations along element 115 decay chains

    NASA Astrophysics Data System (ADS)

    Rudolph, D.; Forsberg, U.; Sarmiento, L. G.; Golubev, P.; Fahlander, C.

    2016-05-01

    Following a brief summary of the region of the heaviest atomic nuclei yet created in the laboratory, data on more than one hundred α-decay chains associated with the production of element 115 are combined to investigate time and energy correlations along the observed decay chains. Several of these chains are analysed using a new method for statistical assessments of lifetimes in sets of decay chains.
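
    The paper's new method is not reproduced here, but the basic idea behind such assessments can be sketched: lifetimes drawn from a single exponential decay have a known dispersion of log-lifetimes (the standard deviation of ln t is about 1.28), so the spread observed in a set of chains can be compared with that expectation. All numbers below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sigma_log_lifetimes(times):
        """Std dev of ln(t); ~1.28 if times come from one exponential."""
        return np.log(times).std(ddof=1)

    # Hypothetical lifetimes (s) for one step of the chain, across chains
    observed = np.array([0.21, 0.35, 0.12, 0.48, 0.09, 0.30])
    print(f"observed sigma = {sigma_log_lifetimes(observed):.2f}")

    # Simulated reference distribution for the same sample size
    n = len(observed)
    sims = np.array([sigma_log_lifetimes(rng.exponential(0.25, n))
                     for _ in range(10_000)])
    lo, hi = np.percentile(sims, [2.5, 97.5])
    print(f"expected ~1.28, 95% band for n={n}: [{lo:.2f}, {hi:.2f}]")
    ```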

  7. An Assessment of the Impact of the Department of Defense Very-High-Speed Integrated Circuit Program.

    DTIC Science & Technology

    1982-01-01

    analysis, statistical inference, device physics and other such products of basic research. Examples of such information would be: analyses of properties of... For an n-p-n silicon transistor with 10¹⁸ cm⁻³ base doping, the base transit time τ_B = W_B²/(2D_n) becomes 0.4 ps in this limit, so that the base contributes little to delay

  8. Adventures in Uncertainty: An Empirical Investigation of the Use of a Taylor's Series Approximation for the Assessment of Sampling Errors in Educational Research.

    ERIC Educational Resources Information Center

    Wilson, Mark

    This study investigates the accuracy of the Woodruff-Causey technique for estimating sampling errors for complex statistics. The technique may be applied when data are collected by using multistage clustered samples. The technique was chosen for study because of its relevance to the correct use of multivariate analyses in educational survey…
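
    The Taylor's series (linearization) idea can be shown compactly for a ratio mean under single-stage cluster sampling. The sketch below uses synthetic data and the textbook linearized variance estimator, which is illustrative of, but not necessarily identical to, the Woodruff-Causey technique studied here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic cluster sample: test-score totals within 30 schools
    sizes = rng.integers(15, 25, size=30)                 # pupils per school
    y_tot = np.array([rng.normal(55.0, 8.0, m).sum() for m in sizes])
    x_tot = sizes.astype(float)

    # Ratio mean (score per pupil) and its linearized variance:
    # z_c = y_c - R*x_c sums to zero, so v(R) = n/(n-1) * sum(z_c^2) / X^2
    R = y_tot.sum() / x_tot.sum()
    z = y_tot - R * x_tot
    n = len(sizes)
    var_R = n / (n - 1) * (z**2).sum() / x_tot.sum() ** 2
    print(f"ratio mean = {R:.2f}, linearized SE = {np.sqrt(var_R):.3f}")
    ```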

  9. An application of Social Values for Ecosystem Services (SolVES) to three national forests in Colorado and Wyoming

    USGS Publications Warehouse

    Sherrouse, Benson C.; Semmens, Darius J.; Clement, Jessica M.

    2014-01-01

    Despite widespread recognition that social-value information is needed to inform stakeholders and decision makers regarding trade-offs in environmental management, it too often remains absent from ecosystem service assessments. Although quantitative indicators of social values need to be explicitly accounted for in the decision-making process, they need not be monetary. Ongoing efforts to map such values demonstrate how they can also be made spatially explicit and relatable to underlying ecological information. We originally developed Social Values for Ecosystem Services (SolVES) as a tool to assess, map, and quantify nonmarket values perceived by various groups of ecosystem stakeholders. With SolVES 2.0 we have extended the functionality by integrating SolVES with Maxent maximum entropy modeling software to generate more complete social-value maps from available value and preference survey data and to produce more robust models describing the relationship between social values and ecosystems. The current study has two objectives: (1) evaluate how effectively the value index, a quantitative, nonmonetary social-value indicator calculated by SolVES, reproduces results from more common statistical methods of social-survey data analysis and (2) examine how the spatial results produced by SolVES provide additional information that could be used by managers and stakeholders to better understand more complex relationships among stakeholder values, attitudes, and preferences. To achieve these objectives, we applied SolVES to value and preference survey data collected for three national forests, the Pike and San Isabel in Colorado and the Bridger–Teton and the Shoshone in Wyoming. Value index results were generally consistent with results found through more common statistical analyses of the survey data such as frequency, discriminant function, and correlation analyses. In addition, spatial analysis of the social-value maps produced by SolVES provided information that was useful for explaining relationships between stakeholder values and forest uses. Our results suggest that SolVES can effectively reproduce information derived from traditional statistical analyses while adding spatially explicit, social-value information that can contribute to integrated resource assessment, planning, and management of forests and other ecosystems.

  10. Statistical analysis plan of the head position in acute ischemic stroke trial pilot (HEADPOST pilot).

    PubMed

    Olavarría, Verónica V; Arima, Hisatomi; Anderson, Craig S; Brunser, Alejandro; Muñoz-Venturelli, Paula; Billot, Laurent; Lavados, Pablo M

    2017-02-01

    Background The HEADPOST Pilot is a proof-of-concept, open, prospective, multicenter, international, cluster randomized, phase IIb controlled trial, with masked outcome assessment. The trial will test whether the lying-flat head position, initiated within 12 h of onset of acute ischemic stroke involving the anterior circulation, increases cerebral blood flow in the middle cerebral arteries, as measured by transcranial Doppler. The study will also assess the safety and feasibility of patients lying flat for ≥24 h. The trial was conducted in centers in three countries with the ability to perform early transcranial Doppler. A feature of this trial was that patients were randomized to a certain position according to the month of admission to hospital. Objective To outline in detail the predetermined statistical analysis plan for the HEADPOST Pilot study. Methods All data collected by participating researchers will be reviewed and formally assessed. Information pertaining to the baseline characteristics of patients, their process of care, and the delivery of treatments will be classified, and for each item, appropriate descriptive statistical analyses are planned with comparisons made between randomized groups. For the outcomes, statistical comparisons to be made between groups are planned and described. Results This statistical analysis plan was developed for the analysis of the results of the HEADPOST Pilot study to be transparent, available, verifiable, and predetermined before data lock. Conclusions We have developed a statistical analysis plan for the HEADPOST Pilot study which is to be followed to avoid analysis bias arising from prior knowledge of the study findings. Trial registration The study is registered under HEADPOST-Pilot, ClinicalTrials.gov Identifier NCT01706094.

  11. Surgical adverse outcome reporting as part of routine clinical care.

    PubMed

    Kievit, J; Krukerink, M; Marang-van de Mheen, P J

    2010-12-01

    In The Netherlands, health professionals have created a doctor-driven standardised system to report and analyse adverse outcomes (AO). The aim is to improve healthcare by learning from past experiences. The key elements of this system are (1) an unequivocal definition of an adverse outcome, (2) appropriate contextual information and (3) a three-dimensional hierarchical classification system. First, to assess whether routine doctor-driven AO reporting is feasible. Second, to investigate how doctors can learn from AO reporting and analysis to improve the quality of care. Feasibility was assessed by how well doctors reported AO in the surgical department of a Dutch university hospital over a period of 9 years. AO incidence was analysed per patient subgroup and over time, in a time-trend analysis of three equal 3-year periods. AO were analysed case by case and statistically, to learn lessons from past events. In 19,907 surgical admissions, 9189 AOs were reported: one or more AO in 18.2% of admissions. On average, 55 lessons were learnt each year (in 4.3% of AO). More AO were reported in P3 than P1 (OR 1.39 (1.23-1.57)). Although minor AO increased, fatal AO decreased over time (OR 0.59 (0.45-0.77)). Doctor-driven AO reporting is shown to be feasible. Lessons can be learnt from case-by-case analyses of individual AO, as well as by statistical analysis of AO groups and subgroups (illustrated by time-trend analysis), thus contributing to the improvement of the quality of care. Moreover, by standardising AO reporting, data can be compared across departments or hospitals, to generate (confidential) mirror information for professionals cooperating in a peer-review setting.

  12. Surgical adverse outcome reporting as part of routine clinical care

    PubMed Central

    Krukerink, M; Marang-van de Mheen, P J

    2010-01-01

    Background In The Netherlands, health professionals have created a doctor-driven standardised system to report and analyse adverse outcomes (AO). The aim is to improve healthcare by learning from past experiences. The key elements of this system are (1) an unequivocal definition of an adverse outcome, (2) appropriate contextual information and (3) a three-dimensional hierarchical classification system. Objectives First, to assess whether routine doctor-driven AO reporting is feasible. Second, to investigate how doctors can learn from AO reporting and analysis to improve the quality of care. Methods Feasibility was assessed by how well doctors reported AO in the surgical department of a Dutch university hospital over a period of 9 years. AO incidence was analysed per patient subgroup and over time, in a time-trend analysis of three equal 3-year periods. AO were analysed case by case and statistically, to learn lessons from past events. Results In 19 907 surgical admissions, 9189 AOs were reported: one or more AO in 18.2% of admissions. On average, 55 lessons were learnt each year (in 4.3% of AO). More AO were reported in P3 than P1 (OR 1.39 (1.23–1.57)). Although minor AO increased, fatal AO decreased over time (OR 0.59 (0.45–0.77)). Conclusions Doctor-driven AO reporting is shown to be feasible. Lessons can be learnt from case-by-case analyses of individual AO, as well as by statistical analysis of AO groups and subgroups (illustrated by time-trend analysis), thus contributing to the improvement of the quality of care. Moreover, by standardising AO reporting, data can be compared across departments or hospitals, to generate (confidential) mirror information for professionals cooperating in a peer-review setting. PMID:20430928

  13. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods, and are closely related to processes, the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, establishing an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, which, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, checks all the process components leading to the unpredictable outcome, not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, are the third step of the evaluation cycle. The Hawthorne effect is an improvement in outcomes that occurs before any corrective actions have been implemented. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.

  14. Tempo-spatial analysis of Fennoscandian intraplate seismicity

    NASA Astrophysics Data System (ADS)

    Roberts, Roland; Lund, Björn

    2017-04-01

    Coupled spatial-temporal patterns of earthquake occurrence in Fennoscandia are analysed using non-parametric methods. The occurrence of larger events is unambiguously and very strongly temporally clustered, with major implications for the assessment of seismic hazard in areas such as Fennoscandia. In addition, there is a clear pattern of geographical migration of activity. Data from the Swedish National Seismic Network and a collated international catalogue are analysed. Results show consistent patterns on different spatial and temporal scales. We are currently investigating these patterns in order to assess the statistical significance of the tempo-spatial patterns, and to what extent they may be consistent with stress transfer mechanisms such as Coulomb stress and pore fluid migration. Indications are that some further mechanism is necessary to explain the data, perhaps related to post-glacial uplift, which reaches up to 1 cm/year.

  15. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 ± 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated-measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained in the indirect values of the N-Sn parameter for female subjects and the Sn-Me parameter for male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes except for one in male subjects. The indirect values measured were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.

  16. [Comorbidity of different forms of anxiety disorders and depression].

    PubMed

    Małyszczak, Krzysztof; Szechiński, Marcin

    2004-01-01

    Comorbidity of some anxiety disorders and depression was examined in order to compare their statistical closeness. Patients treated in an out-patient care center for psychiatric disorders and/or family medicine were recruited. Persons whose anxiety and depressive symptoms were a consequence of somatic illnesses or of other psychiatric disorders were excluded. Disorders were diagnosed with a diagnostic questionnaire based on the Schedule for Assessment in Neuropsychiatry (SCAN), version 2.0, according to ICD-10 criteria. Analyses included selected disorders: generalized anxiety disorder, panic disorder, agoraphobia, specific phobias, social phobia and depression. 104 patients were included; 35 of them (33.7%) had anxiety disorders and 13 (12.5%) had depression. Analyses showed that in patients with generalized anxiety disorder, depression occurred at least twice as often as in the remaining patients (odds ratio = 7.1), while in patients with agoraphobia the occurrence of panic disorder increased at least 2.88-fold (odds ratio = 11.9). For the other disorders the odds ratios were greater than 1, but the differences were not statistically significant. Depression/generalized anxiety disorder and agoraphobia/panic disorder were shown to be statistically closer than the other disorders.

  17. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 out of the 53 analysed stations. Two of these stations are located in the northeast coast of Madagascar and the other five stations are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity in stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering, however, it also exists near the East African Rift System structures where no triggering was observed. Our results suggest that factors other than solely tectonic regime and geothermalism are needed to explain the mechanisms that underlie earthquake triggering.
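
    A common binomial form of the β-statistic compares the event count in a window after the mainshock with the count expected from the station's long-term rate; |β| > 2 is a frequently used significance cutoff. The numbers below are illustrative, not the study's.

    ```python
    import numpy as np

    def beta_statistic(n_after, n_total, t_after, t_total):
        """Binomial-form beta statistic for a rate change: excess of
        events in the post-mainshock window over the long-term rate."""
        p = t_after / t_total                  # fraction of time in window
        expected = n_total * p
        return (n_after - expected) / np.sqrt(n_total * p * (1 - p))

    # Hypothetical station: 120 events in 30 days, 14 of them in the
    # day following the arrival of the teleseismic waves
    beta = beta_statistic(n_after=14, n_total=120, t_after=1.0, t_total=30.0)
    print(f"beta = {beta:.2f}")
    ```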

  18. Methodologic quality of meta-analyses and systematic reviews on the Mediterranean diet and cardiovascular disease outcomes: a review.

    PubMed

    Huedo-Medina, Tania B; Garcia, Marissa; Bihuniak, Jessica D; Kenny, Anne; Kerstetter, Jane

    2016-03-01

    Several systematic reviews/meta-analyses published within the past 10 y have examined the associations of Mediterranean-style diets (MedSDs) on cardiovascular disease (CVD) risk. However, these reviews have not been evaluated for satisfying contemporary methodologic quality standards. This study evaluated the quality of recent systematic reviews/meta-analyses on MedSD and CVD risk outcomes by using an established methodologic quality scale. The relation between review quality and impact per publication value of the journal in which the article had been published was also evaluated. To assess compliance with current standards, we applied a modified version of the Assessment of Multiple Systematic Reviews (AMSTARMedSD) quality scale to systematic reviews/meta-analyses retrieved from electronic databases that had met our selection criteria: 1) used systematic or meta-analytic procedures to review the literature, 2) examined MedSD trials, and 3) had MedSD interventions independently or combined with other interventions. Reviews completely satisfied from 8% to 75% of the AMSTARMedSD items (mean ± SD: 31.2% ± 19.4%), with those published in higher-impact journals having greater quality scores. At a minimum, 60% of the 24 reviews did not disclose full search details or apply appropriate statistical methods to combine study findings. Only 5 of the reviews included participant or study characteristics in their analyses, and none evaluated MedSD diet characteristics. These data suggest that current meta-analyses/systematic reviews evaluating the effect of MedSD on CVD risk do not fully comply with contemporary methodologic quality standards. As a result, there are more research questions to answer to enhance our understanding of how MedSD affects CVD risk or how these effects may be modified by the participant or MedSD characteristics. To clarify the associations between MedSD and CVD risk, future meta-analyses and systematic reviews should not only follow methodologic quality standards but also include more statistical modeling results when data allow. © 2016 American Society for Nutrition.

  19. Development and validation of the Learning Disabilities Needs Assessment Tool (LDNAT), a HoNOS-based needs assessment tool for use with people with intellectual disability.

    PubMed

    Painter, J; Trevithick, L; Hastings, R P; Ingham, B; Roy, A

    2016-12-01

    In meeting the needs of individuals with intellectual disabilities (ID) who access health services, a brief, holistic assessment of need is useful. This study outlines the development and testing of the Learning Disabilities Needs Assessment Tool (LDNAT), a tool intended for this purpose. An existing mental health (MH) tool was extended by a multidisciplinary group of ID practitioners. Additional scales were drafted to capture needs across six ID treatment domains that the group identified. LDNAT ratings were analysed for the following: item redundancy, relevance, construct validity and internal consistency (n = 1692); test-retest reliability (n = 27); and concurrent validity (n = 160). All LDNAT scales were deemed clinically relevant with little redundancy apparent. Principal component analysis indicated three components (developmental needs, challenging behaviour, MH and well-being). Internal consistency was good (Cronbach alpha 0.80). Individual item test-retest reliability was substantial to near-perfect for 20 scales and slight to fair for three scales. Overall reliability was near perfect (intra-class correlation = 0.91). There were significant associations with five of six condition-specific measures, i.e. the Waisman Activities of Daily Living Scale (general ability/disability), Threshold Assessment Grid (risk), Behaviour Problems Inventory for Individuals with Intellectual Disabilities-Short Form (challenging behaviour), Social Communication Questionnaire (autism) and a bespoke physical health questionnaire. Additionally, the statistically significant correlations between these tools and the LDNAT components made sense clinically. There were no statistically significant correlations with the Psychiatric Assessment Schedules for Adults with Developmental Disabilities (a measure of MH symptoms in people with ID). The LDNAT had clinical utility when rating the needs of people with ID prior to condition-specific assessment(s). Analyses of internal and external validity were promising. Further evaluation of its sensitivity to changes in needs is now required. © 2016 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
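
    For reference, the internal-consistency figure quoted above is Cronbach's alpha, which is straightforward to compute from an item-response matrix. The matrix below is simulated; its 23 columns merely echo the number of LDNAT scales mentioned.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_scales) matrix of ratings."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(200, 1))                    # shared trait
    items = latent + rng.normal(0, 0.8, size=(200, 23))   # 23 scales
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```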

  20. Stepping inside the niche: microclimate data are critical for accurate assessment of species' vulnerability to climate change.

    PubMed

    Storlie, Collin; Merino-Viteri, Andres; Phillips, Ben; VanDerWal, Jeremy; Welbergen, Justin; Williams, Stephen

    2014-09-01

    To assess a species' vulnerability to climate change, we commonly use mapped environmental data that are coarsely resolved in time and space. Coarsely resolved temperature data are typically inaccurate at predicting temperatures in microhabitats used by an organism and may also exhibit spatial bias in topographically complex areas. One consequence of these inaccuracies is that coarsely resolved layers may predict thermal regimes at a site that exceed species' known thermal limits. In this study, we use statistical downscaling to account for environmental factors and develop high-resolution estimates of daily maximum temperatures for a 36 000 km² study area over a 38-year period. We then demonstrate that this statistical downscaling provides temperature estimates that consistently place focal species within their fundamental thermal niche, whereas coarsely resolved layers do not. Our results highlight the need for incorporation of fine-scale weather data into species' vulnerability analyses and demonstrate that a statistical downscaling approach can yield biologically relevant estimates of thermal regimes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
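
    One simple form of statistical downscaling regresses station-observed daily maxima on the coarse grid value plus fine-scale covariates such as elevation and canopy cover. The sketch below is a schematic of that regression idea with synthetic data and invented coefficients, not the authors' model.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)

    # Synthetic training data: coarse-grid Tmax, site elevation (km),
    # canopy cover fraction -> logger-observed microhabitat Tmax
    n = 5000
    coarse_t = rng.normal(30.0, 4.0, n)
    elev_km = rng.uniform(0.0, 1.2, n)
    canopy = rng.uniform(0.0, 1.0, n)
    site_t = (coarse_t - 6.0 * elev_km - 3.5 * canopy
              + rng.normal(0.0, 0.8, n))          # invented "truth"

    X = np.column_stack([coarse_t, elev_km, canopy])
    model = LinearRegression().fit(X, site_t)

    # Downscaled estimate for a shaded, high-elevation site: well
    # below the 32 C value the coarse grid alone would suggest
    print(model.predict([[32.0, 1.0, 0.9]]))
    ```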

  1. Reporting characteristics of meta-analyses in orthodontics: methodological assessment and statistical recommendations.

    PubMed

    Papageorgiou, Spyridon N; Papadopoulos, Moschos A; Athanasiou, Athanasios E

    2014-02-01

    Ideally, meta-analyses (MAs) should consolidate the characteristics of orthodontic research in order to produce an evidence-based answer. However, severe flaws are frequently observed in most of them. The aim of this study was to evaluate the statistical methods, the methodology, and the quality characteristics of orthodontic MAs and to assess their reporting quality in recent years. Electronic databases were searched for MAs (with or without a proper systematic review) in the field of orthodontics, indexed up to 2011. The AMSTAR tool was used for quality assessment of the included articles. Data were analyzed with Student's t-test, one-way ANOVA, and generalized linear modelling. Risk ratios with 95% confidence intervals were calculated to represent changes during the years in reporting of key items associated with quality. A total of 80 MAs with 1086 primary studies were included in this evaluation. Using the AMSTAR tool, 25 (31.3%) of the MAs were found to be of low quality, 37 (46.3%) of medium quality, and 18 (22.5%) of high quality. Specific characteristics like explicit protocol definition, extensive searches, and quality assessment of included trials were associated with a higher AMSTAR score. Model selection and dealing with heterogeneity or publication bias were often problematic in the identified reviews. The number of published orthodontic MAs is constantly increasing, while their overall quality is considered to range from low to medium. Although the number of MAs of medium and high level seems lately to rise, several other aspects need improvement to increase their overall quality.

  2. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The next two steps are the most distinctive. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank-ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
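
    The parameter-screening idea can be sketched with an ordinary canonical correlation analysis: rank the inputs by their loadings on the leading canonical directions relating the parameter matrix to the full response vector. The code below uses scikit-learn's plain CCA on synthetic data, not the approximate variant (ACCA) developed in the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(5)

    # Synthetic local parameter space (6 inputs), 3 component responses
    n = 400
    X = rng.normal(size=(n, 6))
    Y = np.column_stack([
        2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, n),
        -1.5 * X[:, 0] + X[:, 1] + rng.normal(0, 0.3, n),
        0.8 * X[:, 1] + rng.normal(0, 0.3, n),
    ])

    cca = CCA(n_components=2).fit(X, Y)
    # Rank inputs by loading magnitude on the leading canonical
    # directions; inputs 0 and 1 should dominate by construction
    importance = np.abs(cca.x_weights_).sum(axis=1)
    print(np.argsort(importance)[::-1])
    ```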

  3. Statistical evaluation of surrogate endpoints with examples from cancer clinical trials.

    PubMed

    Buyse, Marc; Molenberghs, Geert; Paoletti, Xavier; Oba, Koji; Alonso, Ariel; Van der Elst, Wim; Burzykowski, Tomasz

    2016-01-01

    A surrogate endpoint is intended to replace a clinical endpoint for the evaluation of new treatments when it can be measured more cheaply, more conveniently, more frequently, or earlier than that clinical endpoint. A surrogate endpoint is expected to predict clinical benefit, harm, or lack of these. Besides the biological plausibility of a surrogate, a quantitative assessment of the strength of evidence for surrogacy requires the demonstration of the prognostic value of the surrogate for the clinical outcome, and evidence that treatment effects on the surrogate reliably predict treatment effects on the clinical outcome. We focus on these two conditions, and outline the statistical approaches that have been proposed to assess the extent to which these conditions are fulfilled. When data are available from a single trial, one can assess the "individual level association" between the surrogate and the true endpoint. When data are available from several trials, one can additionally assess the "trial level association" between the treatment effect on the surrogate and the treatment effect on the true endpoint. In the latter case, the "surrogate threshold effect" can be estimated as the minimum effect on the surrogate endpoint that predicts a statistically significant effect on the clinical endpoint. All these concepts are discussed in the context of randomized clinical trials in oncology, and illustrated with two meta-analyses in gastric cancer. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
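
    The trial-level condition lends itself to a simple sketch: regress true-endpoint treatment effects on surrogate-endpoint treatment effects across trials, then read off the surrogate threshold effect where the 95% prediction interval first excludes zero. Synthetic trial-level effects and unweighted OLS are used below, a simplification of the meta-analytic models used in practice.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)

    # Synthetic trial-level log hazard ratios (negative = benefit)
    n_trials = 15
    eff_surr = rng.normal(-0.3, 0.2, n_trials)
    eff_true = 0.8 * eff_surr + rng.normal(0, 0.05, n_trials)

    fit = sm.OLS(eff_true, sm.add_constant(eff_surr)).fit()

    # Surrogate threshold effect: smallest surrogate effect whose 95%
    # prediction interval for the true-endpoint effect excludes zero
    grid = np.linspace(0.0, -1.0, 201)
    pred = fit.get_prediction(sm.add_constant(grid)).summary_frame(alpha=0.05)
    ok = pred["obs_ci_upper"] < 0
    ste = grid[ok.argmax()] if ok.any() else None
    print(f"trial-level R2 = {fit.rsquared:.2f}, STE = {ste}")
    ```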

  4. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  5. Source Evaluation and Trace Metal Contamination in Benthic Sediments from Equatorial Ecosystems Using Multivariate Statistical Techniques

    PubMed Central

    Benson, Nsikak U.; Asuquo, Francis E.; Williams, Akan B.; Essien, Joseph P.; Ekong, Cyril I.; Akpabio, Otobong; Olajire, Abaas A.

    2016-01-01

    Trace metal (Cd, Cr, Cu, Ni and Pb) concentrations in benthic sediments were analyzed through a multi-step fractionation scheme to assess the levels and sources of contamination in estuarine, riverine and freshwater ecosystems in the Niger Delta (Nigeria). The degree of contamination was assessed using individual contamination factors (ICF) and the global contamination factor (GCF). Multivariate statistical approaches including principal component analysis (PCA), cluster analysis and correlation tests were employed to evaluate the interrelationships and associated sources of contamination. The spatial distribution of metal concentrations followed the pattern Pb>Cu>Cr>Cd>Ni. The ecological risk index based on ICF showed significant potential mobility and bioavailability for Cu, Cu and Ni. The ICF contamination trend in the benthic sediments at all studied sites was Cu>Cr>Ni>Cd>Pb. The principal component and agglomerative clustering analyses indicate that trace metals contamination in the ecosystems was influenced by multiple pollution sources. PMID:27257934
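
    The PCA step can be illustrated on a standardized sample-by-metal concentration matrix, reading the loadings to group metals by putative source. The data below are simulated, with two invented latent "sources" standing in for anthropogenic and lithogenic inputs.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)

    # Simulated sediment samples x metals [Cd, Cr, Cu, Ni, Pb]
    anthro = rng.lognormal(0.0, 0.5, (60, 1))   # e.g. industrial input
    litho = rng.lognormal(0.0, 0.5, (60, 1))    # e.g. natural background
    conc = np.hstack([
        0.2 * anthro,            # Cd tracks the anthropogenic factor
        1.0 * litho,             # Cr tracks the lithogenic factor
        5.0 * anthro + litho,    # Cu mixed
        0.8 * litho,             # Ni
        3.0 * anthro,            # Pb
    ]) * rng.lognormal(0.0, 0.1, (60, 5))

    Z = StandardScaler().fit_transform(np.log(conc))
    pca = PCA(n_components=2).fit(Z)
    print(pca.explained_variance_ratio_.round(2))
    print(pca.components_.round(2))   # loadings: which metals co-vary
    ```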

  6. If You Build (and Moderate) It, They Will Come: The Smokefree Women Facebook Page

    PubMed Central

    2013-01-01

    This analysis explores the impact of modifying the Smokefree Women Facebook social media strategy, from primarily promoting resources to encouraging participation in communications about smoking cessation by posting user-generated content. Analyses were performed using data from the Smokefree Women Facebook page to assess the impact of the revised strategy on reach and engagement. Fan engagement increased 430%, and a strong and statistically significant correlation (P < .05) between the frequency of moderator posts and community engagement was observed. The reach of the page also increased by 420%. Our findings indicate that the strategy shift had a statistically significant and positive effect on the frequency of interactions on the Facebook page, providing an example of an approach that may prove useful for reaching and engaging users in online communities. Additional research is needed to assess the association between engagement in virtual communities and health behavior outcomes. PMID:24395993

  7. Improving DHH students' grammar through an individualized software program.

    PubMed

    Cannon, Joanna E; Easterbrooks, Susan R; Gagné, Phill; Beal-Alvarez, Jennifer

    2011-01-01

    The purpose of this study was to determine if the frequent use of a targeted, computer software grammar instruction program, used as an individualized classroom activity, would influence the comprehension of morphosyntax structures (determiners, tense, and complementizers) in deaf/hard-of-hearing (DHH) participants who use American Sign Language (ASL). Twenty-six students from an urban day school for the deaf participated in this study. Two hierarchical linear modeling growth curve analyses showed that the influence of LanguageLinks: Syntax Assessment and Intervention (LL) resulted in statistically significant gains in participants' comprehension of morphosyntax structures. Two dependent t tests revealed statistically significant results between the pre- and postintervention assessments on the Diagnostic Evaluation of Language Variation-Norm Referenced. The daily use of LL increased the morphosyntax comprehension of the participants in this study and may be a promising practice for DHH students who use ASL.

  8. If you build (and moderate) it, they will come: the Smokefree Women Facebook page.

    PubMed

    Post, Samantha D; Taylor, Shani C; Sanders, Amy E; Goldfarb, Jeffrey M; Hunt, Yvonne M; Augustson, Erik M

    2013-12-01

    This analysis explores the impact of modifying the Smokefree Women Facebook social media strategy, from primarily promoting resources to encouraging participation in communications about smoking cessation by posting user-generated content. Analyses were performed using data from the Smokefree Women Facebook page to assess the impact of the revised strategy on reach and engagement. Fan engagement increased 430%, and a strong and statistically significant correlation (P < .05) between the frequency of moderator posts and community engagement was observed. The reach of the page also increased by 420%. Our findings indicate that the strategy shift had a statistically significant and positive effect on the frequency of interactions on the Facebook page, providing an example of an approach that may prove useful for reaching and engaging users in online communities. Additional research is needed to assess the association between engagement in virtual communities and health behavior outcomes.

  9. The use of logistic regression to enhance risk assessment and decision making by mental health administrators.

    PubMed

    Menditto, Anthony A; Linhorst, Donald M; Coleman, James C; Beck, Niels C

    2006-04-01

    Development of policies and procedures to contend with the risks presented by elopement, aggression, and suicidal behaviors are long-standing challenges for mental health administrators. Guidance in making such judgments can be obtained through the use of a multivariate statistical technique known as logistic regression. This procedure can be used to develop a predictive equation that is mathematically formulated to use the best combination of predictors, rather than considering just one factor at a time. This paper presents an overview of logistic regression and its utility in mental health administrative decision making. A case example of its application is presented using data on elopements from Missouri's long-term state psychiatric hospitals. Ultimately, the use of statistical prediction analyses tempered with differential qualitative weighting of classification errors can augment decision-making processes in a manner that provides guidance and flexibility while wrestling with the complex problem of risk assessment and decision making.
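
    A minimal sketch of such a model, fitted with statsmodels on fabricated records; the predictors (prior elopements, unsupervised hours per day, months since admission) are illustrative inventions, not the variables used by the Missouri hospitals.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(8)

    # Fabricated patient records and a planted "true" risk model
    n = 800
    df = pd.DataFrame({
        "prior": rng.poisson(0.4, n),            # prior elopements
        "unsup_hours": rng.uniform(0, 6, n),     # unsupervised h/day
        "months_in": rng.uniform(1, 60, n),      # months since admission
    })
    logit_p = -3.0 + 1.1 * df["prior"] + 0.35 * df["unsup_hours"]
    df["eloped"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(df[["prior", "unsup_hours", "months_in"]])
    fit = sm.Logit(df["eloped"], X).fit(disp=0)
    print(np.exp(fit.params).round(2))   # odds ratios per predictor
    ```

    Predicted probabilities from such a fit can then be thresholded differently depending on the relative cost of false negatives versus false positives, which is the "differential qualitative weighting of classification errors" the authors describe.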

  10. The care of Filipino juvenile offenders in residential facilities evaluated using the risk-need-responsivity model.

    PubMed

    Spruit, Anouk; Wissink, Inge B; Stams, Geert Jan J M

    2016-01-01

    According to the risk-need-responsivity model of offender assessment and rehabilitation, treatment should target specific factors that are related to re-offending. This study evaluates the residential care of Filipino juvenile offenders using the risk-need-responsivity model. Risk analyses and criminogenic needs assessments (parenting style, aggression, relationships with peers, empathy, and moral reasoning) were conducted using data from 55 juvenile offenders in four residential facilities. The psychological care was assessed using a checklist. Statistical analyses showed that juvenile offenders had a high risk of re-offending, high aggression, difficulties in making pro-social friends, and a delayed socio-moral development. The psychological programs in the residential facilities were evaluated as poor. The availability of the psychological care in the facilities fitted poorly with the characteristics of the juvenile offenders and did not comply with the risk-need-responsivity model. Implications for research and practice are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Thematic mapper data quality and performance assessment in renewable resource/agricultural remote sensing

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Macdonald, R. B. (Principal Investigator)

    1982-01-01

    A "quick look" investigation of the initial LANDSAT-4, thematic mapper (TM) scene received from Goddard Space Flight Center was performed to gain early insight into the characteristics of TM data. The initial scene, containing only the first four bands of the seven bands recorded by the TM, was acquired over the Detroit, Michigan, area on July 20, 1982. It yielded abundant information for scientific investigation. A wide variety of studies were conducted to assess all aspects of TM data. They ranged from manual analyses of image products to detect obvious optical, electronic, or mechanical defects to detailed machine analyses of the digital data content for evaluation of spectral separability of vegetative/nonvegetative classes. These studies were applied to several segments extracted from the full scene. No attempt was made to perform end-to-end statistical evaluations. However, the output of these studies do identify a degree of positive performance from the TM and its potential for advancing state-of-the-art crop inventory and condition assessment technology.

  12. A reply to Zigler and Seitz.

    PubMed

    Neman, R

    1975-03-01

    The Zigler and Seitz (1975) critique was carefully examined with respect to the conclusions of the Neman et al. (1975) study. Particular attention was given to the following questions: (a) did experimenter bias or commitment account for the results, (b) were unreliable and invalid psychometric instruments used, (c) were the statistical analyses insufficient or incorrect, (d) did the results reflect no more than the operation of chance, and (e) were the results biased by artifactually inflated profile scores. Experimenter bias and commitment were shown to be insufficient to account for the results; a further review of Buros (1972) showed that there was no need for apprehension about the testing instruments; the statistical analyses were shown to exceed prevailing standards for research reporting; the results were shown to reflect valid findings at the .05 probability level; and the Neman et al. (1975) results for the profile measure were equally significant using either "raw" neurological scores or "scaled" neurological age scores. Zigler, Seitz, and I agreed on the needs for (a) using multivariate analyses, where applicable, in studies having more than one dependent variable; (b) defining the population for which sensorimotor training procedures may be appropriately prescribed; and (c) validating the profile measure as a tool to assess neurological disorganization.

  13. Meta-analysis of randomized clinical trials in the era of individual patient data sharing.

    PubMed

    Kawahara, Takuya; Fukuda, Musashi; Oba, Koji; Sakamoto, Junichi; Buyse, Marc

    2018-06-01

    Individual patient data (IPD) meta-analysis is considered to be a gold standard when the results of several randomized trials are combined. Recent initiatives on sharing IPD from clinical trials offer unprecedented opportunities for using such data in IPD meta-analyses. First, we discuss the evidence generated and the benefits obtained by a long-established prospective IPD meta-analysis in early breast cancer. Next, we discuss a data-sharing system that has been adopted by several pharmaceutical sponsors. We review a number of retrospective IPD meta-analyses that have already been proposed using this data-sharing system. Finally, we discuss the role of data sharing in IPD meta-analysis in the future. Treatment effects can be more reliably estimated in both types of IPD meta-analyses than with summary statistics extracted from published papers. Specifically, with rich covariate information available on each patient, prognostic and predictive factors can be identified or confirmed. Also, when several endpoints are available, surrogate endpoints can be assessed statistically. Although there are difficulties in conducting, analyzing, and interpreting retrospective IPD meta-analysis utilizing the currently available data-sharing systems, data sharing will play an important role in IPD meta-analysis in the future.

  14. Targeting intensive versus conventional glycaemic control for type 1 diabetes mellitus: a systematic review with meta-analyses and trial sequential analyses of randomised clinical trials

    PubMed Central

    Kähler, Pernille; Grevstad, Berit; Almdal, Thomas; Gluud, Christian; Wetterslev, Jørn; Vaag, Allan; Hemmingsen, Bianca

    2014-01-01

    Objective To assess the benefits and harms of targeting intensive versus conventional glycaemic control in patients with type 1 diabetes mellitus. Design A systematic review with meta-analyses and trial sequential analyses of randomised clinical trials. Data sources The Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded and LILACS to January 2013. Study selection Randomised clinical trials that prespecified different targets of glycaemic control in participants at any age with type 1 diabetes mellitus were included. Data extraction Two authors independently assessed studies for inclusion and extracted data. Results 18 randomised clinical trials included 2254 participants with type 1 diabetes mellitus. All trials had high risk of bias. There was no statistically significant effect of targeting intensive glycaemic control on all-cause mortality (risk ratio 1.16, 95% CI 0.65 to 2.08) or cardiovascular mortality (0.49, 0.19 to 1.24). Targeting intensive glycaemic control reduced the relative risks for the composite macrovascular outcome (0.63, 0.41 to 0.96; p=0.03) and nephropathy (0.37, 0.27 to 0.50; p<0.00001). The effect estimates of retinopathy, ketoacidosis and retinal photocoagulation were not consistently statistically significant between random and fixed effects models. The risk of severe hypoglycaemia was significantly increased with intensive glycaemic targets (1.40, 1.01 to 1.94). Trial sequential analyses showed that the data accrued were, in general, inadequate to demonstrate a relative risk reduction of 10%. Conclusions There was no significant effect on all-cause mortality when targeting intensive glycaemic control compared with conventional glycaemic control. However, there may be beneficial effects of targeting intensive glycaemic control on the composite macrovascular outcome and on nephropathy, and detrimental effects on severe hypoglycaemia. Notably, the data for retinopathy and ketoacidosis were inconsistent. There was a severe lack of reporting on patient relevant outcomes, and all trials had poor bias control. PMID:25138801

  15. Regional variation in the severity of pesticide exposure outcomes: applications of geographic information systems and spatial scan statistics.

    PubMed

    Sudakin, Daniel L; Power, Laura E

    2009-03-01

    Geographic information systems and spatial scan statistics have been utilized to assess regional clustering of symptomatic pesticide exposures reported to a state Poison Control Center (PCC) during a single year. In the present study, we analyzed five subsequent years of PCC data to test whether there are significant geographic differences in pesticide exposure incidents resulting in serious (moderate, major, and fatal) medical outcomes. A PCC provided the data on unintentional pesticide exposures for the time period 2001-2005. The geographic location of the caller, the location where the exposure occurred, the exposure route, and the medical outcome were abstracted. There were 273 incidents resulting in moderate effects (n = 261), major effects (n = 10), or fatalities (n = 2). Spatial scan statistics identified a geographic area consisting of two adjacent counties (one urban, one rural), where statistically significant clustering of serious outcomes was observed. The relative risk of moderate, major, and fatal outcomes was 2.0 in this spatial cluster (p = 0.0005). PCC data, geographic information systems, and spatial scan statistics can identify clustering of serious outcomes from human exposure to pesticides. These analyses may be useful for public health officials to target preventive interventions. Further investigation is warranted to understand better the potential explanations for geographical clustering, and to assess whether preventive interventions have an impact on reducing pesticide exposure incidents resulting in serious medical outcomes.
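
    The scan statistic behind such analyses scores each candidate zone with a Poisson log-likelihood ratio comparing observed and expected counts; the zone with the maximum score is the most likely cluster. A stripped-down sketch with synthetic county counts (real scan software additionally builds the circular search windows and computes Monte Carlo p-values):

    ```python
    import numpy as np

    def poisson_llr(c, E, C):
        """Kulldorff log-likelihood ratio for a zone with c observed and
        E expected cases out of C total; zero when there is no excess."""
        if c <= E:
            return 0.0
        return c * np.log(c / E) + (C - c) * np.log((C - c) / (C - E))

    # Hypothetical counties: serious outcomes and population-based
    # expectations, rescaled so that expectations sum to total cases
    obs = np.array([12, 30, 7, 41, 9])
    exp = np.array([15.0, 18.0, 9.0, 22.0, 10.0])
    C = obs.sum()
    exp = exp * C / exp.sum()

    # Candidate zones: single counties plus one two-county zone
    zones = [(i,) for i in range(5)] + [(1, 3)]
    scores = {z: poisson_llr(obs[list(z)].sum(), exp[list(z)].sum(), C)
              for z in zones}
    print(max(scores.items(), key=lambda kv: kv[1]))
    ```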

  16. Is everything we eat associated with cancer? A systematic cookbook review.

    PubMed

    Schoenfeld, Jonathan D; Ioannidis, John P A

    2013-01-01

    Nutritional epidemiology is a highly prolific field. Debates on associations of nutrients with disease risk are common in the literature and attract attention in public media. We aimed to examine the conclusions, statistical significance, and reproducibility in the literature on associations between specific foods and cancer risk. We selected 50 common ingredients from random recipes in a cookbook. PubMed queries identified recent studies that evaluated the relation of each ingredient to cancer risk. Information regarding author conclusions and relevant effect estimates was extracted. When >10 articles were found, we focused on the 10 most recent articles. Forty ingredients (80%) had articles reporting on their cancer risk. Of 264 single-study assessments, 191 (72%) concluded that the tested food was associated with an increased (n = 103) or a decreased (n = 88) risk; 75% of the risk estimates had weak (0.05 > P ≥ 0.001) or no (P > 0.05) statistical significance. Statistically significant results were more likely than nonsignificant findings to be reported in the study abstract rather than only in the full text (P < 0.0001). Meta-analyses (n = 36) presented more conservative results; only 13 (36%) reported an increased (n = 4) or a decreased (n = 9) risk (6 had more than weak statistical support). The median RRs (IQRs) for studies that concluded an increased or a decreased risk were 2.20 (1.60, 3.44) and 0.52 (0.39, 0.66), respectively. The RRs from the meta-analyses were on average null (median: 0.96; IQR: 0.85, 1.10). Associations with cancer risk or benefits have been claimed for most food ingredients. Many single studies highlight implausibly large effects, even though evidence is weak. Effect sizes shrink in meta-analyses.

  17. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    PubMed

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

The ideal toxicity biomarker combines prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.
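
    As a toy illustration of two of the univariate filters named above (t-statistics and fold-change), the sketch below ranks probes in a simulated expression matrix and takes the overlap of the two rankings; the data, group sizes, and cutoffs are invented, not those of the nephrotoxicity study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        expr_tox = rng.normal(5.0, 1.0, size=(10, 200))   # 10 treated arrays x 200 probes
        expr_ctl = rng.normal(5.0, 1.0, size=(10, 200))   # 10 control arrays
        expr_tox[:, :20] += 1.5                           # spike in 20 responsive probes

        t, p = stats.ttest_ind(expr_tox, expr_ctl, axis=0)
        log_fc = expr_tox.mean(axis=0) - expr_ctl.mean(axis=0)   # log-scale fold change

        # Rank probes by each criterion and take the overlap, echoing the combined
        # selection strategy described in the abstract.
        top_t = set(np.argsort(p)[:20])
        top_fc = set(np.argsort(-np.abs(log_fc))[:20])
        print("probes selected by both filters:", sorted(top_t & top_fc))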

  18. Cross-population validation of statistical distance as a measure of physiological dysregulation during aging.

    PubMed

    Cohen, Alan A; Milot, Emmanuel; Li, Qing; Legault, Véronique; Fried, Linda P; Ferrucci, Luigi

    2014-09-01

    Measuring physiological dysregulation during aging could be a key tool both to understand underlying aging mechanisms and to predict clinical outcomes in patients. However, most existing indices are either circular or hard to interpret biologically. Recently, we showed that statistical distance of 14 common blood biomarkers (a measure of how strange an individual's biomarker profile is) was associated with age and mortality in the WHAS II data set, validating its use as a measure of physiological dysregulation. Here, we extend the analyses to other data sets (WHAS I and InCHIANTI) to assess the stability of the measure across populations. We found that the statistical criteria used to determine the original 14 biomarkers produced diverging results across populations; in other words, had we started with a different data set, we would have chosen a different set of markers. Nonetheless, the same 14 markers (or the subset of 12 available for InCHIANTI) produced highly similar predictions of age and mortality. We include analyses of all combinatorial subsets of the markers and show that results do not depend much on biomarker choice or data set, but that more markers produce a stronger signal. We conclude that statistical distance as a measure of physiological dysregulation is stable across populations in Europe and North America. Copyright © 2014 Elsevier Inc. All rights reserved.
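
    A minimal sketch of the statistical distance used above, assuming it is the Mahalanobis distance of a biomarker profile from a reference population; the reference sample here is simulated rather than drawn from WHAS or InCHIANTI.

        import numpy as np

        rng = np.random.default_rng(1)
        reference = rng.normal(size=(500, 14))        # 500 reference subjects x 14 biomarkers
        mu = reference.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

        def mahalanobis(x):
            # Distance of profile x from the reference mean, scaled by the covariance.
            d = x - mu
            return float(np.sqrt(d @ cov_inv @ d))

        subject = rng.normal(size=14) + 0.8           # a somewhat dysregulated profile
        print("statistical distance:", round(mahalanobis(subject), 2))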

  19. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study, its effectiveness was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. On real data, the two models differed in their effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (an average 5-fold decrease in bias) with comparable power and precision in the estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Association analyses of patient-reported data scaled using Rasch techniques typically do not take the SE of the latent trait into account. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
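
    The simulation below illustrates, under invented parameters, one facet of the problem the HB one-stage approach addresses: regressing on an error-prone trait estimate instead of the true latent trait attenuates the estimated association by roughly the reliability factor. It is a sketch of the motivation only, not of the HB model itself.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        theta = rng.normal(size=n)                         # true latent trait
        outcome = 0.5 * theta + rng.normal(size=n)         # true slope = 0.5
        theta_hat = theta + rng.normal(scale=0.8, size=n)  # Rasch estimate with SE 0.8

        naive = np.polyfit(theta_hat, outcome, 1)[0]       # two-stage "separate analysis"
        reliability = 1.0 / (1.0 + 0.8 ** 2)               # var(theta) / var(theta_hat)
        print(f"naive slope {naive:.2f} ~= 0.5 x reliability {reliability:.2f}")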

  20. Assessment of Data Assimilation with the Prototype High Resolution Rapid Refresh for Alaska (HRRRAK)

    NASA Technical Reports Server (NTRS)

    Harrison, Kayla; Morton, Don; Zavodsky, Brad; Chou, Shih

    2012-01-01

The Arctic Region Supercomputing Center has been running a quasi-operational prototype of a High Resolution Rapid Refresh for Alaska (HRRRAK) at 3km resolution, initialized by the 13km Rapid Refresh (RR). Although the RR assimilates a broad range of observations into its analyses, experiments with the HRRRAK suggest that there may be added value in assimilating observations into the 3km initial conditions, downscaled from the 13km RR analyses. The NASA Short-term Prediction Research and Transition (SPoRT) group has been using assimilated data from the Atmospheric Infrared Sounder (AIRS) in WRF and WRF-Var simulations since 2004 with promising results. The sounder is aboard NASA's Aqua satellite, and provides vertical profiles of temperature and humidity. The Gridpoint Statistical Interpolation (GSI) system is then used to assimilate these vertical profiles into WRF forecasts. In this work, we assess the use of AIRS data in combination with other global data assimilation products on non-assimilated HRRRAK case studies. Two separate weather events will be examined to qualitatively and quantitatively assess the impacts of AIRS data on HRRRAK forecasts.

  1. The methodological quality of systematic reviews of animal studies in dentistry.

    PubMed

    Faggion, C M; Listl, S; Giannakopoulos, N N

    2012-05-01

Systematic reviews and meta-analyses of animal studies are important for improving estimates of the effects of treatment and for guiding future clinical studies on humans. The purpose of this systematic review was to assess the methodological quality of systematic reviews and meta-analyses of animal studies in dentistry using a validated checklist. A literature search was conducted independently and in duplicate in the PubMed and LILACS databases. References in selected systematic reviews were assessed to identify other studies not captured by the electronic searches. The methodological quality of studies was assessed independently and in duplicate by using the AMSTAR checklist; the quality was scored as low, moderate, or high. The reviewers were calibrated before the assessment and agreement between them was assessed using Cohen's Kappa statistic. Of 444 studies retrieved, 54 systematic reviews were selected after full-text assessment. Agreement between the reviewers was regarded as excellent. Only two studies were scored as high quality; 17 and 35 studies were scored as moderate and low quality, respectively. There is room for improvement of the methodological quality of systematic reviews of animal studies in dentistry. Checklists, such as AMSTAR, can guide researchers in planning and executing systematic reviews and meta-analyses. For determining the need for additional investigations in animals, and in order to provide good data for potential application in humans, such reviews should be based on animal experiments performed according to sound methodological principles. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Effect of exercise on depression in university students: a meta-analysis of randomized controlled trials.

    PubMed

    Yan, Shi; Jin, YinZhe; Oh, YongSeok; Choi, YoungJun

    2016-06-01

The aim of this study was to assess the effect of exercise on depression in university students. A systematic literature search was conducted in PubMed, EMBASE and the Cochrane library from their inception through December 10, 2014 to identify relevant articles. The heterogeneity across studies was examined by Cochran's Q statistic and the I2 statistic. Standardized mean difference (SMD) and 95% confidence interval (CI) were pooled to evaluate the effect of exercise on depression. Then, sensitivity and subgroup analyses were performed. In addition, publication bias was assessed by drawing a funnel plot. A total of 352 participants (154 cases and 182 controls) from eight trials were included. Our pooled result showed significant alleviation of depression after exercise (SMD=-0.50, 95% CI: -0.97 to -0.03, P=0.04) with significant heterogeneity (P=0.003, I2=67%). Sensitivity analyses showed that the pooled result may be unstable. Subgroup analysis indicated that sample size may be a source of heterogeneity. Moreover, no publication bias was observed in this study. Exercise may be an effective therapy for treating depression in university students. However, further clinical studies with rigorous designs and large samples focused on this specific population are warranted.
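
    For readers who want the mechanics, the sketch below implements the standard DerSimonian-Laird random-effects pooling with Cochran's Q and the I2 statistic used above; the SMDs and variances are invented, not the eight trials in this review.

        import numpy as np

        smd = np.array([-0.8, -0.2, -0.9, -0.1, -0.5])   # per-trial standardized mean differences
        var = np.array([0.10, 0.08, 0.12, 0.09, 0.11])   # per-trial variances

        w = 1.0 / var                                    # fixed-effect (inverse-variance) weights
        fixed = np.sum(w * smd) / np.sum(w)
        Q = np.sum(w * (smd - fixed) ** 2)               # Cochran's Q
        df = len(smd) - 1
        I2 = max(0.0, (Q - df) / Q) * 100                # percent heterogeneity

        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (var + tau2)                        # random-effects weights
        pooled = np.sum(w_re * smd) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        print(f"SMD={pooled:.2f} (95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), "
              f"Q={Q:.2f}, I2={I2:.0f}%")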

  3. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  4. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246
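
    CatchAll fits parametric finite-mixture models; as a simpler nonparametric point of comparison on the same kind of "frequency count" input, the sketch below computes the classical Chao1 richness estimate. The counts are invented.

        def chao1(freq_counts):
            # freq_counts[i] = number of species observed exactly (i + 1) times.
            s_obs = sum(freq_counts)
            f1, f2 = freq_counts[0], freq_counts[1]
            if f2 == 0:
                return s_obs + f1 * (f1 - 1) / 2.0       # bias-corrected variant
            return s_obs + f1 * f1 / (2.0 * f2)

        print(chao1([150, 60, 30, 12, 5]))               # ~444.5 estimated species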

  5. Heavy metals found in the breathing zone, toenails and lung function of welders working in an air-conditioned welding workplace.

    PubMed

    Hariri, Azian; Mohamad Noor, Noraishah; Paiman, Nuur Azreen; Ahmad Zaidi, Ahmad Mujahid; Zainal Bakri, Siti Farhana

    2017-09-22

Welding operations are rarely conducted in an air-conditioned room. However, some companies set up welding operations in an air-conditioned room to maintain the humidity level needed to reduce hydrogen cracking in the specimen being welded. This study intended to assess the exposure to metal elements in the welders' breathing zone and toenail samples. Heavy metal concentration was analysed using inductively coupled plasma mass spectrometry. The lung function test was also conducted and analysed using statistical approaches. Chromium and manganese concentrations in the breathing zone exceeded the permissible exposure limit stipulated by Malaysian regulations. A similar trend was obtained in the concentration of heavy metals in the breathing zone air sampling and in the welders' toenails. Although there was no statistically significant decrease in the lung function of welders, it is suggested that exposure control through engineering and administrative approaches should be considered for workplace safety and health improvement.

  6. The effect of the involvement of the dominant or non-dominant hand on grip/pinch strengths and the Levine score in patients with carpal tunnel syndrome.

    PubMed

    Zyluk, A; Walaszek, I

    2012-06-01

The Levine questionnaire is a disease-oriented instrument developed for outcome measurement of carpal tunnel syndrome (CTS) management. The objective of this study was to compare Levine scores in patients with unilateral CTS, involving the dominant or non-dominant hand, before and after carpal tunnel release. Records of 144 patients, 126 women (87%) and 18 men (13%) with a mean age of 58 years and unilateral CTS, treated operatively, were analysed. The dominant hand was involved in 100 patients (69%), the non-dominant in 44 (31%). The parameters were analysed pre-operatively, and at 1 and 6 months post-operatively. A comparison of Levine scores in patients with involvement of the dominant or non-dominant hand showed no statistically significant differences at baseline or at any of the follow-up measurements. Statistically significant differences were noted in total grip strength at the baseline and 6-month assessments and in key-pinch strength at 1 and 6 months.

  7. Dependence of drivers affects risks associated with compound events

    NASA Astrophysics Data System (ADS)

    Zscheischler, Jakob; Seneviratne, Sonia I.

    2017-04-01

Compound climate extremes are receiving increasing attention because of their disproportionate impacts on humans and ecosystems. Risk assessments, however, generally focus on univariate statistics even when multiple stressors are considered. Concurrent extreme droughts and heatwaves have been observed to cause a suite of extreme impacts on natural and human systems alike. For example, they can substantially affect vegetation health, prompting tree mortality, and thereby facilitating insect outbreaks and fires. In addition, hot droughts have the potential to trigger and intensify fires and can cause severe economic damage. By promoting disease spread, extremely hot and dry conditions also strongly affect human health. We analyse the co-occurrence of dry and hot summers and show that these are strongly correlated for many regions, inducing a much higher frequency of concurrent hot and dry summers than would be expected from the independent combination of the univariate statistics. Our results demonstrate how the dependence structure between variables affects the occurrence frequency of multivariate extremes. Assessments based on univariate statistics can thus strongly underestimate risks associated with given extremes, if impacts depend on multiple (dependent) variables. We conclude that a multivariate perspective is necessary in order to appropriately assess changes in climate extremes and their impacts, and to design adaptation strategies.
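
    The toy simulation below makes the paper's central point concrete under an assumed correlation: for dependent drivers, the joint exceedance probability can be several times the product of the marginal probabilities.

        import numpy as np

        rng = np.random.default_rng(3)
        rho = 0.6                                         # assumed heat-drought coupling
        cov = [[1.0, rho], [rho, 1.0]]
        t, d = rng.multivariate_normal([0, 0], cov, size=200_000).T

        hot = t > np.quantile(t, 0.9)                     # hottest 10% of summers
        dry = d > np.quantile(d, 0.9)                     # driest 10% of summers
        p_joint = np.mean(hot & dry)
        print(f"joint {p_joint:.3f} vs independent {0.1 * 0.1:.3f} "
              f"({p_joint / 0.01:.1f}x more frequent)")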

  8. The Protective Role of Resilience in Attenuating Emotional Distress and Aggression Associated with Early-life Stress in Young Enlisted Military Service Candidates.

    PubMed

    Kim, Joohan; Seok, Jeong-Ho; Choi, Kang; Jon, Duk-In; Hong, Hyun Ju; Hong, Narei; Lee, Eunjeong

    2015-11-01

Early life stress (ELS) may induce long-lasting psychological complications in adulthood. The protective role of resilience against the development of psychopathology is also important. The purpose of this study was to investigate the relationships among ELS, resilience, depression, anxiety, and aggression in young adults. Four hundred sixty-one army inductees gave written informed consent and participated in this study. We assessed psychopathology using the Korea Military Personality Test, ELS using the Childhood Abuse Experience Scale, and resilience with the resilience scale. Analyses of variance, correlation analyses, and hierarchical multiple linear regression analyses were conducted for statistical analyses. The regression model explained 35.8%, 41.0%, and 23.3% of the total variance in the depression, anxiety, and aggression indices, respectively. We found that although ELS experience was positively associated with depression, anxiety, and aggression, resilience had a significant attenuating effect on the severity of these psychopathologies. Among the resilience factors, emotion regulation showed the greatest effect in reducing the severity of psychopathology. To improve mental health in young adults, ELS assessment and resilience enhancement programs should be considered.

  9. The Protective Role of Resilience in Attenuating Emotional Distress and Aggression Associated with Early-life Stress in Young Enlisted Military Service Candidates

    PubMed Central

    Kim, Joohan; Choi, Kang; Jon, Duk-In; Hong, Hyun Ju; Hong, Narei; Lee, Eunjeong

    2015-01-01

Early life stress (ELS) may induce long-lasting psychological complications in adulthood. The protective role of resilience against the development of psychopathology is also important. The purpose of this study was to investigate the relationships among ELS, resilience, depression, anxiety, and aggression in young adults. Four hundred sixty-one army inductees gave written informed consent and participated in this study. We assessed psychopathology using the Korea Military Personality Test, ELS using the Childhood Abuse Experience Scale, and resilience with the resilience scale. Analyses of variance, correlation analyses, and hierarchical multiple linear regression analyses were conducted for statistical analyses. The regression model explained 35.8%, 41.0%, and 23.3% of the total variance in the depression, anxiety, and aggression indices, respectively. We found that although ELS experience was positively associated with depression, anxiety, and aggression, resilience had a significant attenuating effect on the severity of these psychopathologies. Among the resilience factors, emotion regulation showed the greatest effect in reducing the severity of psychopathology. To improve mental health in young adults, ELS assessment and resilience enhancement programs should be considered. PMID:26539013
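
    A minimal sketch of the blockwise (hierarchical) regression strategy described in the two records above, with simulated stand-ins for the ELS, resilience, and depression scores; the variable names and effect sizes are invented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 461
        els = rng.normal(size=n)                          # early-life stress score
        resil = -0.3 * els + rng.normal(size=n)           # resilience score
        depression = 0.5 * els - 0.6 * resil + rng.normal(size=n)

        block1 = sm.add_constant(np.column_stack([els]))          # step 1: ELS only
        block2 = sm.add_constant(np.column_stack([els, resil]))   # step 2: add resilience
        r2_1 = sm.OLS(depression, block1).fit().rsquared
        r2_2 = sm.OLS(depression, block2).fit().rsquared
        print(f"R2 step 1 = {r2_1:.3f}; with resilience: R2 = {r2_2:.3f} "
              f"(delta = {r2_2 - r2_1:.3f})")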

  10. Quantitative Susceptibility Mapping after Sports-Related Concussion.

    PubMed

    Koch, K M; Meier, T B; Karr, R; Nencka, A S; Muftuler, L T; McCrea, M

    2018-06-07

    Quantitative susceptibility mapping using MR imaging can assess changes in brain tissue structure and composition. This report presents preliminary results demonstrating changes in tissue magnetic susceptibility after sports-related concussion. Longitudinal quantitative susceptibility mapping metrics were produced from imaging data acquired from cohorts of concussed and control football athletes. One hundred thirty-six quantitative susceptibility mapping datasets were analyzed across 3 separate visits (24 hours after injury, 8 days postinjury, and 6 months postinjury). Longitudinal quantitative susceptibility mapping group analyses were performed on stability-thresholded brain tissue compartments and selected subregions. Clinical concussion metrics were also measured longitudinally in both cohorts and compared with the measured quantitative susceptibility mapping. Statistically significant increases in white matter susceptibility were identified in the concussed athlete group during the acute (24 hour) and subacute (day 8) period. These effects were most prominent at the 8-day visit but recovered and showed no significant difference from controls at the 6-month visit. The subcortical gray matter showed no statistically significant group differences. Observed susceptibility changes after concussion appeared to outlast self-reported clinical recovery metrics at a group level. At an individual subject level, susceptibility increases within the white matter showed statistically significant correlations with return-to-play durations. The results of this preliminary investigation suggest that sports-related concussion can induce physiologic changes to brain tissue that can be detected using MR imaging-based magnetic susceptibility estimates. In group analyses, the observed tissue changes appear to persist beyond those detected on clinical outcome assessments and were associated with return-to-play duration after sports-related concussion. © 2018 by American Journal of Neuroradiology.

  11. Histological Validity and Clinical Evidence for Use of Fractional Lasers for Acne Scars

    PubMed Central

    Sardana, Kabir; Garg, Vijay K; Arora, Pooja; Khurana, Nita

    2012-01-01

Though fractional lasers are widely used for acne scars, very little clinical or histological data based on objective clinical assessment or the depth of penetration of lasers in in vivo facial tissue are available. Depth is probably the most important predictor of improvement in acne scars, but the histological studies have little uniformity in the substrate (tissue) used, the processing, and the stains used. The variability of the laser settings (dose, pulses and density) makes comparison of the studies difficult. It is easier to compare the end results: histological depth and clinical outcomes. We reviewed all published clinical and histological studies of fractional lasers in acne scars and analysed the clinical and histological data with statistical software to determine their significance. On statistical analysis, the depth was found to be variable, with the 1550-nm lasers achieving a depth of 679 μm versus the 10,600 nm (895 μm) and 2940 nm (837 μm) lasers. The mean depth of penetration (in μm) in relation to the energy used, in millijoules (mJ), varies depending on the laser studied. This was statistically found to be 12.9–28.5 for Er:glass, 3–54.38 for Er:YAG and 6.28–53.66 for CO2. The subjective clinical improvement was a modest 46%. The lack of objective evaluation of clinical improvement and of scar-specific assessment, together with the lack of appropriate in vivo studies, makes a case for combining conventional modalities like subcision, punch excision and needling with fractional lasers to achieve optimal results. PMID:23060702

  12. Pesticides and public health: an analysis of the regulatory approach to assessing the carcinogenicity of glyphosate in the European Union.

    PubMed

    Clausing, Peter; Robinson, Claire; Burtscher-Schaden, Helmut

    2018-03-13

The present paper scrutinises the European authorities' assessment of the carcinogenic hazard posed by glyphosate based on Regulation (EC) 1272/2008. We use the authorities' own criteria as a benchmark to analyse their weight of evidence (WoE) approach. Our analysis therefore goes beyond the comparison of the assessments made by the European Food Safety Authority and the International Agency for Research on Cancer published by others. We show that the European authorities' decision, including that of the European Chemicals Agency, not to classify glyphosate as a carcinogen appears inconsistent with, and in some instances a direct violation of, the applicable guidance and guideline documents. In particular, we criticise an arbitrary attenuation by the authorities of the power of statistical analyses; their disregard of existing dose-response relationships; their unjustified claim that the doses used in the mouse carcinogenicity studies were too high; and their contention that the carcinogenic effects were not reproducible, which focused on quantitative and neglected qualitative reproducibility. Historical control data, multisite responses, and progression of lesions to malignancy were also used incorrectly. Contrary to the authorities' evaluations, proper application of statistical methods and WoE criteria inevitably leads to the conclusion that glyphosate is 'probably carcinogenic' (corresponding to category 1B in the European Union). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. M-TraCE: a new tool for high-resolution computation and statistical elaboration of backward trajectories on the Italian domain

    NASA Astrophysics Data System (ADS)

    Vitali, Lina; Righini, Gaia; Piersanti, Antonio; Cremona, Giuseppe; Pace, Giandomenico; Ciancarella, Luisella

    2017-12-01

Air backward trajectory calculations are commonly used in a variety of atmospheric analyses, in particular for source attribution evaluation. The accuracy of backward trajectory analysis is mainly determined by the quality and the spatial and temporal resolution of the underlying meteorological data set, especially in cases of complex terrain. This work describes a new tool for the calculation and statistical elaboration of backward trajectories. To take advantage of the high-resolution meteorological database of the Italian national air quality model MINNI, a dedicated set of procedures was implemented under the name of M-TraCE (MINNI module for Trajectories Calculation and statistical Elaboration) to calculate and process the backward trajectories of air masses reaching a site of interest. Outcomes from applying the methodology to the Italian Network of Special Purpose Monitoring Stations are shown to demonstrate its strengths for the meteorological characterization of air quality monitoring stations. M-TraCE has demonstrated its capabilities to provide a detailed statistical assessment of the transport patterns and region of influence of the site under investigation, which is fundamental for correctly interpreting pollutant measurements and ascertaining the official classification of the monitoring site based on meta-data information. Moreover, M-TraCE has shown its usefulness in supporting other assessments, i.e., of the spatial representativeness of a monitoring site, focussing specifically on the analysis of the effects due to meteorological variables.

  14. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data in the fields of medical and environmental sciences. This work presents a case study in which data on thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. The MLE was found to be the most appropriate statistical method for analyzing this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
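
    A hedged sketch of the censored MLE idea discussed above: fit a lognormal to residue data where each non-detect contributes a log-CDF term at the limit of detection (LOD) rather than being substituted. The residue values and LOD are invented, not the thiamethoxam data.

        import numpy as np
        from scipy import stats, optimize

        detects = np.array([0.12, 0.05, 0.30, 0.08, 0.21])   # measured residues (ppm, hypothetical)
        n_nd, lod = 12, 0.01                                  # 12 non-detects below the LOD

        def neg_loglik(params):
            # Lognormal likelihood: log-pdf for detects, log-cdf at the LOD for non-detects.
            mu, log_sigma = params
            sigma = np.exp(log_sigma)                         # keeps sigma positive
            ll = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
            ll += n_nd * stats.norm.logcdf(np.log(lod), mu, sigma)
            return -ll

        res = optimize.minimize(neg_loglik, x0=[np.log(0.05), 0.0])
        mu_hat = res.x[0]
        print("estimated median residue:", np.exp(mu_hat))    # lognormal median = exp(mu)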

  15. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

Background: A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods: We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results: The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions: The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  16. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
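
    As a small illustration of one instrument named in the two records above, the sketch below computes the mean logarithmic score of two predictive distributions for held-out count data. The counts and fitted parameters are invented stand-ins, and the full INLA-based leave-one-out machinery is not reproduced.

        import numpy as np
        from scipy import stats

        y_holdout = np.array([0, 3, 1, 7, 2, 0, 5, 9, 1, 4])   # vertigo attack counts (invented)

        lam = 3.2                                  # fitted Poisson mean (assumed)
        r, p = 2.0, 2.0 / (2.0 + 3.2)              # negative binomial with the same mean, overdispersed

        log_score_pois = -np.mean(stats.poisson.logpmf(y_holdout, lam))
        log_score_nb = -np.mean(stats.nbinom.logpmf(y_holdout, r, p))
        # Lower mean log score = better-calibrated predictive distribution.
        print(f"Poisson {log_score_pois:.2f} vs negative binomial {log_score_nb:.2f}")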

  17. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or... indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for...

  18. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical function of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  19. Comparison of two control groups for estimation of oral cholera vaccine effectiveness using a case-control study design.

    PubMed

    Franke, Molly F; Jerome, J Gregory; Matias, Wilfredo R; Ternier, Ralph; Hilaire, Isabelle J; Harris, Jason B; Ivers, Louise C

    2017-10-13

Case-control studies to quantify oral cholera vaccine (OCV) effectiveness (VE) often rely on neighbors without diarrhea as community controls. Test-negative controls can be easily recruited and may minimize bias due to differential health-seeking behavior and recall. We compared VE estimates derived from community and test-negative controls and conducted bias-indicator analyses to assess potential bias with community controls. From October 2012 through November 2016, patients with acute watery diarrhea were recruited from cholera treatment centers in rural Haiti. Cholera cases had a positive stool culture. Non-cholera diarrhea cases (test-negative controls and non-cholera diarrhea cases for bias-indicator analyses) had a negative culture and rapid test. Up to four community controls were matched to diarrhea cases by age group, time, and neighborhood. Primary analyses included 181 cholera cases, 157 non-cholera diarrhea cases, 716 VE community controls and 625 bias-indicator community controls. VE for self-reported vaccination with two doses was consistent across the two control groups, with statistically significant VE estimates ranging from 72 to 74%. Sensitivity analyses revealed similar, though somewhat attenuated estimates for self-reported two-dose VE. Bias-indicator estimates were consistently less than one, with VE estimates ranging from 19 to 43%, some of which were statistically significant. OCV VE estimates from case-control analyses using community and test-negative controls were similar. While bias-indicator analyses suggested possible over-estimation of VE estimates using community controls, test-negative analyses suggested this bias, if present, was minimal. Test-negative controls can be a valid low-cost and time-efficient alternative to community controls for OCV effectiveness estimation and may be especially relevant in emergency situations. Copyright © 2017. Published by Elsevier Ltd.
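
    A minimal sketch of the VE arithmetic underlying such case-control comparisons, VE = (1 - OR) x 100 with a Woolf-type confidence interval. The 2x2 counts are invented, and a matched design like this study's would ordinarily use conditional logistic regression instead of a crude table.

        import numpy as np

        # Vaccinated / unvaccinated counts among cases and among test-negative controls.
        a, b = 25, 75     # cases: vaccinated, unvaccinated
        c, d = 60, 40     # controls: vaccinated, unvaccinated

        or_hat = (a * d) / (b * c)
        se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # Woolf SE on the log scale
        lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
        print(f"VE = {(1 - or_hat) * 100:.0f}% "
              f"(95% CI {(1 - hi) * 100:.0f}% to {(1 - lo) * 100:.0f}%)")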

  20. Resolving Risks in Individual Astronauts: A New Paradigm for Critical Path Exposures

    NASA Technical Reports Server (NTRS)

    Richmond, Robert C.

    2005-01-01

The limited number of astronauts available for risk-assessment prevents classic epidemiologic study, and thereby requires an alternative approach to assessing risks within individual astronauts exposed to toxic agents identified within the Bioastronautics Critical Path Roadmap (BCPR). Developing a system of noninvasive real-time biodosimetry that provides large datasets for analyses before, during, and after missions for simultaneously determining 1) the kind of toxic insult and 2) the degree of that insult within the tissues absorbing it would be useful for resolving statistically significant risk-assessment in individual astronauts. Therefore, a currently achievable multiparametric paradigm is presented for use in analyzing gene-expression and protein-expression so as to establish predictive outcomes.

  1. Criterion Validity and Practical Utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in Assessments of Police Officer Candidates.

    PubMed

    Tarescavage, Anthony M; Corey, David M; Gupton, Herbert M; Ben-Porath, Yossef S

    2015-01-01

Minnesota Multiphasic Personality Inventory-2-Restructured Form scores for 145 male police officer candidates were compared with supervisor ratings of field performance and problem behaviors during their initial probationary period. Results indicated that the officers produced meaningfully lower and less variable substantive scale scores compared to the general population. After applying a statistical correction for range restriction, substantive scale scores from all domains assessed by the inventory demonstrated moderate to large correlations with performance criteria. The practical significance of these results was assessed with relative risk ratio analyses that examined the utility of specific cutoffs on scales demonstrating associations with performance criteria.
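
    One standard form of the range-restriction correction mentioned above is Thorndike's Case II formula for direct range restriction; the sketch below applies it to illustrative numbers, which are not the study's values.

        import math

        def correct_range_restriction(r, sd_unrestricted, sd_restricted):
            # Thorndike Case II: corrects an observed validity r for direct range restriction.
            u = sd_unrestricted / sd_restricted
            return r * u / math.sqrt(1 + r * r * (u * u - 1))

        # e.g., observed r = .20 in a group selected on the predictor
        print(correct_range_restriction(r=0.20, sd_unrestricted=10.0, sd_restricted=6.0))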

  2. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  3. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
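
    A hedged sketch of one calibration method compared above: Platt scaling, i.e., fitting a one-dimensional logistic regression on a validation set that maps a model's raw risk scores to calibrated probabilities. The scores and outcomes below are simulated, not the Manhattan readmission data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 4000
        true_p = rng.beta(2, 8, size=n)                  # true readmission risk
        y_val = rng.binomial(1, true_p)                  # observed 30-day readmissions
        raw_score = np.clip(true_p * 1.8, 0, 1)          # a deliberately miscalibrated model

        platt = LogisticRegression().fit(raw_score.reshape(-1, 1), y_val)
        calibrated = platt.predict_proba(raw_score.reshape(-1, 1))[:, 1]
        print("mean outcome", y_val.mean().round(3),
              "| raw mean", raw_score.mean().round(3),
              "| calibrated mean", calibrated.mean().round(3))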

  4. Analysis of complex environment effect on near-field emission

    NASA Astrophysics Data System (ADS)

    Ravelo, B.; Lalléchère, S.; Bonnet, P.; Paladian, F.

    2014-10-01

The article deals with uncertainty analyses of the electromagnetic compatibility emission of radiofrequency circuits, based on the near-field/near-field (NF/NF) transform combined with a stochastic approach. By using 2D data corresponding to the electromagnetic (EM) field (X=E or H) scanned in an observation plane placed at a position z0 above the circuit under test (CUT), the X field map was extracted. Then, uncertainty analyses were assessed via the statistical moments of the X component. In addition, stochastic collocation was considered and calculations were applied to the planar EM NF radiated by CUTs such as a Wilkinson power divider and a microstrip line operating at GHz levels. After Matlab implementation, the mean and standard deviation were assessed. The present study illustrates how variations of environmental parameters may impact EM fields. The NF uncertainty methodology can be applied to the effects of any physical parameter in a complex environment and is useful for printed circuit board (PCB) design guidelines.

  5. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and the AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies) and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  6. Measuring anxiety after spinal cord injury: Development and psychometric characteristics of the SCI-QOL Anxiety item bank and linkage with GAD-7.

    PubMed

    Kisala, Pamela A; Tulsky, David S; Kalpakjian, Claire Z; Heinemann, Allen W; Pohlig, Ryan T; Carle, Adam; Choi, Seung W

    2015-05-01

To develop a calibrated item bank and computer adaptive test to assess anxiety symptoms in individuals with spinal cord injury (SCI), transform scores to the Patient Reported Outcomes Measurement Information System (PROMIS) metric, and create a statistical linkage with the Generalized Anxiety Disorder (GAD)-7, a widely used anxiety measure. Grounded-theory based qualitative item development methods; large-scale item calibration field testing; confirmatory factor analysis; graded response model item response theory analyses; statistical linking techniques to transform scores to a PROMIS metric; and linkage with the GAD-7. Setting: five SCI Model System centers and one Department of Veterans Affairs medical center in the United States. Participants: adults with traumatic SCI. Main outcome measure: the Spinal Cord Injury-Quality of Life (SCI-QOL) Anxiety Item Bank. Seven hundred sixteen individuals with traumatic SCI completed 38 items assessing anxiety, 17 of which were PROMIS items. After 13 items (including 2 PROMIS items) were removed, factor analyses confirmed unidimensionality. Item response theory analyses were used to estimate slopes and thresholds for the final 25 items (15 from PROMIS). The observed Pearson correlation between the SCI-QOL Anxiety and GAD-7 scores was 0.67. The SCI-QOL Anxiety item bank demonstrates excellent psychometric properties and is available as a computer adaptive test or short form for research and clinical applications. SCI-QOL Anxiety scores have been transformed to the PROMIS metric and we provide a method to link SCI-QOL Anxiety scores with those of the GAD-7.

  7. Organizational downsizing and age discrimination litigation: the influence of personnel practices and statistical evidence on litigation outcomes.

    PubMed

    Wingate, Peter H; Thornton, George C; McIntyre, Kelly S; Frame, Jennifer H

    2003-02-01

    The present study examined relationships between reduction-in-force (RIF) personnel practices, presentation of statistical evidence, and litigation outcomes. Policy capturing methods were utilized to analyze the components of 115 federal district court opinions involving age discrimination disparate treatment allegations and organizational downsizing. Univariate analyses revealed meaningful links between RIF personnel practices, use of statistical evidence, and judicial verdict. The defendant organization was awarded summary judgment in 73% of the claims included in the study. Judicial decisions in favor of the defendant organization were found to be significantly related to such variables as formal performance appraisal systems, termination decision review within the organization, methods of employee assessment and selection for termination, and the presence of a concrete layoff policy. The use of statistical evidence in ADEA disparate treatment litigation was investigated and found to be a potentially persuasive type of indirect evidence. Legal, personnel, and evidentiary ramifications are reviewed, and a framework of downsizing mechanics emphasizing legal defensibility is presented.

  8. An innovative statistical approach for analysing non-continuous variables in environmental monitoring: assessing temporal trends of TBT pollution.

    PubMed

    Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos

    2011-03-01

The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater"; no information is available about the size of the difference between two particular values. The ordinal variable examined in this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both the biological and statistical senses and can be described by a latent non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. the shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is a need to model variables that can only be assessed through an ordinal scale of values.
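
    A minimal sketch of an ordered logit fit of the kind described above, using statsmodels; the VDS stages, shell sizes, and year effects are simulated stand-ins for the N. lapillus survey data, and the latent-variable construction is purely illustrative.

        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(6)
        n = 300
        shell = rng.normal(25, 3, size=n)                # shell size in mm (invented)
        year = rng.integers(2003, 2009, size=n)
        latent = 0.15 * shell - 0.4 * (year - 2003) + rng.logistic(size=n)
        # Discretize the latent imposex process into ordered stages (illustrative cut).
        vds = pd.Series(pd.qcut(latent, 4, labels=["0", "1", "2", "3+"]))

        exog = np.column_stack([shell, year - 2003])     # OrderedModel takes no intercept
        res = OrderedModel(vds, exog, distr='logit').fit(method='bfgs', disp=False)
        print(res.params)                                 # slopes plus threshold parameters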

  9. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
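
    One standard remedy for the problem documented above is to model both eyes per person with generalized estimating equations and an exchangeable working correlation, rather than treating eyes as independent. The sketch below does this on simulated data; the outcome, covariate, and effect sizes are invented.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n_pat = 200
        patient = np.repeat(np.arange(n_pat), 2)              # two eyes per patient
        person_effect = np.repeat(rng.normal(size=n_pat), 2)  # induces intereye correlation
        age = np.repeat(rng.normal(60, 10, size=n_pat), 2)
        iop = 15 + 0.05 * age + person_effect + rng.normal(size=2 * n_pat)

        X = sm.add_constant(age)
        res = sm.GEE(iop, X, groups=patient,
                     cov_struct=sm.cov_struct.Exchangeable(),
                     family=sm.families.Gaussian()).fit()
        print(res.params, res.bse)   # robust SEs account for the within-person clustering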

  10. Quantitative Analysis of Repertoire-Scale Immunoglobulin Properties in Vaccine-Induced B Cell Responses

    DTIC Science & Technology

    Immunosequencing now readily generates 10³-10⁵ sequences per sample; however, statistical analysis of these repertoires is challenging because of the high genetic... diversity of BCRs and the elaborate clonal relationships among them. To date, most immunosequencing analyses have focused on reporting qualitative... repertoire differences, (2) identifying how two repertoires differ, and (3) determining appropriate confidence intervals for assessing the size of the differences and their potential biological relevance.

  11. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  12. Assessment of disinfection of hospital surfaces using different monitoring methods

    PubMed Central

    Ferreira, Adriano Menis; de Andrade, Denise; Rigotti, Marcelo Alessandro; de Almeida, Margarete Teresa Gottardo; Guerra, Odanir Garcia; dos Santos, Aires Garcia

    2015-01-01

    OBJECTIVE: to assess the efficiency of cleaning/disinfection of surfaces of an Intensive Care Unit. METHOD: descriptive-exploratory study with a quantitative approach conducted over the course of four weeks. Visual inspection, bioluminescence adenosine triphosphate and microbiological indicators were used to indicate cleanliness/disinfection. Five surfaces (bed rails, bedside tables, infusion pumps, nurses' counter, and medical prescription table) were assessed before and after the use of rubbing alcohol at 70% (w/v), totaling 160 samples for each method. Non-parametric tests were used, considering differences statistically significant at p<0.05. RESULTS: after the cleaning/disinfection process, 87.5, 79.4 and 87.5% of the surfaces were considered clean using the visual inspection, bioluminescence adenosine triphosphate and microbiological analyses, respectively. A statistically significant decrease was observed in the disapproval rates after the cleaning process considering the three assessment methods; the visual inspection was the least reliable. CONCLUSION: the cleaning/disinfection method was efficient in reducing the microbial load and organic matter on surfaces; however, these findings require further study to clarify aspects related to the efficiency of friction, its frequency, and whether or not it should be associated with other inputs to achieve improved results of the cleaning/disinfection process. PMID:26312634

  13. Assessment of disinfection of hospital surfaces using different monitoring methods.

    PubMed

    Ferreira, Adriano Menis; de Andrade, Denise; Rigotti, Marcelo Alessandro; de Almeida, Margarete Teresa Gottardo; Guerra, Odanir Garcia; dos Santos Junior, Aires Garcia

    2015-01-01

    OBJECTIVE: to assess the efficiency of cleaning/disinfection of surfaces of an Intensive Care Unit. METHOD: descriptive-exploratory study with a quantitative approach conducted over the course of four weeks. Visual inspection, bioluminescence adenosine triphosphate and microbiological indicators were used to indicate cleanliness/disinfection. Five surfaces (bed rails, bedside tables, infusion pumps, nurses' counter, and medical prescription table) were assessed before and after the use of rubbing alcohol at 70% (w/v), totaling 160 samples for each method. Non-parametric tests were used, considering differences statistically significant at p<0.05. RESULTS: after the cleaning/disinfection process, 87.5, 79.4 and 87.5% of the surfaces were considered clean using the visual inspection, bioluminescence adenosine triphosphate and microbiological analyses, respectively. A statistically significant decrease was observed in the disapproval rates after the cleaning process considering the three assessment methods; the visual inspection was the least reliable. CONCLUSION: the cleaning/disinfection method was efficient in reducing the microbial load and organic matter on surfaces; however, these findings require further study to clarify aspects related to the efficiency of friction, its frequency, and whether or not it should be associated with other inputs to achieve improved results of the cleaning/disinfection process.

  14. Using transportation accident databases to investigate ignition and explosion probabilities of flammable spills.

    PubMed

    Ronza, A; Vílchez, J A; Casal, J

    2007-07-19

    Risk assessment of hazardous material spill scenarios, and quantitative risk assessment in particular, makes use of event trees to account for the possible outcomes of hazardous releases. Using event trees entails defining probabilities of occurrence for events such as spill ignition and blast formation. This study comprises an extensive analysis of the ignition and explosion probability data proposed in previous work. Subsequently, the results of a survey of two vast US federal spill databases (HMIRS, by the Department of Transportation, and MINMOD, by the US Coast Guard) are reported and commented on. Tens of thousands of records of hydrocarbon spills were analysed. The general pattern of statistical ignition and explosion probabilities as a function of the amount and the substance spilled is discussed. Equations based on the statistical data are proposed that predict the ignition probability of hydrocarbon spills as a function of the amount and the substance spilled. Explosion probabilities are put forth as well. Two sets of probability data are proposed: it is suggested that figures deduced from HMIRS be used in land transportation risk assessment, and MINMOD results in the assessment of maritime scenarios. Results are discussed and compared with the previous technical literature.
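    A hedged sketch of the kind of relationship the paper estimates: ignition probability rising with spill size, fit here by logistic regression on log-amount. The data and coefficients below are simulated and bear no relation to the HMIRS/MINMOD results:

```python
# Logistic regression of ignition on log(spill amount); simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
amount = rng.lognormal(mean=6, sigma=2, size=2000)        # kg spilled
p_true = 1 / (1 + np.exp(-(-6 + 0.6 * np.log(amount))))   # rises with spill size
ignited = (rng.random(2000) < p_true).astype(int)

X = sm.add_constant(np.log(amount))
res = sm.Logit(ignited, X).fit(disp=False)
print(res.params)                               # intercept, slope on log(kg)
print(res.predict([[1.0, np.log(10_000)]]))     # P(ignition | 10-tonne spill)
```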

  15. Assessment of the quality of reporting in abstracts of systematic reviews with meta-analyses in periodontology and implant dentistry.

    PubMed

    Faggion, C M; Liu, J; Huda, F; Atieh, M

    2014-04-01

    Proper scientific reporting is necessary to ensure the correct interpretation of study results by readers. The main objective of this study was to assess the quality of reporting in abstracts of systematic reviews (SRs) with meta-analyses in periodontology and implant dentistry. Differences in the reporting of abstracts between Cochrane and paper-based reviews were also assessed. The PubMed electronic database and the Cochrane database of SRs were searched on November 11, 2012, independently and in duplicate, for SRs with meta-analyses related to interventions in periodontology and implant dentistry. Assessment of the quality of reporting was performed independently and in duplicate, taking into account items related to the direction of effect, numerical estimates of effect size, measures of precision, probability and consistency. We initially screened 433 papers and included 146 (127 paper-based and 19 Cochrane reviews). The direction of evidence was reported in two-thirds of the abstracts, while the strength of evidence and measures of precision (i.e., confidence intervals) were reported in less than half of the selected abstracts. Measures of consistency such as the I² statistic were reported in only 5% of the selected sample of abstracts. Cochrane abstracts reported the limitations of evidence and precision better than paper-based ones. Two items ("meta-analysis" in the title and in the abstract) were nevertheless better reported in paper-based abstracts. Abstracts of SRs with meta-analyses in periodontology and implant dentistry currently have no uniform standard of reporting, which may hinder readers' understanding of study outcomes. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Evaluation of the impact of interdisciplinarity in cancer care

    PubMed Central

    2011-01-01

    Background Teamwork is a key component of the health care renewal strategy emphasized in Quebec, elsewhere in Canada and in other countries to enhance the quality of oncology services. While this innovation would appear beneficial in theory, empirical evidence of its impact is limited. Current efforts in Quebec to encourage the development of local interdisciplinary teams in all hospitals offer a unique opportunity to assess the anticipated benefits. These teams working in hospital outpatient clinics are responsible for treatment, follow-up and patient support. The study objective is to assess the impact of interdisciplinarity on cancer patients and health professionals. Methods/Design This is a quasi-experimental study with three comparison groups distinguished by intensity of interdisciplinarity: strong, moderate and weak. The study will use a random sample of 12 local teams in Quebec, stratified by intensity of interdisciplinarity. The instrument to measure the intensity of interdisciplinarity, developed in collaboration with experts, encompasses five dimensions referring to aspects of team structure and process. Self-administered questionnaires will be used to measure the impact of interdisciplinarity on patients (health care utilization, continuity of care and cancer services responsiveness) and on professionals (professional well-being, assessment of teamwork and perception of teamwork climate). Approximately 100 health professionals working on the selected teams and 2000 patients will be recruited. Statistical analyses will include descriptive statistics and comparative analyses of the impact observed according to the strata of interdisciplinarity. Fixed and random multivariate statistical models (multilevel analyses) will also be used. Discussion This study will pinpoint to what extent interdisciplinarity is linked to quality of care and meets the complex and varied needs of cancer patients. It will also ascertain to what extent interdisciplinary teamwork facilitates the work of professionals. Such findings are important given the growing prevalence of cancer and the importance of attracting and retaining health professionals to work with cancer patients. PMID:21639897

  17. Development and psychometric testing of a new geriatric spiritual well-being scale.

    PubMed

    Dunn, Karen S

    2008-09-01

    Aims and objectives. Assess the psychometric properties of a new geriatric spiritual well-being scale (GSWS), specifically designed for older adults. Background. Religiosity and spiritual wellness must be measured as two distinct concepts to prevent confounding them as synonymous among atheist and agnostic populations. Design. A test-retest survey design was used to estimate the psychometric properties. Methods. A convenience sample of 138 community-dwelling older adults was drawn from the inner city of Detroit. Data were collected using telephone survey interviews. Data analyses included descriptive statistics, structural equation modelling, reliability analyses, and point-biserial correlations. Results. The factorial validity of the proposed model was not supported by the data. Fit indices were χ² = 185.98, d.f. = 98, P < 0.001, with a goodness-of-fit index of 0.85, a comparative fit index of 0.87 and a root mean square error of approximation of 0.08, indicating a mediocre fit. Reliability statistics for the subscales ranged from poor (0.36) to good (0.84), with an acceptable overall scale alpha of 0.76. Participants' performance stability and criterion-related validity were also supported. Conclusions. The GSWS is an age-specific assessment tool that was developed specifically to address a population's cultural diversity. Future research will test the psychometric properties of this scale in culturally diverse older adult populations for further instrument development. Relevance to clinical practice. Nurses need to recognize that agnostics/atheists have spiritual needs that do not include religious beliefs or practices. Thus, assessing patients' religious beliefs and practices prior to assessing spiritual well-being is essential to prevent bias. © 2008 The Author. Journal compilation © 2008 Blackwell Publishing Ltd.
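    For orientation, the subscale reliabilities quoted above are Cronbach's alpha values; the helper below computes alpha from a respondents-by-items score matrix. The item data are simulated for illustration:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(138, 1))                      # one trait per respondent
items = latent + rng.normal(scale=1.0, size=(138, 8))   # 8 correlated items
print(round(cronbach_alpha(items), 2))
```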

  18. Topographic relationships for design rainfalls over Australia

    NASA Astrophysics Data System (ADS)

    Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.

    2016-02-01

    Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily and sub-daily rainfall data. The design rainfall relationships are often required for catchments with limited rainfall records, particularly catchments in remote areas with high topographic relief, and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross-validation statistics are used to demonstrate the 100-fold impact of elevation, relative to the horizontal coordinates, in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance-stabilising transformations to the data were also assessed. It was found that a relatively large number of knots (3570), suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross-validation statistics compared with applying no data transformation, but detailed assessments of residuals in complex high-rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular for balancing the need for a continentally optimal solution with sufficient definition at the local scale.
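    A rough sketch of the elevation-dependent interpolation idea: elevation enters as a third coordinate, scaled so that a unit of elevation carries far more weight than a unit of horizontal distance (the paper reports a roughly 100-fold impact). SciPy's RBFInterpolator is used here as a stand-in; the study itself used trivariate thin plate smoothing splines, not this exact routine, and all data below are simulated:

```python
# Thin-plate-spline-style interpolation with a scaled elevation coordinate.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
n = 500
x, y = rng.uniform(0, 1000, n), rng.uniform(0, 1000, n)    # km
elev = rng.uniform(0, 2.0, n)                               # km
rain = 40 + 30 * elev + 0.01 * y + rng.normal(0, 3, n)      # mean annual max, mm

scale = 100.0   # relative weight of elevation vs horizontal coordinates
pts = np.column_stack([x, y, scale * elev])
spline = RBFInterpolator(pts, rain, kernel="thin_plate_spline", smoothing=50.0)

query = np.array([[500.0, 500.0, scale * 1.2]])   # grid point at 1200 m elevation
print(spline(query))
```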

  19. The relationship between the behavior problems and motor skills of students with intellectual disability.

    PubMed

    Lee, Yangchool; Jeoung, Bogja

    2016-12-01

    The purpose of this study was to determine the relationship between the motor skills and the behavior problems of students with intellectual disabilities. The participants were 117 students with intellectual disabilities, aged between 7 and 25 years (male, n=79; female, n=38), attending special education schools in South Korea. Motor skill abilities were assessed using the second version of the Bruininks-Oseretsky test of motor proficiency, which includes subtests in fine motor control, manual coordination, body coordination, and strength and agility. Data were analyzed with IBM SPSS 21 using correlation and regression analyses, with the significance level set at P<0.05. The results showed that fine motor precision and integration had a statistically significant influence on aggressive behavior. Manual dexterity had a statistically significant influence on somatic complaints and anxiety/depression, and bilateral coordination had a statistically significant influence on social problems, attention problems, and aggressive behavior. Balance, as well as speed and agility, had a statistically significant influence on social problems and aggressive behavior. Upper limb coordination and strength had a statistically significant influence on social problems.

  20. Cervical vertebral maturation and dental age in coeliac patients.

    PubMed

    Costacurta, M; Condò, R; Sicuro, L; Perugia, C; Docimo, R

    2011-07-01

    The aim of the study was to evaluate cervical vertebral maturation and dental age in a group of patients with coeliac disease (CD) in comparison with a control group of healthy subjects. At the Paediatric Dentistry Unit of the PTV Hospital, "Tor Vergata" University of Rome, 120 female patients, age range 12.0-12.9 years, were recruited. Among them, 60 subjects (Group 1) were affected by CD, while the control group (Group 2) consisted of 60 healthy, sex- and age-matched subjects. Group 1 was subdivided, according to the timing of the CD diagnosis, into Group A (early diagnosis) and Group B (late diagnosis). Skeletal age was determined by assessing cervical vertebral maturation, while dental age was determined using the method codified by Demirjian. STATISTICS: The analyses were performed using the SPSS software (version 16; SPSS Inc., Chicago IL, USA). In all the assessments a significance level of alpha = 0.05 was used. There are no statistically significant differences between Group 1 and Group 2 in chronological age (p=0.122). In the assessment of skeletal-dental age, however, there are statistically significant differences between Group 1 and Group 2 (p<0.001) and between Group A and Group B (p<0.001). The statistical analysis carried out to assess the differences between chronological and skeletal-dental age within the single groups shows a statistically significant difference in Group 1 (p<0.001) and in Group B (p<0.001), while there are no statistically significant differences in Group 2 (p=0.538) and in Group A (p=0.475). A correlation between skeletal and dental age was registered; for Groups 1-2 and for Groups A-B the Pearson coefficient was equal to 0.967 and 0.969, respectively, with p<0.001. The analysis of the data shows that the percentage of subjects with delayed skeletal and dental age corresponds to 20% in healthy subjects, 56.7% in coeliac subjects, 23% in coeliac subjects with early diagnosis and 90% in coeliac subjects with late diagnosis. From the comparison between Group 2 and Group A there are no statistically significant differences (p=0.951). CONCLUSIONS: Delays in skeletal age and dental age may be two predictive indexes for a CD diagnosis. Dental age and cervical vertebral maturity can be assessed with a low-cost, non-invasive, easy-to-perform exam based on routine radiographic examinations such as orthopanoramic and lateral teleradiography.

  1. Statistical modeling implicates neuroanatomical circuit mediating stress relief by ‘comfort’ food

    PubMed Central

    Ulrich-Lai, Yvonne M.; Christiansen, Anne M.; Wang, Xia; Song, Seongho; Herman, James P.

    2015-01-01

    A history of eating highly palatable foods reduces physiological and emotional responses to stress. For instance, we have previously shown that limited sucrose intake (4 ml of 30% sucrose twice daily for 14 days) reduces hypothalamic-pituitary-adrenocortical (HPA) axis responses to stress. However, the neural mechanisms underlying stress relief by such ‘comfort’ foods are unclear, and could reveal an endogenous brain pathway for stress mitigation. As such, the present work assessed the expression of several proteins related to neuronal activation and/or plasticity in multiple stress- and reward-regulatory brain regions of rats after limited sucrose (vs. water control) intake. These data were then subjected to a series of statistical analyses, including Bayesian modeling, to identify the most likely neurocircuit mediating stress relief by sucrose. The analyses suggest that sucrose reduces HPA activation by dampening an excitatory basolateral amygdala-medial amygdala circuit, while also potentiating an inhibitory bed nucleus of the stria terminalis principal subdivision-mediated circuit, resulting in reduced HPA activation after stress. Collectively, the results support the hypothesis that sucrose limits stress responses via plastic changes to the structure and function of stress-regulatory neural circuits. The work also illustrates that advanced statistical methods are useful approaches to identify potentially novel and important underlying relationships in biological data sets. PMID:26246177

  2. Statistical modeling implicates neuroanatomical circuit mediating stress relief by 'comfort' food.

    PubMed

    Ulrich-Lai, Yvonne M; Christiansen, Anne M; Wang, Xia; Song, Seongho; Herman, James P

    2016-07-01

    A history of eating highly palatable foods reduces physiological and emotional responses to stress. For instance, we have previously shown that limited sucrose intake (4 ml of 30 % sucrose twice daily for 14 days) reduces hypothalamic-pituitary-adrenocortical (HPA) axis responses to stress. However, the neural mechanisms underlying stress relief by such 'comfort' foods are unclear, and could reveal an endogenous brain pathway for stress mitigation. As such, the present work assessed the expression of several proteins related to neuronal activation and/or plasticity in multiple stress- and reward-regulatory brain regions of rats after limited sucrose (vs. water control) intake. These data were then subjected to a series of statistical analyses, including Bayesian modeling, to identify the most likely neurocircuit mediating stress relief by sucrose. The analyses suggest that sucrose reduces HPA activation by dampening an excitatory basolateral amygdala-medial amygdala circuit, while also potentiating an inhibitory bed nucleus of the stria terminalis principal subdivision-mediated circuit, resulting in reduced HPA activation after stress. Collectively, the results support the hypothesis that sucrose limits stress responses via plastic changes to the structure and function of stress-regulatory neural circuits. The work also illustrates that advanced statistical methods are useful approaches to identify potentially novel and important underlying relationships in biological datasets.

  3. Increased skills usage statistically mediates symptom reduction in self-guided internet-delivered cognitive-behavioural therapy for depression and anxiety: a randomised controlled trial.

    PubMed

    Terides, Matthew D; Dear, Blake F; Fogliati, Vincent J; Gandy, Milena; Karin, Eyal; Jones, Michael P; Titov, Nickolai

    2018-01-01

    Cognitive-behavioural therapy (CBT) is an effective treatment for clinical and subclinical symptoms of depression and general anxiety, and increases life satisfaction. Patients' usage of CBT skills is a core aspect of treatment, but there is insufficient empirical evidence that skills usage behaviours are a mechanism of clinical change. This study investigated whether an internet-delivered CBT (iCBT) intervention increased the frequency of CBT skills usage behaviours and whether this statistically mediated reductions in symptoms and increases in life satisfaction. A two-group randomised controlled trial was conducted comparing internet-delivered CBT (n = 65) with a waitlist control group (n = 75). Participants were individuals experiencing clinically significant symptoms of depression or general anxiety. Mixed linear model analyses revealed that the treatment group reported a significantly higher frequency of skills usage, lower symptoms, and higher life satisfaction by the end of treatment compared with the control group. Results from bootstrapping mediation analyses revealed that the increased skills usage behaviours statistically mediated the symptom reductions and the increase in life satisfaction. Although skills usage and symptom outcomes were assessed concurrently, these findings support the notion that iCBT increases the frequency of skills usage behaviours and suggest that this may be an important mechanism of change.
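    A minimal sketch of the bootstrapped mediation logic referred to above: the indirect effect is the product of the treatment-to-skills path (a) and the skills-to-symptoms path (b), with a percentile bootstrap confidence interval. Data and variable names are simulated, not the trial's:

```python
# Bootstrap of the a*b indirect effect; simulated stand-in data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 140
treat = rng.integers(0, 2, n)
skills = 2.0 * treat + rng.normal(0, 1, n)                        # a-path
symptoms = 20 - 1.5 * skills - 0.5 * treat + rng.normal(0, 2, n)  # b- and c'-paths

def indirect(idx):
    t, s, y = treat[idx], skills[idx], symptoms[idx]
    a = sm.OLS(s, sm.add_constant(t)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([s, t]))).fit().params[1]
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
est = indirect(np.arange(n))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```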

  4. Assessment and statistics of surgically induced astigmatism.

    PubMed

    Naeser, Kristian

    2008-05-01

    The aim of the thesis was to develop methods for the assessment of surgically induced astigmatism (SIA) in individual eyes and in groups of eyes. The thesis is based on 12 peer-reviewed publications, published over a period of 16 years. In these publications, older and contemporary literature was reviewed(1). A new method (the polar system) for the analysis of SIA was developed. Multivariate statistical analysis of refractive data was described(2-4). Clinical validation studies were performed. Descriptions of a cylinder surface with polar values and with differential geometry were compared. The main results were the following: refractive data in the form of sphere, cylinder and axis may define an individual patient or data set, but are unsuited to mathematical and statistical analyses(1). The polar value system converts net astigmatisms to orthonormal components in dioptric space. A polar value is the difference in meridional power between two orthogonal meridians(5,6). Any pair of polar values separated by an arc of 45 degrees characterizes a net astigmatism completely(7). The two polar values represent the net curvital and net torsional power over the chosen meridian(8). The spherical component is described by the spherical equivalent power. Several clinical studies demonstrated the efficiency of multivariate statistical analysis of refractive data(4,9-11). Polar values and formal differential geometry describe astigmatic surfaces with similar concepts and mathematical functions(8). Other contemporary methods, such as Long's power matrix, Holladay's and Alpins' methods, and Zernike(12) and Fourier analyses(8), are correlated with the polar value system. In conclusion, analysis of SIA should be performed with polar values or other contemporary component systems. The study was supported by Statens Sundhedsvidenskabeligt Forskningsråd, Cykelhandler P. Th. Rasmussen og Hustrus Mindelegat, Hotelejer Carl Larsen og Hustru Nicoline Larsens Mindelegat, Landsforeningen til Vaern om Synet, Forskningsinitiativet for Arhus Amt, Alcon Denmark, and Desirée and Niels Ydes Fond.
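    Working only from the definitions given in the abstract, a hedged sketch of the polar-value idea: the meridional power of a cylinder C with axis α along meridian φ follows the sine-squared rule, and a polar value is the difference in meridional power between two orthogonal meridians. Sign conventions differ between authors, so this is an illustration of the double-angle decomposition rather than Naeser's exact notation:

```python
# Polar values from the sine-squared rule; illustrative conventions only.
import numpy as np

def meridional_power(C, alpha_deg, phi_deg):
    a = np.radians(phi_deg - alpha_deg)
    return C * np.sin(a) ** 2

def polar_value(C, alpha_deg, phi_deg):
    # Difference in meridional power between two orthogonal meridians.
    return meridional_power(C, alpha_deg, phi_deg) - \
           meridional_power(C, alpha_deg, phi_deg + 90)

C, axis = 1.5, 20.0   # 1.5 D cylinder at axis 20 degrees
kp0, kp45 = polar_value(C, axis, 0), polar_value(C, axis, 45)
print(kp0, kp45)
# Two polar values 45 degrees apart recover the net astigmatism:
print(np.hypot(kp0, kp45))   # magnitude equals the cylinder power, 1.5 D
```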

  5. Association factor analysis between osteoporosis with cerebral artery disease: The STROBE study.

    PubMed

    Jin, Eun-Sun; Jeong, Je Hoon; Lee, Bora; Im, Soo Bin

    2017-03-01

    The purpose of this study was to determine the clinical factors underlying the association between osteoporosis and cerebral artery disease in a Korean population. Two hundred nineteen postmenopausal women and men undergoing cerebral computed tomography angiography were enrolled in this cross-sectional study to evaluate cerebral artery disease. Cerebral artery disease was diagnosed if there was narrowing of 50% or more of the diameter in one or more cerebral arteries or the presence of vascular calcification. History of osteoporotic fracture was assessed using medical records and radiographic data such as simple radiography, MRI, and bone scans. Bone mineral density was measured by dual-energy x-ray absorptiometry. We reviewed the clinical characteristics of all patients and also retrospectively performed subgroup analyses for the total and extracranial/intracranial cerebral artery disease groups. Statistical analysis was performed by means of the chi-square test or Fisher's exact test for categorical variables and Student's t-test or Wilcoxon's rank sum test for continuous variables. Univariate and multivariate logistic regression analyses were also conducted to assess the factors associated with the prevalence of cerebral artery disease. A two-tailed p-value of less than 0.05 was considered statistically significant. All statistical analyses were performed using R (version 3.1.3; The R Foundation for Statistical Computing, Vienna, Austria) and SPSS (version 14.0; SPSS, Inc, Chicago, Ill, USA). Of the 219 patients, 142 had cerebral artery disease. Vertebral fractures were observed in 29 (13.24%) patients. There was a significant difference in hip fracture according to the presence or absence of cerebral artery disease. In logistic regression analysis, osteoporotic hip fracture was significantly associated with extracranial cerebral artery disease after adjusting for multiple risk factors. In females, osteoporotic hip fracture was associated with total calcified cerebral artery disease. Clinical factors such as age, hypertension, osteoporotic hip fracture, smoking history and anti-osteoporosis drug use were associated with cerebral artery disease.

  6. Improving the Prognostic Ability through Better Use of Standard Clinical Data - The Nottingham Prognostic Index as an Example

    PubMed Central

    Winzer, Klaus-Jürgen; Buchholz, Anika; Schumacher, Martin; Sauerbrei, Willi

    2016-01-01

    Background Prognostic factors and prognostic models play a key role in medical research and patient management. The Nottingham Prognostic Index (NPI) is a well-established prognostic classification scheme for patients with breast cancer. In a very simple way, it combines the information from tumor size, lymph node stage and tumor grade. For the resulting index, cutpoints are proposed to classify it into three to six groups with different prognoses. As not all of the prognostic information from the three components and other standard factors is used, we consider how the prognostic ability can be improved using suitable analysis approaches. Methods and Findings Reanalyzing overall survival data of 1560 patients from a clinical database using multivariable fractional polynomials and other modern statistical methods, we illustrate suitable multivariable modelling and methods to derive and assess the prognostic ability of an index. Using a REMARK-type profile, we summarize the relevant steps of the analysis. Adding the information from hormonal receptor status and using the full information from the three NPI components, specifically the number of positive lymph nodes, an extended NPI with improved prognostic ability is derived. Conclusions The prognostic ability of even one of the best-established prognostic indices in medicine can be improved by using suitable statistical methodology to extract the full information from standard clinical data. This extended version of the NPI can serve as a benchmark to assess the added value of new information, ranging from a new single clinical marker to a derived index from omics data. An established benchmark would also help to harmonize the statistical analyses of such studies and protect against the propagation of many false promises concerning the prognostic value of new measurements. The statistical methods used are generally available and can be used for similar analyses in other diseases. PMID:26938061
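    For orientation, the widely cited form of the NPI combines the three components as below; the paper's argument is that a suitably extended index can outperform this simple score. The formula and the three-group cutpoints in the comment are the commonly quoted published ones, stated here from general knowledge rather than from this paper:

```python
# Standard Nottingham Prognostic Index (widely cited form; verify against
# the original sources before any clinical use).
def npi(tumour_size_cm: float, node_stage: int, grade: int) -> float:
    """NPI = 0.2 x size (cm) + lymph node stage (1-3) + histological grade (1-3)."""
    return 0.2 * tumour_size_cm + node_stage + grade

score = npi(tumour_size_cm=2.5, node_stage=2, grade=3)
print(score)   # 0.2*2.5 + 2 + 3 = 5.5
# Commonly used groups: <= 3.4 good, 3.41-5.4 moderate, > 5.4 poor prognosis.
```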

  7. Statistical Approaches Used to Assess the Equity of Access to Food Outlets: A Systematic Review

    PubMed Central

    Lamb, Karen E.; Thornton, Lukar E.; Cerin, Ester; Ball, Kylie

    2015-01-01

    Background Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify if access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on neighbourhood and food outlet access measure definitions. However, no review has informatively discussed the suitability of the statistical methodologies employed; a key issue determining the validity of study findings. Our aim was to examine the suitability of statistical approaches adopted in these analyses. Methods Searches were conducted for articles published from 2000–2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status. Results Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess if these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation. Conclusions With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results. PMID:29546115
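    A small sketch of one analysis pattern the review describes: regressing neighbourhood food outlet counts on a deprivation score with a negative binomial GLM (data simulated, names hypothetical). As the review notes, residuals from such a model should additionally be checked for spatial autocorrelation, which few of the reviewed studies did:

```python
# Negative binomial GLM for overdispersed outlet counts; simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 400
deprivation = rng.normal(0, 1, n)                    # neighbourhood SES score
mu = np.exp(1.0 + 0.3 * deprivation)                 # more outlets where deprived
outlets = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # gamma mixing -> overdispersion

X = sm.add_constant(deprivation)
res = sm.GLM(outlets, X, family=sm.families.NegativeBinomial()).fit()
print(res.summary())
```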

  8. Characterizing Sub-Daily Flow Regimes: Implications of Hydrologic Resolution on Ecohydrology Studies

    DOE PAGES

    Bevelhimer, Mark S.; McManamay, Ryan A.; O'Connor, B.

    2014-05-26

    Natural variability in flow is a primary factor controlling geomorphic and ecological processes in riverine ecosystems. Within the hydropower industry, there is growing pressure from environmental groups and natural resource managers to change reservoir releases from daily peaking to run-of-river operations, on the basis of the assumption that downstream biological communities will improve under a more natural flow regime. In this paper, we discuss the importance of assessing sub-daily flows for understanding the physical and ecological dynamics within river systems. We present a variety of metrics for characterizing sub-daily flow variation and use these metrics to evaluate general trends among streams affected by peaking hydroelectric projects, run-of-river projects and streams that are largely unaffected by flow-altering activities. Univariate and multivariate techniques were used to assess similarity among different stream types on the basis of these sub-daily metrics. For comparison, similar analyses were performed using analogous metrics calculated with mean daily flow values. Our results confirm that sub-daily flow metrics reveal variation among and within streams that is not captured by daily flow statistics. Using sub-daily flow statistics, we were able to quantify the degree of difference between unaltered and peaking streams and the amount of similarity between unaltered and run-of-river streams. The sub-daily statistics were largely uncorrelated with daily statistics of similar scope. Furthermore, on short temporal scales, sub-daily statistics reveal the relatively constant nature of unaltered stream reaches and the highly variable nature of hydropower-affected streams, whereas daily statistics show just the opposite over longer temporal scales.
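    One simple way to quantify what daily averaging hides is a flashiness metric computed at both resolutions; the sketch below uses one common form of the Richards-Baker flashiness index on a simulated hydropeaking-style series. This is a generic illustration, not one of the paper's specific metrics:

```python
# Richards-Baker-style flashiness: sum(|q_t - q_{t-1}|) / sum(q_t).
import numpy as np

def rb_flashiness(q: np.ndarray) -> float:
    q = np.asarray(q, dtype=float)
    return np.abs(np.diff(q)).sum() / q[1:].sum()

rng = np.random.default_rng(7)
hours = np.arange(24 * 30)
# Daily-peaking release: high flow six hours a day, plus noise.
peaking = 20 + 80 * ((hours % 24) < 6) + rng.normal(0, 2, hours.size)
daily = peaking.reshape(-1, 24).mean(axis=1)   # same series as daily means

print(rb_flashiness(peaking))  # high at hourly resolution
print(rb_flashiness(daily))    # much lower once averaged to daily values
```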

  9. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, and so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data, in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
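    A minimal sketch of the recommended fix: a linear mixed effects model with a random intercept per host, so repeated parasitaemia measurements from one infection are not treated as independent. Data and names (parasitaemia, day, host) are simulated:

```python
# Random-intercept mixed model for repeated measures; simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_hosts, n_days = 20, 10
df = pd.DataFrame({
    "host": np.repeat(np.arange(n_hosts), n_days),
    "day": np.tile(np.arange(n_days), n_hosts),
})
host_effect = np.repeat(rng.normal(0, 1.5, n_hosts), n_days)  # shared within host
df["parasitaemia"] = 5 + 0.4 * df["day"] + host_effect + rng.normal(0, 0.5, len(df))

res = smf.mixedlm("parasitaemia ~ day", df, groups=df["host"]).fit()
print(res.summary())
```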

  10. Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

    DOE PAGES

    Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...

    2014-02-24

    The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.

  11. Communication about patient pain in primary care: development of the Physician-Patient Communication about Pain scale (PCAP).

    PubMed

    Haskard-Zolnierek, Kelly B

    2012-01-01

    This paper describes the development of the 47-item Physician-Patient Communication about Pain (PCAP) scale for use with audiotaped medical visit interactions. Patient pain was assessed with the Medical Outcomes Study SF-36 Bodily Pain Scale. Four raters assessed 181 audiotaped interactions between patients and 68 physicians. Descriptive statistics of the PCAP items were computed. Principal components analyses of 20 scale items were used to reduce the scale to composite variables for analysis. Validity was assessed by (1) comparing PCAP composite scores for patients with high versus low pain and (2) correlating PCAP composites with a separate communication rating scale. Principal components analyses yielded four physician and five patient communication composites (mean alpha=.77). Some evidence for concurrent validity was provided (5 of 18 correlations with the communication validation rating scale were significant). Paired-sample t tests showed significant differences for 4 patient PCAP composites, showing that the PCAP scale discriminates between the communication of high- and low-pain patients. The PCAP scale thus shows partial evidence of reliability and two forms of validity. More research with this scale (developing more reliable and valid composites) is needed to extend these preliminary findings before the scale is applicable for use in practice. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Prognostic factors in patients with advanced cancer: use of the patient-generated subjective global assessment in survival prediction.

    PubMed

    Martin, Lisa; Watanabe, Sharon; Fainsinger, Robin; Lau, Francis; Ghosh, Sunita; Quan, Hue; Atkins, Marlis; Fassbender, Konrad; Downing, G Michael; Baracos, Vickie

    2010-10-01

    To determine whether elements of a standard nutritional screening assessment are independently prognostic of survival in patients with advanced cancer. A prospective nested cohort of patients with metastatic cancer was accrued from different units of a Regional Palliative Care Program. Patients completed a nutritional screen on admission. Data included age, sex, cancer site, height, weight history, dietary intake, 13 nutrition impact symptoms, and patient- and physician-reported performance status (PS). Univariate and multivariate survival analyses were conducted. Concordance statistics (c-statistics) were used to test the predictive accuracy of models based on training and validation sets; a c-statistic of 0.5 indicates the model predicts the outcome as well as chance, and perfect prediction has a c-statistic of 1.0. A training set of patients in palliative home care (n = 1,164) was used to identify prognostic variables. Primary disease site, PS, short-term weight change (either gain or loss), dietary intake, and dysphagia predicted survival in multivariate analysis (P < .05). A model including only disease site and PS showed high c-statistics between predicted and observed survival in the training set (0.90) and the validation set (0.88; n = 603). The addition of weight change, dietary intake, and dysphagia did not further improve the c-statistic of the model. The c-statistic was also not altered by substituting physician-rated palliative PS for patient-reported PS. We demonstrate a high probability of concordance between predicted and observed survival for patients in distinct palliative care settings (home care, tertiary inpatient, ambulatory outpatient) based on patient-reported information.
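    To make the c-statistic concrete: for survival data without censoring, it is the fraction of usable patient pairs in which the higher predicted risk goes with the earlier death (0.5 is chance, 1.0 is perfect). A self-contained toy implementation on made-up numbers:

```python
# Pairwise concordance statistic for uncensored survival data.
import numpy as np

def c_statistic(time: np.ndarray, risk: np.ndarray) -> float:
    time, risk = np.asarray(time, float), np.asarray(risk, float)
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue                             # no ordering information
            usable += 1
            hi = i if risk[i] > risk[j] else j       # higher predicted risk
            early = i if time[i] < time[j] else j    # earlier death
            if risk[i] == risk[j]:
                ties += 1                            # tied risks count half
            elif hi == early:
                conc += 1
    return (conc + 0.5 * ties) / usable

time = np.array([2., 5., 8., 11., 20.])
risk = np.array([0.9, 0.8, 0.4, 0.5, 0.1])
print(c_statistic(time, risk))   # 0.9: higher risk mostly means shorter survival
```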

  13. SPSS and SAS programs for generalizability theory analyses.

    PubMed

    Mushquash, Christopher; O'Connor, Brian P

    2006-08-01

    The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors and generalizability coefficients, coefficients for D studies, and graphs of D study results.

  14. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    PubMed

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency was measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparisons was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
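    The core arithmetic of such adjusted indirect comparisons (the Bucher method) and of the inconsistency test is short enough to sketch: with a common comparator C, the indirect log odds ratio of A versus B is the difference of the two direct log odds ratios against C, and the direct-indirect difference is tested with a z statistic. All numbers below are made up:

```python
# Bucher indirect comparison and direct-vs-indirect inconsistency z-test.
import numpy as np
from scipy import stats

log_or_ac, se_ac = np.log(0.70), 0.15            # A vs C (direct)
log_or_bc, se_bc = np.log(0.90), 0.12            # B vs C (direct)
log_or_ab_dir, se_ab_dir = np.log(0.85), 0.20    # A vs B (direct)

log_or_ab_ind = log_or_ac - log_or_bc            # indirect A vs B via C
se_ab_ind = np.hypot(se_ac, se_bc)               # variances add

diff = log_or_ab_dir - log_or_ab_ind             # inconsistency on log scale
se_diff = np.hypot(se_ab_dir, se_ab_ind)
z = diff / se_diff
p = 2 * stats.norm.sf(abs(z))
print(f"indirect OR = {np.exp(log_or_ab_ind):.2f}, inconsistency p = {p:.3f}")
```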

  15. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    PubMed Central

    Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-01-01

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695

  16. Periodontal disease and carotid atherosclerosis: A meta-analysis of 17,330 participants.

    PubMed

    Zeng, Xian-Tao; Leng, Wei-Dong; Lam, Yat-Yin; Yan, Bryan P; Wei, Xue-Mei; Weng, Hong; Kwong, Joey S W

    2016-01-15

    The association between periodontal disease and carotid atherosclerosis has been evaluated primarily in single-center studies, and whether periodontal disease is an independent risk factor for carotid atherosclerosis remains uncertain. This meta-analysis aimed to evaluate the association between periodontal disease and carotid atherosclerosis. We searched PubMed and Embase for relevant observational studies up to February 20, 2015. Two authors independently extracted data from the included studies, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for the overall and subgroup meta-analyses. Statistical heterogeneity was assessed by the chi-squared test (P<0.1 for statistical significance) and quantified by the I² statistic. Data analysis was conducted using the Comprehensive Meta-Analysis (CMA) software. Fifteen observational studies involving 17,330 participants were included in the meta-analysis. The overall pooled result showed that periodontal disease was associated with carotid atherosclerosis (OR: 1.27, 95% CI: 1.14-1.41; P<0.001), but statistical heterogeneity was substantial (I² = 78.90%). Subgroup analysis of studies adjusting for smoking and diabetes mellitus showed borderline significance (OR: 1.08; 95% CI: 1.00-1.18; P=0.05). Sensitivity and cumulative analyses both indicated that our results were robust. The findings of our meta-analysis indicate that the presence of periodontal disease is associated with carotid atherosclerosis; however, further large-scale, well-conducted clinical studies are needed to explore the precise risk of developing carotid atherosclerosis in patients with periodontal disease. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
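    A compact sketch of the random-effects machinery reported above (Cochran's Q, I², DerSimonian-Laird τ², and a pooled OR with 95% CI), applied to made-up study-level odds ratios rather than the meta-analysis data:

```python
# DerSimonian-Laird random-effects pooling with Q and I^2; invented inputs.
import numpy as np
from scipy import stats

or_i = np.array([1.10, 1.45, 1.20, 1.60, 0.95])   # per-study odds ratios
se_i = np.array([0.10, 0.20, 0.15, 0.25, 0.18])   # SEs of log(OR)

y = np.log(or_i)
w = 1 / se_i**2                                    # fixed-effect weights
q = np.sum(w * (y - np.average(y, weights=w))**2)  # Cochran's Q
k = len(y) - 1
i2 = max(0.0, (q - k) / q) * 100                   # I^2 (%)
tau2 = max(0.0, (q - k) / (w.sum() - (w**2).sum() / w.sum()))  # DL tau^2

w_re = 1 / (se_i**2 + tau2)                        # random-effects weights
pooled = np.average(y, weights=w_re)
se_pooled = np.sqrt(1 / w_re.sum())
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se_pooled)
p = 2 * stats.norm.sf(abs(pooled / se_pooled))
print(f"OR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
      f"I^2 = {i2:.1f}%, p = {p:.4f}")
```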

  17. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences, and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (the Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, and Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of the different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) had more than one injury, and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to the WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683

  18. Causal modelling applied to the risk assessment of a wastewater discharge.

    PubMed

    Paul, Warren L; Rokahr, Pat A; Webb, Jeff M; Rees, Gavin N; Clune, Tim S

    2016-03-01

    Bayesian networks (BNs), or causal Bayesian networks, have become quite popular in ecological risk assessment and natural resource management because of their utility as a communication and decision-support tool. Since their development in the field of artificial intelligence in the 1980s, however, Bayesian networks have evolved and merged with structural equation modelling (SEM). Unlike BNs, which are constrained to encode causal knowledge in conditional probability tables, SEMs encode this knowledge in structural equations, which is thought to be a more natural language for expressing causal information. This merger has clarified the causal content of SEMs and generalised the method such that it can now be performed using standard statistical techniques. As it was with BNs, the utility of this new generation of SEM in ecological risk assessment will need to be demonstrated with examples to foster an understanding and acceptance of the method. Here, we applied SEM to the risk assessment of a wastewater discharge to a stream, with a particular focus on the process of translating a causal diagram (conceptual model) into a statistical model which might then be used in the decision-making and evaluation stages of the risk assessment. The process of building and testing a spatial causal model is demonstrated using data from a spatial sampling design, and the implications of the resulting model are discussed in terms of the risk assessment. It is argued that a spatiotemporal causal model would have greater external validity than the spatial model, enabling broader generalisations to be made regarding the impact of a discharge, and greater value as a tool for evaluating the effects of potential treatment plant upgrades. Suggestions are made on how the causal model could be augmented to include temporal as well as spatial information, including suggestions for appropriate statistical models and analyses.

  19. Cluster-level statistical inference in fMRI datasets: The unexpected behavior of random fields in high dimensions.

    PubMed

    Bansal, Ravi; Peterson, Bradley S

    2018-06-01

    Identifying regional effects of interest in MRI datasets usually entails testing a priori hypotheses across many thousands of brain voxels, requiring control for false positive findings in these multiple hypothesis tests. Recent studies have suggested that parametric statistical methods may have incorrectly modeled functional MRI data, thereby leading to higher false positive rates than their nominal rates. Nonparametric methods for statistical inference when conducting multiple statistical tests, in contrast, are thought to produce false positives at the nominal rate, which has thus led to the suggestion that previously reported studies should reanalyze their fMRI data using nonparametric tools. To understand better why parametric methods may yield excessive false positives, we assessed their performance when applied both to simulated datasets of 1D, 2D, and 3D Gaussian Random Fields (GRFs) and to 710 real-world, resting-state fMRI datasets. We showed that both the simulated 2D and 3D GRFs and the real-world data contain a small percentage (<6%) of very large clusters (on average 60 times larger than the average cluster size), which were not present in 1D GRFs. These unexpectedly large clusters were deemed statistically significant using parametric methods, leading to empirical familywise error rates (FWERs) as high as 65%: the high empirical FWERs were not a consequence of parametric methods failing to model spatial smoothness accurately, but rather of these very large clusters that are inherently present in smooth, high-dimensional random fields. In fact, when discounting these very large clusters, the empirical FWER for parametric methods was 3.24%. Furthermore, even an empirical FWER of 65% would yield on average less than one of those very large clusters in each brain-wide analysis. Nonparametric methods, in contrast, estimated their null distributions from data that included those large clusters and therefore, by construction, rejected the large clusters as false positives at the nominal FWERs. Those rejected clusters were outlying values in the distribution of cluster size but cannot be distinguished from true positive findings without further analyses, including assessing whether fMRI signal in those regions correlates with other clinical, behavioral, or cognitive measures. Rejecting the large clusters, however, significantly reduced the statistical power of nonparametric methods in detecting true findings compared with parametric methods, which would have detected most true findings that are essential for making valid biological inferences in MRI data. Parametric analyses, in contrast, detected most true findings while generating relatively few false positives: on average, less than one of those very large clusters would be deemed a true finding in each brain-wide analysis. We therefore recommend the continued use of parametric methods that model nonstationary smoothness for cluster-level, familywise control of false positives, particularly when using a Cluster Defining Threshold of 2.5 or higher, and subsequently assessing rigorously the biological plausibility of the findings, even for large clusters. Finally, because nonparametric methods yielded a large reduction in statistical power to detect true positive findings, we conclude that the modest reduction in false positive findings that nonparametric analyses afford does not warrant a re-analysis of previously published fMRI studies using nonparametric techniques. Copyright © 2018 Elsevier Inc. All rights reserved.
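    The cluster-level pipeline discussed above is easy to sketch: smooth a random volume, threshold it at a cluster-defining threshold (CDT), and tabulate cluster sizes; on smooth 3D fields the upper tail of this size distribution occasionally contains the very large clusters the authors describe. A toy version with SciPy:

```python
# Cluster sizes on a smoothed 3D random volume above a CDT; toy illustration.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)
field = rng.normal(size=(64, 64, 64))
field = ndimage.gaussian_filter(field, sigma=3)   # smooth, GRF-like volume
field /= field.std()                              # rescale to unit variance

cdt = 2.5                                         # cluster-defining threshold
labels, n_clusters = ndimage.label(field > cdt)   # connected suprathreshold voxels
sizes = np.bincount(labels.ravel())[1:]           # voxels per cluster (skip background)
print(n_clusters, sizes.mean(), sizes.max())      # note the heavy upper tail
```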

  20. Reduction of Fasting Blood Glucose and Hemoglobin A1c Using Oral Aloe Vera: A Meta-Analysis.

    PubMed

    Dick, William R; Fletcher, Emily A; Shah, Sachin A

    2016-06-01

    Diabetes mellitus is a global epidemic and one of the leading causes of morbidity and mortality. Additional medications that are novel, affordable, and efficacious are needed to treat this rampant disease. This meta-analysis was performed to ascertain the effectiveness of oral aloe vera consumption in reducing fasting blood glucose (FBG) and hemoglobin A1c (HbA1c). PubMed, CINAHL, Natural Medicines Comprehensive Database, and Natural Standard databases were searched. Studies of aloe vera's effect on FBG, HbA1c, homeostasis model assessment-estimated insulin resistance (HOMA-IR), fasting serum insulin, fructosamine, and oral glucose tolerance test (OGTT) in prediabetic and diabetic populations were examined. After data extraction, the FBG and HbA1c parameters had data appropriate for meta-analysis. Extracted data were verified and then analyzed with StatsDirect Statistical Software. Reductions of FBG and HbA1c were reported as weighted mean differences from baseline, calculated by a random-effects model with 95% confidence intervals. Subgroup analyses to determine clinical and statistical heterogeneity were also performed. Publication bias was assessed using the Egger bias statistic. Nine studies were included for the FBG parameter (n = 283); 5 of these studies included HbA1c data (n = 89). Aloe vera decreased FBG by 46.6 mg/dL (p < 0.0001) and HbA1c by 1.05% (p = 0.004). Significant reductions of both endpoints were maintained in all subgroup analyses. Additionally, the data suggest that patients with an FBG ≥200 mg/dL may see a greater benefit: a mean FBG reduction of 109.9 mg/dL was observed in this population (p ≤ 0.0001). The Egger statistic showed publication bias with FBG but not with HbA1c (p = 0.010 and p = 0.602, respectively). These results support the use of oral aloe vera for significantly reducing FBG (46.6 mg/dL) and HbA1c (1.05%). Further clinical studies that are more robust and better controlled are warranted to further explore these findings.
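    For reference, the Egger bias statistic used above comes from a simple regression: each study's standardized effect (effect/SE) is regressed on its precision (1/SE), and an intercept significantly different from zero suggests small-study asymmetry. The effects below are invented FBG reductions, not the meta-analysis data:

```python
# Egger regression test for funnel-plot asymmetry; invented study effects.
import numpy as np
import statsmodels.api as sm

effects = np.array([-60., -45., -55., -30., -70., -20.])   # mg/dL reductions
se = np.array([25., 15., 20., 8., 30., 5.])                # standard errors

X = sm.add_constant(1 / se)             # precision
res = sm.OLS(effects / se, X).fit()     # standardized effect ~ precision
intercept, p_val = res.params[0], res.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_val:.3f}")
```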

  1. Peri-implant assessment via cone beam computed tomography and digital periapical radiography: an ex vivo study.

    PubMed

    Silveira-Neto, Nicolau; Flores, Mateus Ericson; De Carli, João Paulo; Costa, Max Dória; Matos, Felipe de Souza; Paranhos, Luiz Renato; Linden, Maria Salete Sandini

    2017-11-01

    This research evaluated detail registration in peri-implant bone using two different cone beam computed tomography systems and a digital periapical radiograph. Three different image acquisition protocols were established for each cone beam computed tomography apparatus, and three clinical situations were simulated in an ex vivo fresh pig mandible: buccal bone defect, peri-implant bone defect, and bone contact. Data were subjected to two analyses: quantitative and qualitative. The quantitative analyses involved a comparison of real specimen measures using a digital caliper in three regions of the preserved buccal bone - A, B and E (control group) - to cone beam computed tomography images obtained with different protocols (kp1, kp2, kp3, ip1, ip2, and ip3). In the qualitative analyses, the ability to register peri-implant details via tomography and digital periapical radiography was verified, as indicated by twelve evaluators. Data were analyzed with ANOVA and Tukey's test (α=0.05). The quantitative assessment showed means statistically equal to those of the control group under the following conditions: buccal bone defect B and E with kp1 and ip1, peri-implant bone defect E with kp2 and kp3, and bone contact A with kp1, kp2, kp3, and ip2. Qualitatively, only bone contacts were significantly different among the assessments, and the p3 results differed from the p1 and p2 results. The other results were statistically equivalent. The registration of peri-implant details was influenced by the image acquisition protocol, although metal artifacts were produced in all situations. The evaluators preferred the Kodak 9000 3D cone beam computed tomography in most cases. The evaluators identified buccal bone defects better with cone beam computed tomography and identified peri-implant bone defects better with digital periapical radiography.
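
    The quantitative comparison uses one-way ANOVA followed by Tukey's test (α=0.05), as stated above. A minimal sketch with fabricated caliper and protocol measurements (hypothetical values, not the study's data):

    ```python
    # Sketch: one-way ANOVA followed by Tukey's HSD pairwise comparisons.
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    control = np.array([2.10, 2.05, 2.12, 2.08])   # caliper measures (mm), hypothetical
    kp1     = np.array([2.15, 2.09, 2.18, 2.11])   # protocol kp1, hypothetical
    ip1     = np.array([2.30, 2.25, 2.28, 2.35])   # protocol ip1, hypothetical

    print(f_oneway(control, kp1, ip1))             # overall test of group differences

    values = np.concatenate([control, kp1, ip1])
    groups = ["control"] * 4 + ["kp1"] * 4 + ["ip1"] * 4
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
    ```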

  2. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    PubMed

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
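
    The bias and MAE figures quoted above are simple aggregate comparisons of mapped versus reference cropland fractions. As a hedged illustration (synthetic arrays, not the South African data):

    ```python
    # Sketch: bias and mean absolute error of a cropland-fraction map against
    # a high-accuracy reference, both expressed as percentage points.
    import numpy as np

    rng = np.random.default_rng(1)
    reference = rng.uniform(0, 1, size=(100, 100))   # "true" cropland fraction per cell
    mapped = np.clip(reference * 0.6 + rng.normal(0, 0.1, reference.shape), 0, 1)

    bias = 100 * np.mean(mapped - reference)         # negative => systematic underestimate
    mae = 100 * np.mean(np.abs(mapped - reference))  # average magnitude of error
    print(f"bias = {bias:.1f}%, MAE = {mae:.1f}%")
    ```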

  3. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    PubMed

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
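
    The two-stage source-oriented approach (principal component analysis to derive source factors, then time-series regression of daily mortality on the factor scores) can be sketched as follows. All variables are simulated, the factor labels are hypothetical, and the smooth terms for time trends and weather used in the actual regression models are omitted; this is not the authors' code.

    ```python
    # Sketch: PCA source factors feeding a Poisson time-series regression.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_days = 500
    species = rng.lognormal(size=(n_days, 6))       # daily speciated PM2.5 (synthetic)
    scores = PCA(n_components=3).fit_transform(np.log(species))  # "source" factors

    deaths = rng.poisson(lam=np.exp(2.3 + 0.03 * scores[:, 0]))  # synthetic outcome
    X = sm.add_constant(pd.DataFrame(scores, columns=["traffic", "crustal", "oil"]))
    fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    print(fit.params.round(3))                      # log-rate change per factor unit
    ```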

  4. Metaprop: a Stata command to perform meta-analysis of binomial data.

    PubMed

    Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc

    2014-01-01

    Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan, which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins, where the normal approximation procedures often break down, by using the binomial distribution to model the within-study variability or by allowing the Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study-specific and pooled confidence intervals were always within admissible values, contrary to the original publication, where metan was used.
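
    For readers without Stata, the variance-stabilising step at the heart of metaprop can be approximated as follows. This Python sketch (not metaprop itself) pools hypothetical proportions with the Freeman-Tukey double arcsine transform and inverse-variance weights, back-transforming with Miller's formula via the harmonic mean sample size; metaprop's exact binomial CIs and random-effects options are not reproduced here.

    ```python
    # Sketch: fixed-effect pooling of proportions on the Freeman-Tukey
    # double arcsine scale.
    import numpy as np

    events = np.array([12, 30, 7, 45, 18])   # hypothetical event counts
    n = np.array([40, 80, 25, 100, 60])      # hypothetical study sizes

    # double arcsine transform and its approximate variance 1/(n + 0.5)
    t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
    w = n + 0.5                              # = 1 / var(t)
    t_pooled = np.sum(w * t) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))

    def back_transform(t, n_harm):
        """Miller's back-transformation using the harmonic mean sample size."""
        s = np.sin(t)
        return 0.5 * (1 - np.sign(np.cos(t)) *
                      np.sqrt(1 - (s + (s - 1 / s) / n_harm) ** 2))

    n_harm = len(n) / np.sum(1.0 / n)
    lo, hi = t_pooled - 1.96 * se_pooled, t_pooled + 1.96 * se_pooled
    print("pooled proportion:", round(back_transform(t_pooled, n_harm), 3))
    print("95% CI:", round(back_transform(lo, n_harm), 3),
          round(back_transform(hi, n_harm), 3))
    ```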

  5. A risk-based statistical investigation of the quantification of polymorphic purity of a pharmaceutical candidate by solid-state 19F NMR.

    PubMed

    Barry, Samantha J; Pham, Tran N; Borman, Phil J; Edwards, Andrew J; Watson, Simon A

    2012-01-27

    The DMAIC (Define, Measure, Analyse, Improve and Control) framework and associated statistical tools have been applied to both identify and reduce variability observed in a quantitative (19)F solid-state NMR (SSNMR) analytical method. The method had been developed to quantify levels of an additional polymorph (Form 3) in batches of an active pharmaceutical ingredient (API), where Form 1 is the predominant polymorph. In order to validate analyses of the polymorphic form, a single batch of API was used as a standard each time the method was used. The level of Form 3 in this standard was observed to gradually increase over time, the effect not being immediately apparent due to method variability. In order to determine the cause of this unexpected increase and to reduce method variability, a risk-based statistical investigation was performed to identify potential factors which could be responsible for these effects. Factors identified by the risk assessment were investigated using a series of designed experiments to gain a greater understanding of the method. The increase of the level of Form 3 in the standard was primarily found to correlate with the number of repeat analyses, an effect not previously reported in SSNMR literature. Differences in data processing (phasing and linewidth) were found to be responsible for the variability in the method. After implementing corrective actions the variability was reduced such that the level of Form 3 was within an acceptable range of ±1% ww(-1) in fresh samples of API. Copyright © 2011. Published by Elsevier B.V.

  6. Comparative statistical component analysis of transgenic, cyanophycin-producing potatoes in greenhouse and field trials.

    PubMed

    Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge

    2017-08-01

    Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The large volume of data was handled using modern statistical approaches to model the relationships between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.

  7. Genetic structure and demographic history of the endangered and endemic schizothoracine fish Gymnodiptychus pachycheilus in Qinghai-Tibetan Plateau.

    PubMed

    Su, Junhu; Ji, Weihong; Wei, Yanming; Zhang, Yanping; Gleeson, Dianne M; Lou, Zhongyu; Ren, Jing

    2014-08-01

    The endangered schizothoracine fish Gymnodiptychus pachycheilus is endemic to the Qinghai-Tibetan Plateau (QTP), but very little genetic information is available for this species. Here, we assessed the genetic divergence of G. pachycheilus populations to evaluate how their distributions have been shaped by contemporary and historical processes. Population structure and demographic history were assessed by analyzing 1811 base pairs of mitochondrial DNA from 61 individuals across a large proportion of its geographic range. Our results revealed low nucleotide diversity, suggesting severe historical bottleneck events. Analyses of molecular variance and the conventional population statistic FST (0.0435, P = 0.0215) confirmed weak genetic structure. The monophyly of G. pachycheilus was statistically well-supported, while two divergent evolutionary clusters were identified by phylogenetic analyses, suggesting a microgeographic population structure. A consistent scenario of recent population expansion of the two clusters was identified based on several complementary analyses of demographic history (0.096 Ma and 0.15 Ma). This genetic divergence and evolutionary process are likely to have resulted from a series of drainage rearrangements triggered by the historical tectonic events of the region. The results obtained here provide the first insights into the evolutionary history and genetic status of this little-known fish.

  8. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015.

    PubMed

    Petropoulou, Maria; Nikolakopoulou, Adriani; Veroniki, Areti-Angeliki; Rios, Patricia; Vafaei, Afshin; Zarin, Wasifa; Giannatsi, Myrsini; Sullivan, Shannon; Tricco, Andrea C; Chaimani, Anna; Egger, Matthias; Salanti, Georgia

    2017-02-01

    To assess the characteristics and core statistical methodology specific to network meta-analyses (NMAs) in clinical research articles. We searched MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews from inception until April 14, 2015, for NMAs of randomized controlled trials including at least four different interventions. Two reviewers independently screened potential studies, whereas data abstraction was performed by a single reviewer and verified by a second. A total of 456 NMAs, which included a median (interquartile range) of 21 (13-40) studies and 7 (5-9) treatment nodes, were assessed. A total of 125 NMAs (27%) were star networks; this proportion declined from 100% in 2005 to 19% in 2015 (P = 0.01 by test of trend). An increasing number of NMAs discussed transitivity or inconsistency (0% in 2005, 86% in 2015, P < 0.01) and 150 (45%) used appropriate methods to test for inconsistency (14% in 2006, 74% in 2015, P < 0.01). Heterogeneity was explored in 256 NMAs (56%), with no change over time (P = 0.10). All pairwise effects were reported in 234 NMAs (51%), with some increase over time (P = 0.02). The hierarchy of treatments was presented in 195 NMAs (43%), the probability of being best was most commonly reported (137 NMAs, 70%), but use of surface under the cumulative ranking curves increased steeply (0% in 2005, 33% in 2015, P < 0.01). Many NMAs published in the medical literature have significant limitations in both the conduct and reporting of the statistical analysis and numerical results. The situation has, however, improved in recent years, in particular with respect to the evaluation of the underlying assumptions, but considerable room for further improvements remains. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Assessment of synthetic image fidelity

    NASA Astrophysics Data System (ADS)

    Mitchell, Kevin D.; Moorhead, Ian R.; Gilmore, Marilyn A.; Watson, Graham H.; Thomson, Mitch; Yates, T.; Troscianko, Tomasz; Tolhurst, David J.

    2000-07-01

    Computer generated imagery is increasingly used for a wide variety of purposes ranging from computer games to flight simulators to camouflage and sensor assessment. The fidelity required for this imagery depends on the anticipated use - for example, when used for camouflage design it must be physically correct both spectrally and spatially. The rendering techniques used will also depend upon the waveband being simulated, the spatial resolution of the sensor and the required frame rate. Rendering of natural outdoor scenes is particularly demanding because of the statistical variation in materials and illumination, atmospheric effects and the complex geometric structures of objects such as trees. The accuracy of simulated imagery has tended to be assessed subjectively in the past. First- and second-order statistics do not capture many of the essential characteristics of natural scenes. Direct pixel comparison would impose an unachievable demand on the synthetic imagery. For many applications, such as camouflage design, it is important that any metrics used work in both visible and infrared wavebands. We are investigating a variety of different methods of comparing real and synthetic imagery and comparing synthetic imagery rendered to different levels of fidelity. These techniques include neural networks, independent component analysis (ICA), higher-order statistics and models of human contrast perception. This paper presents an overview of the analyses we have carried out and some initial results, along with some preliminary conclusions regarding the fidelity of synthetic imagery.

  10. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    PubMed

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  11. An Evidence-Based Practice Protocol: Back to Basics Bundle of Nursing Care

    DTIC Science & Technology

    2015-05-31

    statistics for the Staff Knowledge Score. The BTBI was offered on two separate occasions; thus, there were two intervention groups for analyses. ... Because there were only complete pre- and posttest data on 22 Train-the-Trainer participants, the Wilcoxon Matched-Pairs Signed Rank Test was used to

  12. Reassessment of Occupational Health Among U.S. Air Force Remotely Piloted Aircraft (Drone) Operators

    DTIC Science & Technology

    2017-04-05

    As a result, the U.S. Air Force (USAF) School of Aerospace Medicine was requested to conduct a field survey to assess for general areas of health ... services; and reasons for increased prescription and over-the-counter medication usage). The purpose of this study was to reevaluate for changes in ... major commands within the continental United States completed the web-based survey, resulting in an estimated 40% response rate. Statistical analyses

  13. Defining the Transfer Functions of the PCAD Model in North Atlantic Right Whales (Eubalaena glacialis) -- Retrospective Analyses of Existing Data

    DTIC Science & Technology

    2011-09-30

    by Rosalind M. Rolland, Susan E. Parks, Kathleen E. Hunt, Manuel Castellote, Peter J. Corkeron, Douglas P. Nowacek, Samuel K. Wasser and Scott D. ... Partitioning. Journal of Computational and Graphical Statistics. 15(3): 651-674. Hunt KE, Rolland RM, Kraus SD, Wasser SK. 2006. Analysis of fecal ... KE, Kraus SD, Wasser SK. 2005. Assessing reproductive status of right whales (Eubalaena glacialis) using fecal hormone metabolites. General and

  14. Landscape structure affects distribution of potential disease vectors (Diptera: Culicidae).

    PubMed

    Zittra, Carina; Vitecek, Simon; Obwaller, Adelheid G; Rossiter, Heidemarie; Eigner, Barbara; Zechmeister, Thomas; Waringer, Johann; Fuehrer, Hans-Peter

    2017-04-26

    Vector-pathogen dynamics are controlled by fluctuations of potential vector communities, such as the Culicidae. Assessment of mosquito community diversity and, in particular, identification of environmental parameters shaping these communities is therefore of key importance for the design of adequate surveillance approaches. In this study, we assess effects of climatic parameters and habitat structure on mosquito communities in eastern Austria to deliver these highly relevant baseline data. Female mosquitoes were sampled twice a month from April to October 2014 and 2015 at 35 permanent and 23 non-permanent trapping sites using carbon dioxide-baited traps. Differences in spatial and seasonal abundance patterns of Culicidae taxa were identified using likelihood ratio tests; possible effects of environmental parameters on seasonal and spatial mosquito distribution were analysed using multivariate statistical methods. We assessed community responses to environmental parameters based on 14-day-average values that affect ontogenesis. Altogether 29,734 female mosquitoes were collected, and 21 of 42 native as well as two of four non-native mosquito species were reconfirmed in eastern Austria. Statistical analyses revealed significant differences in mosquito abundance between sampling years and provinces. Incidence and abundance patterns were found to be linked to 14-day mean sunshine duration, humidity, water-level maxima and the amount of precipitation. However, land cover classes were found to be the most important factor, effectively assigning both indigenous and non-native mosquito species to various communities, which responded differentially to environmental variables. These findings thus underline the significance of non-climatic variables for future mosquito prediction models and the necessity to consider these in mosquito surveillance programmes.

  15. The predictive value of mean serum uric acid levels for developing prediabetes.

    PubMed

    Zhang, Qing; Bao, Xue; Meng, Ge; Liu, Li; Wu, Hongmei; Du, Huanmin; Shi, Hongbin; Xia, Yang; Guo, Xiaoyan; Liu, Xing; Li, Chunlei; Su, Qian; Gu, Yeqing; Fang, Liyun; Yu, Fei; Yang, Huijun; Yu, Bin; Sun, Shaomei; Wang, Xing; Zhou, Ming; Jia, Qiyu; Zhao, Honglin; Huang, Guowei; Song, Kun; Niu, Kaijun

    2016-08-01

    We aimed to assess the predictive value of mean serum uric acid (SUA) levels for incident prediabetes. Normoglycemic adults (n=39,353) were followed for a median of 3.0 years. Prediabetes is defined as impaired fasting glucose (IFG), impaired glucose tolerance (IGT), or impaired HbA1c (IA1c), based on the American Diabetes Association criteria. SUA levels were measured annually. Four diagnostic strategies were used to detect prediabetes in four separate analyses (Analysis 1: IFG. Analysis 2: IFG+IGT. Analysis 3: IFG+IA1c. Analysis 4: IFG+IGT+IA1c). Cox proportional hazards regression models were used to assess the relationship between SUA quintiles and prediabetes. The C-statistic was additionally used in the final analysis to assess the accuracy of predictions based upon baseline SUA and mean SUA, respectively. After adjustment for potential confounders, the hazard ratios (95% confidence interval) of prediabetes for the highest versus lowest quintile of mean SUA were 1.22 (1.10, 1.36) in analysis 1; 1.59 (1.23, 2.05) in analysis 2; 1.62 (1.34, 1.95) in analysis 3 and 1.67 (1.31, 2.13) in analysis 4. In contrast, for baseline SUA, significance was only reached in analyses 3 and 4. Moreover, compared with baseline SUA, the mean SUA value was associated with a significant increase in the C-statistic (P<0.001). The mean SUA value was strongly and positively related to prediabetes risk, and showed better predictive ability for prediabetes than baseline SUA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
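
    The modelling strategy (Cox proportional hazards plus a C-statistic comparison of baseline versus mean SUA) can be sketched as follows. The use of the lifelines package is an assumption (the abstract does not name the software), the data are simulated, and the study's confounder adjustment is reduced to a single covariate.

    ```python
    # Sketch: compare the concordance (C-statistic) of Cox models using a
    # noisy single baseline measurement vs. an averaged exposure.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 2000
    mean_sua = rng.normal(320, 60, n)              # averaged SUA (hypothetical units)
    baseline_sua = mean_sua + rng.normal(0, 40, n) # noisier one-off measurement
    age = rng.normal(45, 10, n)
    risk = 0.004 * (mean_sua - 320) + 0.03 * (age - 45)
    time = rng.exponential(scale=5 * np.exp(-risk))        # years to prediabetes
    event = (time < 3.0).astype(int)                       # administrative censoring
    time = np.minimum(time, 3.0)                           # at 3 years of follow-up

    for name, values in {"baseline_sua": baseline_sua, "mean_sua": mean_sua}.items():
        df = pd.DataFrame({"T": time, "E": event, name: values, "age": age})
        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        print(name, "C-statistic:", round(cph.concordance_index_, 3))
    ```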

  16. Soil heavy metal pollution and risk assessment associated with the Zn-Pb mining region in Yunnan, Southwest China.

    PubMed

    Cheng, Xianfeng; Danek, Tomas; Drozdova, Jarmila; Huang, Qianrui; Qi, Wufu; Zou, Liling; Yang, Shuran; Zhao, Xinliang; Xiang, Yungang

    2018-03-07

    The environmental assessment and identification of sources of heavy metals in Zn-Pb ore deposits are important steps for the effective prevention of subsequent contamination and for the development of corrective measures. The concentrations of eight heavy metals (As, Cd, Cr, Cu, Hg, Ni, Pb, and Zn) in soils from 40 sampling points around the Jinding Zn-Pb mine in Yunnan, China, were analyzed. An environmental quality assessment of the obtained data was performed using five different contamination and pollution indexes. Statistical analyses were performed to identify the relations among the heavy metals and the pH in soils and possible sources of pollution. The concentrations of As, Cd, Pb, and Zn were extremely high, and 23, 95, 25, and 35% of the samples, respectively, exceeded the heavy metal limits set in the Chinese Environmental Quality Standard for Soils (GB15618-1995, grade III). According to the contamination and pollution indexes, environmental risks in the area are high or extremely high. The highest risk is represented by Cd contamination, the median concentration of which exceeds the GB15618-1995 limit. Based on the combination of statistical analyses and geostatistical mapping, we identified three groups of heavy metals that originate from different sources. The main sources of As, Cd, Pb, Zn, and Cu are mining activities, airborne particulates from smelters, and the weathering of tailings. The main sources of Hg are dust fallout and gaseous emissions from smelters and tailing dams. Cr and Ni originate from lithogenic sources.
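
    The abstract does not name the five indexes used. As a hedged illustration, one contamination measure in common use for such assessments is the geoaccumulation index, Igeo = log2(Cn / (1.5 Bn)); the concentrations and background values below are invented, not the study's data.

    ```python
    # Sketch: geoaccumulation index (Igeo) for a few heavy metals.
    import numpy as np

    metals = ["As", "Cd", "Pb", "Zn"]
    conc = np.array([45.0, 6.2, 380.0, 950.0])       # mg/kg, hypothetical samples
    background = np.array([11.2, 0.22, 40.6, 89.7])  # assumed regional background

    igeo = np.log2(conc / (1.5 * background))        # >5 is often read as extreme
    for m, i in zip(metals, igeo):                   # contamination, <0 as unpolluted
        print(f"{m}: Igeo = {i:.2f}")
    ```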

  17. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  18. Meta-analyses evaluating surrogate endpoints for overall survival in cancer randomized trials: A critical review.

    PubMed

    Savina, Marion; Gourgou, Sophie; Italiano, Antoine; Dinart, Derek; Rondeau, Virginie; Penel, Nicolas; Mathoulin-Pelissier, Simone; Bellera, Carine

    2018-03-01

    In cancer randomized controlled trials (RCT), alternative endpoints are increasingly being used in place of overall survival (OS) to reduce sample size, duration and cost of trials. It is necessary to ensure that these endpoints are valid surrogates for OS. Our aim was to identify meta-analyses that evaluated surrogate endpoints for OS and assess the strength of evidence for each meta-analysis (MA). We performed a systematic review to identify MA of cancer RCTs assessing surrogate endpoints for OS. We evaluated the strength of the association between the endpoints based on (i) the German Institute of Quality and Efficiency in Health Care guidelines and (ii) the Biomarker-Surrogate Evaluation Schema. Fifty-three publications reported on 164 MA, with heterogeneous statistical methods. Disease-free survival (DFS) and progression-free survival (PFS) showed good surrogacy properties for OS in colorectal, lung and head and neck cancers. DFS was highly correlated to OS in gastric cancer. The statistical methodology used to evaluate surrogate endpoints requires consistency in order to facilitate the accurate interpretation of the results. Despite the limited number of clinical settings with validated surrogate endpoints for OS, there is evidence of good surrogacy for DFS and PFS in tumor types that account for a large proportion of cancer cases. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Enhancing nurses' ethical practice: development of a clinical ethics program.

    PubMed

    McDaniel, C

    1998-06-01

    There is increasing attention paid to ethics under managed care; however, few clinical-based ethics programs are reported. This paper reports the assessment and outcomes of one such program. A quasi-experimental research design with t-tests is used to assess the outcome differences between participants and control groups. There are twenty nurses in each; they are assessed for comparability. Differences are predicted on two outcomes using reliable and valid measures: nurses' time with their patients in ethics discussions, and nurses' opinions regarding their clinical ethics environments. Results reveal a statistically significant difference (p <.05) between the two groups, with modest positive change in the participants. Additional exploratory analyses are reported on variables influential in health care services.

  20. [The National Registry of Occupational Exposures to Carcinogens (SIREP): information system and results].

    PubMed

    Scarselli, Alberto

    2011-01-01

    The recording of occupational exposure to carcinogens is a fundamental step in order to assess exposure risk factors in workplaces. The aim of this paper is to describe the characteristics of the Italian register of occupational exposures to carcinogen agents (SIREP). The core data collected in the system are: firm characteristics, worker demographics, and exposure information. Statistical descriptive analyses were performed by economic activity sector, carcinogen agent and geographic location. Currently, the information recorded covers 12,300 firms, 130,000 workers, and 250,000 exposures. The SIREP database has been set up in order to assess, control and reduce carcinogen risk in the workplace.

  1. It's Not a Big Sky After All: Justification for a Close Approach Prediction and Risk Assessment Process

    NASA Technical Reports Server (NTRS)

    Newman, Lauri Kraft; Frigm, Ryan; McKinley, David

    2009-01-01

    There is often skepticism about the need for Conjunction Assessment from mission operators that invest in the "big sky theory", which states that the likelihood of a collision is so small that it can be neglected. On 10 February 2009, the collision between Iridium 33 and Cosmos 2251 provided an indication that this theory is invalid and that a CA process should be considered for all missions. This paper presents statistics of the effect of the Iridium/Cosmos collision on NASA's Earth Science Constellation as well as results of analyses which characterize the debris environment for NASA's robotic missions.

  2. A model to predict accommodations needed by disabled persons.

    PubMed

    Babski-Reeves, Kari; Williams, Sabrina; Waters, Tzer Nan; Crumpton-Young, Lesia L; McCauley-Bell, Pamela

    2005-09-01

    In this paper, several approaches to assist employers in the accommodation process for disabled employees are discussed and a mathematical model is proposed to assist employers in predicting the accommodation level needed by an individual with a mobility-related disability. This study investigates the validity and reliability of this model in assessing the accommodation level needed by individuals utilizing data collected from twelve individuals with mobility-related disabilities. Based on the results of the statistical analyses, this proposed model produces a feasible preliminary measure for assessing the accommodation level needed for persons with mobility-related disabilities. Suggestions for practical application of this model in an industrial setting are addressed.

  3. Presence, concentrations and risk assessment of selected antibiotic residues in sediments and near-bottom waters collected from the Polish coastal zone in the southern Baltic Sea - Summary of 3years of studies.

    PubMed

    Siedlewicz, Grzegorz; Białk-Bielińska, Anna; Borecka, Marta; Winogradow, Aleksandra; Stepnowski, Piotr; Pazdro, Ksenia

    2018-04-01

    Concentrations of selected antibiotic compounds from different groups were measured in sediment samples (14 analytes) and in near-bottom water samples (12 analytes) collected in 2011-2013 from the southern Baltic Sea (Polish coastal zone). Antibiotics were determined at concentration levels of a few to hundreds of ng g(-1) d.w. in sediments and ng L(-1) in near-bottom waters. The most frequently detected compounds were sulfamethoxazole, trimethoprim and oxytetracycline in sediments, and sulfamethoxazole and trimethoprim in near-bottom waters. The occurrence of the identified antibiotics was characterized by spatial and temporal variability. A statistically significant correlation was observed between sediment organic matter content and the concentrations of sulfachloropyridazine and oxytetracycline. Risk assessment analyses revealed a potential high risk of sulfamethoxazole contamination in near-bottom waters and of contamination by sulfamethoxazole, trimethoprim and tetracyclines in sediments. Both chemical and risk assessment analyses show that the coastal area of the southern Baltic Sea is highly exposed to antibiotic residues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Less label, more free: approaches in label-free quantitative mass spectrometry.

    PubMed

    Neilson, Karlie A; Ali, Naveid A; Muralidharan, Sridevi; Mirzaei, Mehdi; Mariani, Michael; Assadourian, Gariné; Lee, Albert; van Sluyter, Steven C; Haynes, Paul A

    2011-02-01

    In this review we examine techniques, software, and statistical analyses used in label-free quantitative proteomics studies for area under the curve and spectral counting approaches. Recent advances in the field are discussed in an order that reflects a logical workflow design. Examples of studies that follow this design are presented to highlight the requirement for statistical assessment and further experiments to validate results from label-free quantitation. Limitations of label-free approaches are considered, label-free approaches are compared with labelling techniques, and forward-looking applications for label-free quantitative data are presented. We conclude that label-free quantitative proteomics is a reliable, versatile, and cost-effective alternative to labelled quantitation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
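
    As a brief illustration of the spectral-counting family mentioned above, one widely used statistic is the normalised spectral abundance factor (NSAF); whether the review emphasises NSAF specifically is not stated here, and the counts and lengths below are hypothetical.

    ```python
    # Sketch: normalised spectral abundance factor (NSAF) for one LC-MS/MS run.
    import numpy as np

    spectral_counts = np.array([120, 45, 300, 8])   # spectra matched per protein
    lengths = np.array([450, 210, 980, 130])        # protein lengths (residues)

    saf = spectral_counts / lengths                 # length-corrected counts
    nsaf = saf / saf.sum()                          # fractions sum to 1 within the run
    print(nsaf.round(4))
    ```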

  5. Healthy Worker Effect Phenomenon: Revisited with Emphasis on Statistical Methods – A Review

    PubMed Central

    Chowdhury, Ritam; Shah, Divyang; Payal, Abhishek R.

    2017-01-01

    Known since 1885 but studied systematically only in the past four decades, the healthy worker effect (HWE) is a special form of selection bias common to occupational cohort studies. The phenomenon has been under debate for many years with respect to its impact, conceptual approach (confounding, selection bias, or both), and ways to resolve or account for its effect. The effect is not uniform across age groups, gender, race, and types of occupations and nor is it constant over time. Hence, assessing HWE and accounting for it in statistical analyses is complicated and requires sophisticated methods. Here, we review the HWE, factors affecting it, and methods developed so far to deal with it. PMID:29391741

  6. Signal Processing Methods for Liquid Rocket Engine Combustion Stability Assessments

    NASA Technical Reports Server (NTRS)

    Kenny, R. Jeremy; Lee, Erik; Hulka, James R.; Casiano, Matthew

    2011-01-01

    The J2X Gas Generator engine design specifications include dynamic, spontaneous, and broadband combustion stability requirements. These requirements are verified empirically based on high-frequency chamber pressure measurements and analyses. Dynamic stability is determined from the dynamic pressure response to an artificial perturbation of the combustion chamber pressure (bomb testing), and spontaneous and broadband stability are determined from the dynamic pressure responses during steady operation starting at specified power levels. J2X Workhorse Gas Generator testing included bomb tests with multiple hardware configurations and operating conditions, including a configuration used explicitly for engine verification test series. This work covers signal processing techniques developed at Marshall Space Flight Center (MSFC) to help assess engine design stability requirements. Dynamic stability assessments were performed following both the CPIA 655 guidelines and an MSFC in-house, statistics-based approach. The statistical approach was developed to better verify when the dynamic pressure amplitudes corresponding to a particular frequency returned to pre-bomb characteristics. This was accomplished by first determining the statistical characteristics of the pre-bomb dynamic levels. The pre-bomb statistical characterization provided 95% coverage bounds; these bounds were used as a quantitative measure to determine when the post-bomb signal returned to pre-bomb conditions. The time for post-bomb levels to acceptably return to pre-bomb levels was compared to the dominant frequency-dependent time recommended by CPIA 655. Results for multiple test configurations, including stable and unstable configurations, were reviewed. Spontaneous stability was assessed using two processes: 1) characterization of the ratio of the peak response amplitudes to the excited chamber acoustic mode amplitudes and 2) characterization of the variability of the peak response's frequency over the test duration. This characterization process assists in evaluating the discreteness of a signal as well as the stability of the chamber response. Broadband stability was assessed using a running root-mean-square evaluation. These techniques were also employed, in a comparative analysis, on available Fastrac data, and these results are presented here.
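
    The statistical recovery criterion can be illustrated with a toy signal. This sketch (not the MSFC implementation) characterises pre-bomb windowed RMS amplitudes, forms an approximately 95% coverage bound, and reports when the post-bomb envelope first returns inside it; the sample rate, window length, and decay model are assumed values.

    ```python
    # Sketch: detect return of post-bomb RMS levels to a pre-bomb 95% bound.
    import numpy as np

    rng = np.random.default_rng(4)
    fs, t_bomb = 10_000, 1.0                      # sample rate (Hz), bomb time (s)
    t = np.arange(0, 3, 1 / fs)
    amp = 1.0 + 4.0 * np.exp(-(t - t_bomb) * 8) * (t >= t_bomb)  # decaying response
    x = amp * rng.standard_normal(t.size)         # chamber-pressure-like noise

    win = fs // 50                                # 20 ms RMS windows
    rms = np.sqrt(np.mean(x[: t.size // win * win].reshape(-1, win) ** 2, axis=1))
    t_win = t[win // 2 :: win][: rms.size]        # window-centre times

    pre = rms[t_win < t_bomb]
    upper = pre.mean() + 1.96 * pre.std()         # ~95% coverage bound

    recovered = (t_win > t_bomb) & (rms <= upper)
    print("recovery time (s after bomb):",
          round(t_win[recovered][0] - t_bomb, 3) if recovered.any() else "none")
    ```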

  7. GIS and statistical analysis for landslide susceptibility mapping in the Daunia area, Italy

    NASA Astrophysics Data System (ADS)

    Mancini, F.; Ceppi, C.; Ritrovato, G.

    2010-09-01

    This study focuses on landslide susceptibility mapping in the Daunia area (Apulian Apennines, Italy) and achieves this by using a multivariate statistical method and data processing in a Geographical Information System (GIS). The Logistic Regression (hereafter LR) method was chosen to produce a susceptibility map over an area of 130 000 ha where small settlements are historically threatened by landslide phenomena. By means of LR analysis, the tendency to landslide occurrences was, therefore, assessed by relating a landslide inventory (dependent variable) to a series of causal factors (independent variables) which were managed in the GIS, while the statistical analyses were performed by means of the SPSS (Statistical Package for the Social Sciences) software. The LR analysis produced a reliable susceptibility map of the investigated area and the probability level of landslide occurrence was ranked in four classes. The overall performance achieved by the LR analysis was assessed by local comparison between the expected susceptibility and an independent dataset extrapolated from the landslide inventory. Of the samples classified as susceptible to landslide occurrences, 85% correspond to areas where landslide phenomena have actually occurred. In addition, the consideration of the regression coefficients provided by the analysis demonstrated that a major role is played by the "land cover" and "lithology" causal factors in determining the occurrence and distribution of landslide phenomena in the Apulian Apennines.
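
    The LR step can be sketched compactly. The study used SPSS; the following scikit-learn version with simulated causal factors and an invented landslide inventory shows the same idea, including the binning of predicted probabilities into four susceptibility classes.

    ```python
    # Sketch: logistic-regression landslide susceptibility with four classes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 5000
    slope = rng.uniform(0, 45, n)                 # degrees (synthetic)
    lithology = rng.integers(0, 3, n)             # coded classes (synthetic)
    land_cover = rng.integers(0, 4, n)
    logit = -4 + 0.08 * slope + 0.9 * (lithology == 2) + 0.7 * (land_cover == 1)
    landslide = rng.random(n) < 1 / (1 + np.exp(-logit))   # inventory (dependent var.)

    X = np.column_stack([slope, lithology == 2, land_cover == 1]).astype(float)
    model = LogisticRegression().fit(X, landslide)
    p = model.predict_proba(X)[:, 1]                        # susceptibility score
    classes = np.digitize(p, np.quantile(p, [0.25, 0.5, 0.75]))  # 4 ranked classes
    print("coefficients:", model.coef_.round(2), "class counts:", np.bincount(classes))
    ```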

  8. Reporting and methodological quality of meta-analyses in urological literature

    PubMed Central

    Xu, Jing

    2017-01-01

    Purpose To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. Materials and Methods We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high qualities. Results A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5, protocol and registration, items 15 and 22, risk of bias across studies, items 16 and 23, additional analysis had less than 50% adherence. AMSTAR item 1, “a priori” design, item 5, list of studies and item 10, publication bias had less than 50% adherence. Logistic regression analyses showed that funding support and “a priori” design were associated with superior reporting quality, following PRISMA guideline and “a priori” design were associated with superior methodological quality. Conclusions Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to PRISMA guideline and having “a priori” protocol. PMID:28439452

  9. Assessing knowledge on fibromyalgia among Internet users.

    PubMed

    Moretti, Felipe Azevedo; Heymann, Roberto Ezequiel; Marvulle, Valdecir; Pollak, Daniel Feldman; Riera, Rachel

    2011-01-01

    To assess knowledge on fibromyalgia in a sample of patients, their families, and professionals interested in the theme from some Brazilian states. Analysis of the results of an electronic fibromyalgia knowledge questionnaire completed by 362 adults who had access to the support group for fibromyalgia site (www.unifesp.br/grupos/fibromialgia). The answers were grouped according to age, sex, years of schooling, and type of interest in the condition. 92% of the responders were women and 62% had a higher educational level. The worst results were observed in the "joint protection and energy conservation" domain, followed by the "medication in fibromyalgia" domain. The best results were recorded in the "exercises in fibromyalgia" domain. The answers differed significantly between sexes, and women achieved a higher percentage of correct answers. The female sex accounted for a statistically superior result in five statistical analyses (four questions and one domain). The study suggests the need for strategic planning of an educational approach to fibromyalgia in Brazil.

  10. Environmental Justice Assessment for Transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, G.S.; Neuhauser, K.S.

    1999-04-05

    Application of Executive Order 12898 to risk assessment of highway or rail transport of hazardous materials has proven difficult; the location and conditions affecting the propagation of a plume of hazardous material released in a potential accident are unknown, in general. Therefore, analyses have only been possible in a geographically broad or approximate manner. The advent of geographic information systems and development of software enhancements at Sandia National Laboratories have made kilometer-by-kilometer analysis of populations tallied by U.S. Census Blocks along entire routes practicable. Tabulations of total, or racially/ethnically distinct, populations close to a route, its alternatives, or the broader surrounding area, can then be compared and differences evaluated statistically. This paper presents methods of comparing populations and their racial/ethnic compositions using simple tabulations, histograms and Chi Squared tests for statistical significance of differences found. Two examples of these methods are presented: comparison of two routes and comparison of a route with its surroundings.
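
    The route-comparison test reduces to a contingency-table analysis. A minimal sketch with invented census counts (not data from the paper):

    ```python
    # Sketch: chi-squared test of population composition along two routes.
    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: route A, route B; columns: population groups from census blocks
    table = np.array([[52_000, 18_000, 9_000],
                      [47_500, 26_000, 7_200]])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
    ```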

  11. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  12. Relationships Between the Way Students Are Assessed in Science Classrooms and Science Achievement Across Canada

    NASA Astrophysics Data System (ADS)

    Chu, Man-Wai; Fung, Karen

    2018-04-01

    Canadian students experience many different assessments throughout their schooling (O'Connor 2011). There are many benefits to using a variety of assessment types, item formats, and science-based performance tasks in the classroom to measure the many dimensions of science education. Although using a variety of assessments is beneficial, it is unclear exactly what types, format, and tasks are used in Canadian science classrooms. Additionally, since assessments are often administered to help improve student learning, this study identified assessments that may improve student learning as measured using achievement scores on a standardized test. Secondary analyses of the students' and teachers' responses to the questionnaire items asked in the Pan-Canadian Assessment Program were performed. The results of the hierarchical linear modeling analyses indicated that both students and teachers identified teacher-developed classroom tests or quizzes as the most common types of assessments used. Although this ranking was similar across the country, statistically significant differences in terms of the assessments that are used in science classrooms among the provinces were also identified. The investigation of which assessment best predicted student achievement scores indicated that minds-on science performance-based tasks significantly explained 4.21% of the variance in student scores. However, mixed results were observed between the student and teacher responses towards tasks that required students to choose their own investigation and design their own experience or investigation. Additionally, teachers that indicated that they conducted more demonstrations of an experiment or investigation resulted in students with lower scores.

  13. The Structure of Diagnostic and Statistical Manual of Mental Disorders (4th Edition, Text Revision) Personality Disorder Symptoms in a Large National Sample

    PubMed Central

    Trull, Timothy J.; Vergés, Alvaro; Wood, Phillip K.; Jahng, Seungmin; Sher, Kenneth J.

    2013-01-01

    We examined the latent structure underlying the criteria for DSM–IV–TR (American Psychiatric Association, 2000, Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.) personality disorders in a large nationally representative sample of U.S. adults. Personality disorder symptom data were collected using a structured diagnostic interview from approximately 35,000 adults assessed over two waves of data collection in the National Epidemiologic Survey on Alcohol and Related Conditions. Our analyses suggested that a seven-factor solution provided the best fit for the data, and these factors were marked primarily by one or at most two personality disorder criteria sets. A series of regression analyses that used external validators tapping Axis I psychopathology, treatment for mental health problems, functioning scores, interpersonal conflict, and suicidal ideation and behavior provided support for the seven-factor solution. We discuss these findings in the context of previous studies that have examined the structure underlying the personality disorder criteria as well as the current proposals for DSM-5 personality disorders. PMID:22506626

  14. Sieve analysis in HIV-1 vaccine efficacy trials

    PubMed Central

    Edlefsen, Paul T.; Gilbert, Peter B.; Rolland, Morgane

    2013-01-01

    Purpose of review The genetic characterization of HIV-1 breakthrough infections in vaccine and placebo recipients offers new ways to assess vaccine efficacy trials. Statistical and sequence analysis methods provide opportunities to mine the mechanisms behind the effect of an HIV vaccine. Recent findings The release of results from two HIV-1 vaccine efficacy trials, Step/HVTN-502 and RV144, led to numerous studies in the last five years, including efforts to sequence HIV-1 breakthrough infections and compare viral characteristics between the vaccine and placebo groups. Novel genetic and statistical analysis methods uncovered features that distinguished founder viruses isolated from vaccinees from those isolated from placebo recipients, and identified HIV-1 genetic targets of vaccine-induced immune responses. Summary Studies of HIV-1 breakthrough infections in vaccine efficacy trials can provide an independent confirmation to correlates of risk studies, as they take advantage of vaccine/placebo comparisons while correlates of risk analyses are limited to vaccine recipients. Through the identification of viral determinants impacted by vaccine-mediated host immune responses, sieve analyses can shed light on potential mechanisms of vaccine protection. PMID:23719202

  15. Sieve analysis in HIV-1 vaccine efficacy trials.

    PubMed

    Edlefsen, Paul T; Gilbert, Peter B; Rolland, Morgane

    2013-09-01

    The genetic characterization of HIV-1 breakthrough infections in vaccine and placebo recipients offers new ways to assess vaccine efficacy trials. Statistical and sequence analysis methods provide opportunities to mine the mechanisms behind the effect of an HIV vaccine. The release of results from two HIV-1 vaccine efficacy trials, Step/HVTN-502 (HIV Vaccine Trials Network-502) and RV144, led to numerous studies in the last 5 years, including efforts to sequence HIV-1 breakthrough infections and compare viral characteristics between the vaccine and placebo groups. Novel genetic and statistical analysis methods uncovered features that distinguished founder viruses isolated from vaccinees from those isolated from placebo recipients, and identified HIV-1 genetic targets of vaccine-induced immune responses. Studies of HIV-1 breakthrough infections in vaccine efficacy trials can provide an independent confirmation to correlates of risk studies, as they take advantage of vaccine/placebo comparisons, whereas correlates of risk analyses are limited to vaccine recipients. Through the identification of viral determinants impacted by vaccine-mediated host immune responses, sieve analyses can shed light on potential mechanisms of vaccine protection.

  16. [Core factors of schizophrenia structure based on PANSS and SAPS/SANS results. Discerning and head-to-head comparisson of PANSS and SASPS/SANS validity].

    PubMed

    Masiak, Marek; Loza, Bartosz

    2004-01-01

    Many inconsistencies across dimensional studies of schizophrenia(s) have been unveiled. These problems are strongly related to the methodological aspects of collecting data and specific statistical analyses. Psychiatrists have developed numerous psychopathological models derived from analytic studies based on SAPS/SANS (the Scale for the Assessment of Positive Symptoms/the Scale for the Assessment of Negative Symptoms) and PANSS (the Positive and Negative Syndrome Scale). A unique validation of two parallel, independent factor models--ascribed to the same illness but based on different diagnostic scales--was performed to investigate indirect methodological causes of clinical discrepancies. 100 newly admitted patients (mean age 33.5, range 18-45; 64 males, 36 females; hospitalised on average 5.15 times) with paranoid schizophrenia (according to ICD-10) were scored and analysed using PANSS and SAPS/SANS during psychotic exacerbation. All patients were treated with neuroleptics of various kinds, with 410 mg equivalents of chlorpromazine (atypicals:typicals --> 41:59). Factor analyses were applied to basic results (with principal component analysis, normalised varimax rotation). To investigate cross-model validity, canonical analysis was applied. Models of schizophrenia varied from 3 to 5 factors. The PANSS model included positive, negative, disorganisation, cognitive and depressive components, while the SAPS/SANS model was dominated by positive, negative and disorganisation factors. The SAPS/SANS accounted for merely 48% of the PANSS common variances. The SAPS/SANS combined measurement preferentially (67% of canonical variance) targeted the positive-negative dichotomy. By comparison, PANSS shared positive-negative phenomenology in 35% of its own variance. The general concept of five-dimensionality in paranoid schizophrenia looks clinically more heuristic and statistically more stable.

  17. Plaque Echolucency and Stroke Risk in Asymptomatic Carotid Stenosis: A Systematic Review and Meta-Analysis

    PubMed Central

    Gupta, Ajay; Kesavabhotla, Kartik; Baradaran, Hediyeh; Kamel, Hooman; Pandya, Ankur; Giambrone, Ashley E.; Wright, Drew; Pain, Kevin J.; Mtui, Edward E.; Suri, Jasjit S.; Sanelli, Pina C.; Mushlin, Alvin I.

    2014-01-01

    Background and Purpose Ultrasonographic plaque echolucency has been studied as a stroke risk marker in carotid atherosclerotic disease. We performed a systematic review and meta-analysis to summarize the association between ultrasound-determined carotid plaque echolucency and future ipsilateral stroke risk. Methods We searched the medical literature for studies evaluating the association between carotid plaque echolucency and future stroke in asymptomatic patients. We included prospective observational studies with stroke outcome ascertainment after baseline carotid plaque echolucency assessment. We performed a meta-analysis and assessed study heterogeneity and publication bias. We also performed subgroup analyses limited to patients with stenosis ≥50%, studies in which plaque echolucency was determined via subjective visual interpretation, studies with a relatively lower risk of bias, and studies published after the year 2000. Results We analyzed data from 7 studies on 7557 subjects with a mean follow-up of 37.2 months. We found a significant positive relationship between predominantly echolucent (compared to predominantly echogenic) plaques and the risk of future ipsilateral stroke across all stenosis severities (0-99%) (relative risk [RR], 2.31; 95% CI, 1.58-3.39; P<.001) and in subjects with ≥50% stenosis (RR, 2.61; 95% CI, 1.47-4.63; P=.001). A statistically significant increased RR for future stroke was preserved in all additional subgroup analyses. No statistically significant heterogeneity or publication bias was present in any of the meta-analyses. Conclusions The presence of ultrasound-determined carotid plaque echolucency provides predictive information in asymptomatic carotid artery stenosis beyond luminal stenosis. However, the magnitude of the increased risk is not sufficient on its own to identify patients likely to benefit from surgical revascularization. PMID:25406150
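
    As a rough illustration of the pooling such a meta-analysis performs, the sketch below combines invented study-level relative risks under a DerSimonian-Laird random-effects model; it is not the authors' code, and the numbers are made up.

```python
import numpy as np

# Invented study-level relative risks with 95% CIs (lo, hi) for illustration.
rr = np.array([1.8, 2.5, 3.1, 2.0, 2.6])
lo = np.array([1.1, 1.4, 1.7, 0.9, 1.5])
hi = np.array([2.9, 4.5, 5.7, 4.4, 4.5])

y = np.log(rr)                                # log-RR per study
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from the CI
w = 1 / se**2                                 # fixed-effect (inverse-variance) weights

# DerSimonian-Laird between-study variance tau^2
q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                     # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{np.exp(pooled + 1.96 * se_pooled):.2f})")
```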

  18. Rockslide susceptibility and hazard assessment for mitigation works design along vertical rocky cliffs: workflow proposal based on a real case-study conducted in Sacco (Campania), Italy

    NASA Astrophysics Data System (ADS)

    Pignalosa, Antonio; Di Crescenzo, Giuseppe; Marino, Ermanno; Terracciano, Rosario; Santo, Antonio

    2015-04-01

    The work presented here concerns a case study in which a complete multidisciplinary workflow was applied for an extensive assessment of rockslide susceptibility and hazard in a common scenario: vertical, fractured rocky cliffs. The studied area is located in a high-relief zone in Southern Italy (Sacco, Salerno, Campania), characterized by wide vertical rocky cliffs formed by tectonized thick successions of shallow-water limestones. The study comprised the following phases: a) topographic surveying integrating 3d laser scanning, photogrammetry and GNSS; b) geological surveying, characterization of single instabilities and geomechanical surveying, conducted by rock-climbing geologists; c) processing of 3d data and reconstruction of high-resolution geometrical models; d) structural and geomechanical analyses; e) data filing in a GIS-based spatial database; f) geo-statistical and spatial analyses and mapping of the whole set of data; g) 3D rockfall analysis. The main goals of the study were a) to set up an investigation method achieving a complete and thorough characterization of the slope stability conditions and b) to provide a detailed basis for an accurate definition of the reinforcement and mitigation systems. For these purposes, the most up-to-date methods of field surveying, remote sensing, 3d modelling and geospatial data analysis were integrated in a systematic workflow, accounting for the economic sustainability of the whole project. A novel integrated approach was applied, fusing deterministic and statistical surveying methods. This approach made it possible to deal with the wide extension of the studied area (nearly 200,000 m2) without compromising the high accuracy of the results. The deterministic phase, based on field characterization of single instabilities and their further analysis on 3d models, was applied to delineate the peculiarity of each single feature. The statistical approach, based on geostructural field mapping and on punctual geomechanical data from scan-line surveying, allowed partitioning of the rock mass into homogeneous geomechanical sectors and data interpolation through bounded geostatistical analyses on 3d models. All data resulting from both approaches were referenced and filed in a single spatial database and considered in global geo-statistical analyses to derive a fully modelled and comprehensive evaluation of rockslide susceptibility. The described workflow yielded the following innovative results: a) a detailed census of single potential instabilities, through a spatial database recording the geometrical, geological and mechanical features along with the expected failure modes; b) a high-resolution characterization of the whole slope's rockslide susceptibility, based on partitioning of the area according to stability and mechanical conditions that can be directly related to specific hazard mitigation systems; c) the exact extension of the area exposed to rockslide hazard, along with the dynamic parameters of expected phenomena; d) an intervention design for hazard mitigation.

  19. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
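
    One of the statistics discussed above, weighted kappa, can be computed as follows on invented ratings from two readers; the point of the sketch is that, for ordinal data, the weighting scheme must be chosen and reported explicitly, since it changes the statistic's value and meaning.

```python
# Hedged sketch: weighted kappa on invented ordinal grades from two readers.
from sklearn.metrics import cohen_kappa_score

reader1 = [1, 2, 2, 3, 4, 4, 2, 1, 3, 4]   # ordinal grades, reader 1
reader2 = [1, 2, 3, 3, 4, 3, 2, 2, 3, 4]   # ordinal grades, reader 2

# Linear and quadratic weights penalize disagreements differently, so the
# scheme used should always be stated alongside the reported value.
print("linear kappa   :", cohen_kappa_score(reader1, reader2, weights="linear"))
print("quadratic kappa:", cohen_kappa_score(reader1, reader2, weights="quadratic"))
```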

  20. Study design and statistical analysis of data in human population studies with the micronucleus assay.

    PubMed

    Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

    2011-01-01

    The most common study design in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA-damaging effects of exposure to genotoxic agents in the workplace, in the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies, since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. The control of confounding is another major threat to the validity of causal inference. The most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data are addressed in this mini-review, starting from data description, a critical step of statistical analysis since it allows possible errors in the dataset to be detected and the validity of assumptions required for more complex analyses to be checked. Basic issues in the statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship between two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented before addressing the issue of the multivariate models most suitable for fitting MN data. In the last decade the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
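
    A minimal sketch of the Poisson model recommended above, fitted to simulated MN counts with statsmodels; the covariates and effect sizes are invented for illustration only.

```python
# Poisson regression for micronucleus counts (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(20, 60, n)         # invented covariates
smoker = rng.integers(0, 2, n)
exposed = rng.integers(0, 2, n)
# Simulate MN counts whose log-rate depends on the covariates.
rate = np.exp(-1.0 + 0.02 * age + 0.3 * smoker + 0.5 * exposed)
mn_count = rng.poisson(rate)

X = sm.add_constant(np.column_stack([age, smoker, exposed]))
model = sm.GLM(mn_count, X, family=sm.families.Poisson()).fit()
print(model.summary())               # coefficients are log rate ratios
```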

  1. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  2. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High-quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China in the first decade of the new millennium. Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the share of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 and 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
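
    The study-design comparison above can be reproduced from the reported counts with a standard chi-square test on a 2x2 table; this is a sketch, assuming no continuity correction was applied.

```python
# Chi-square comparison of study-design defect proportions, using the counts
# reported in the abstract (680/1,335 in 1998 vs. 669/1,578 in 2008); it
# reproduces the reported chi-square of about 21.2.
from scipy.stats import chi2_contingency

#           defective design, sound design
table = [[680, 1335 - 680],   # 1998
         [669, 1578 - 669]]   # 2008

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```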

  3. Health Benefits of Dietary Whole Grains: An Umbrella Review of Meta-analyses.

    PubMed

    McRae, Marc P

    2017-03-01

    The purpose of this study was to review the effectiveness of whole grains as a therapeutic agent in type 2 diabetes, cardiovascular disease, cancer, and obesity. An umbrella review of all published meta-analyses was performed. A PubMed search from January 1, 1980, to May 31, 2016, was conducted using the following search strategy: (whole grain OR whole grains) AND (meta-analysis OR systematic review). Only English-language publications that provided quantitative statistical analysis of type 2 diabetes, cardiovascular disease, cancer, and weight loss were retrieved. Twenty-one meta-analyses were retrieved for inclusion in this umbrella review, and all reported statistically significant positive benefits for reducing the incidence of type 2 diabetes (relative risk [RR] = 0.68-0.80), cardiovascular disease (RR = 0.63-0.79), and colorectal, pancreatic, and gastric cancers (RR = 0.57-0.94), and a modest effect on body weight, waist circumference, and body fat mass. Significant reductions in cardiovascular and cancer mortality were also observed (RR = 0.82 and 0.89, respectively). Some problems of heterogeneity, publication bias, and quality assessment were found among the studies. This review suggests that there is some evidence that dietary whole grain intake is beneficial in the prevention of type 2 diabetes, cardiovascular disease, and colorectal, pancreatic, and gastric cancers. These findings suggest that the consumption of 2 to 3 servings per day (~45 g) of whole grains may be a justifiable public health goal.

  4. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  5. How Efficacious is Danshen (Salvia miltiorrhiza) Dripping Pill in Treating Angina Pectoris? Evidence Assessment for Meta-Analysis of Randomized Controlled Trials.

    PubMed

    Jia, Yongliang; Leung, Siu-Wai

    2017-09-01

    More than 230 randomized controlled trials (RCTs) of danshen dripping pill (DSP) and isosorbide dinitrate (ISDN) in treating angina pectoris have been published since the first PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)-compliant comprehensive meta-analysis appeared in 2010. Other meta-analyses had flaws in study selection, statistical meta-analysis, and evidence assessment. This study completed the meta-analysis with an extensive assessment of the evidence. RCTs published from 1994 to 2016 on DSP and ISDN in treating angina pectoris for at least 4 weeks were included. The risk of bias (RoB) of included RCTs was assessed with the Cochrane tool for assessing RoB. Meta-analyses based on a random-effects model were performed on two outcome measures: symptomatic (SYM) and electrocardiographic (ECG) improvements. Subgroup analysis, sensitivity analysis, metaregression, and publication bias analysis were also conducted. The evidence strength was evaluated with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) method. Among the included 109 RCTs with 11,973 participants, 49 RCTs and 5,042 participants were new (after 2010). The RoB of included RCTs was high in randomization and blinding. Overall effect sizes in odds ratios for DSP over ISDN were 2.94 (95% confidence interval [CI]: 2.53-3.41) on SYM (n = 108) and 2.37 (95% CI: 2.08-2.69) by ECG (n = 81), with significant heterogeneities (I² = 41%, p < 0.0001 on SYM and I² = 44%, p < 0.0001 on ECG). Subgroup, sensitivity, and metaregression analyses showed consistent results without publication bias. However, the evidence strength was low in GRADE. The efficacy of DSP was still better than that of ISDN in treating angina pectoris, but confidence in this finding is reduced by the high RoB and heterogeneities.
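
    The heterogeneity statistics quoted above (Cochran's Q and I²) can be sketched as follows from invented study-level log odds ratios and standard errors; this illustrates the formulas, not the authors' analysis.

```python
import numpy as np
from scipy.stats import chi2

log_or = np.array([1.2, 0.9, 1.5, 0.7, 1.1, 1.6])    # invented per-study log ORs
se = np.array([0.30, 0.25, 0.40, 0.20, 0.35, 0.45])  # invented standard errors

w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)              # fixed-effect pooled log OR
Q = np.sum(w * (log_or - pooled)**2)                 # Cochran's Q
df = len(log_or) - 1
I2 = max(0.0, (Q - df) / Q) * 100                    # % of variation across studies
p_het = chi2.sf(Q, df)                               # attributable to heterogeneity
print(f"Q = {Q:.2f}, I^2 = {I2:.0f}%, p = {p_het:.3f}")
```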

  6. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed erroneous readings. Heretofore, analyses of this type have been performed through burdensome manual procedures involving subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
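
    The abstract does not give the algorithm's details, so the following is only a loose sketch of the general idea under stated assumptions: each reading is compared nonparametrically (median and median absolute deviation) against its nearest neighbours rather than against fixed arbitrary limits. The function, data, and threshold are invented.

```python
import numpy as np

def flag_outliers(coords, readings, k=5, z=3.5):
    """Flag gauge readings far from the robust consensus of their k nearest
    neighbours, using the median and MAD instead of parametric limits."""
    flags = np.zeros(len(readings), dtype=bool)
    for i, (xy, r) in enumerate(zip(coords, readings)):
        d = np.linalg.norm(coords - xy, axis=1)
        nbrs = readings[np.argsort(d)[1:k + 1]]       # skip the gauge itself
        med = np.median(nbrs)
        mad = np.median(np.abs(nbrs - med)) or 1e-9   # avoid divide-by-zero
        flags[i] = abs(r - med) / (1.4826 * mad) > z  # robust z-score
    return flags

rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, size=(40, 2))            # gauge locations (km)
readings = rng.gamma(2.0, 5.0, size=40)               # rainfall totals (mm)
readings[7] = 250.0                                   # plant one bad reading
print(np.where(flag_outliers(coords, readings))[0])   # likely flags gauge 7
```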

  7. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. The objective was to determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles published over 5 years by FM authors affiliated with Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of the FM articles. Inferential methods were recorded in 62/66 (93.9%) of the FM articles, and advanced analyses in 29/66 (43.9%). Contingency tables (38/66, 57.6%), regression (logistic, linear; 26/66, 39.4%), and the t-test (17/66, 25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the errors and deficiencies were: no prior sample size calculation, 19/60 (31.7%); application of the wrong statistical test, 17/60 (28.3%); incomplete documentation of statistics, 59/60 (98.3%); reporting a P value without the test statistic, 32/60 (53.3%); no confidence interval reported with effect size measures, 12/60 (20.0%); use of the mean (standard deviation) to describe ordinal/nonnormal data, 8/60 (13.3%); and interpretation errors, mainly conclusions unsupported by the study data, 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and the reporting of statistics are areas for improvement in FM research articles.

  8. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study

    PubMed Central

    Nour-Eldein, Hebatallah

    2016-01-01

    Background: Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles published over 5 years by FM authors affiliated with Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of the FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of the FM articles, and advanced analyses in 29/66 (43.9%). Contingency tables (38/66, 57.6%), regression (logistic, linear; 26/66, 39.4%), and the t-test (17/66, 25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the errors and deficiencies were: no prior sample size calculation, 19/60 (31.7%); application of the wrong statistical test, 17/60 (28.3%); incomplete documentation of statistics, 59/60 (98.3%); reporting a P value without the test statistic, 32/60 (53.3%); no confidence interval reported with effect size measures, 12/60 (20.0%); use of the mean (standard deviation) to describe ordinal/nonnormal data, 8/60 (13.3%); and interpretation errors, mainly conclusions unsupported by the study data, 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and the reporting of statistics are areas for improvement in FM research articles. PMID:27453839

  9. [Histologic assessment of tissue healing of hyaline cartilage by use of semiquantitative evaluation scale].

    PubMed

    Vukasović, Andreja; Ivković, Alan; Jezek, Davor; Cerovecki, Ivan; Vnuk, Drazen; Kreszinger, Mario; Hudetz, Damir; Pećina, Marko

    2011-01-01

    Articular cartilage is an avascular and aneural tissue lacking lymph drainage, hence its inability to repair spontaneously following injury. Thus, it offers an interesting model for scientific research. A number of methods have been suggested to enhance cartilage repair, but none has yet produced significant success. The possible application of the aforementioned methods has brought about the necessity to evaluate their results. The objective of this study was to analyze the results of a study of the effects of TGF-beta gene-transduced bone marrow clot on articular cartilage defects using the ICRS visual histological assessment scale. The research was conducted on 28 skeletally mature sheep that were randomly assigned to four groups, in which femoral chondral defects were surgically created. The articular surfaces were then treated with TGF-beta1 gene-transduced bone marrow clot (TGF group), GFP-transduced bone marrow clot (GFP group), or untransduced bone marrow clot (BM group), or left untreated (NC group). The analysis was performed by visual examination of cartilage samples and results were obtained using the ICRS visual histological assessment scale. The results were subsequently subjected to statistical assessment using Kruskal-Wallis and Mann-Whitney tests. The Kruskal-Wallis test yielded a statistically significant difference with respect to cell distribution. The Mann-Whitney test showed statistically significant differences between the TGF and NC groups (P = 0.002), as well as between the BM and NC groups (P = 0.002, with Bonferroni correction). Twenty-six of the twenty-eight samples were subjected to histologic and subsequent statistical analysis; two were discarded due to faulty histology technique. Our results provide some evidence of a positive effect of TGF-beta1 gene-transduced bone marrow clot in the restoration of articular cartilage defects. However, additional research in the field is necessary. One significant drawback of the histologic assessment of cartilage samples was errors in histologic preparation, because of which some samples had to be discarded and which significantly impaired the analytical quality of the others. Defects of structures surrounding the articular cartilage, e.g., subchondral bone or connective tissue, might also impair the quality of histologic analysis. Additional analyses, e.g., polarizing microscopy, should be performed to determine the degree of integration of the newly formed tissue with the surrounding cartilage. The semiquantitative ICRS scale, although of great practical value, has limitations as to the objectivity of the assessment, given the analytical ability of the evaluator and the accuracy of semiquantitative analysis in comparison to quantitative methods. Overall, the results of the histologic analysis indicated that the application of TGF-beta1 gene-transduced bone marrow clot could have measurable clinical effects on articular cartilage repair. The ICRS visual histological assessment scale is a valuable analytical method for cartilage repair evaluation. In this respect, further analyses of the method's value would be of great importance.
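
    The nonparametric workflow described above (an omnibus Kruskal-Wallis test followed by Bonferroni-corrected pairwise Mann-Whitney tests) can be sketched as follows; the group scores are invented ordinal grades, not the study's data.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "TGF": [3, 3, 2, 3, 2, 3, 2],   # invented ICRS-style grades per group
    "GFP": [2, 2, 1, 2, 2, 1, 2],
    "BM":  [3, 2, 3, 3, 2, 2, 3],
    "NC":  [1, 1, 0, 1, 1, 0, 1],
}

h, p = kruskal(*groups.values())                     # omnibus test
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))                # 6 pairwise comparisons
for a, b in pairs:
    u, p_raw = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    # Bonferroni correction: multiply each raw p-value by the number of tests.
    print(f"{a} vs {b}: p = {min(1.0, p_raw * len(pairs)):.4f} (Bonferroni)")
```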

  10. Targeting intensive versus conventional glycaemic control for type 1 diabetes mellitus: a systematic review with meta-analyses and trial sequential analyses of randomised clinical trials.

    PubMed

    Kähler, Pernille; Grevstad, Berit; Almdal, Thomas; Gluud, Christian; Wetterslev, Jørn; Lund, Søren Søgaard; Vaag, Allan; Hemmingsen, Bianca

    2014-08-19

    To assess the benefits and harms of targeting intensive versus conventional glycaemic control in patients with type 1 diabetes mellitus, a systematic review with meta-analyses and trial sequential analyses of randomised clinical trials was performed. The Cochrane Library, MEDLINE, EMBASE, Science Citation Index Expanded and LILACS were searched to January 2013. Randomised clinical trials that prespecified different targets of glycaemic control in participants of any age with type 1 diabetes mellitus were included. Two authors independently assessed studies for inclusion and extracted data. 18 randomised clinical trials included 2254 participants with type 1 diabetes mellitus. All trials had high risk of bias. There was no statistically significant effect of targeting intensive glycaemic control on all-cause mortality (risk ratio 1.16, 95% CI 0.65 to 2.08) or cardiovascular mortality (0.49, 0.19 to 1.24). Targeting intensive glycaemic control reduced the relative risks for the composite macrovascular outcome (0.63, 0.41 to 0.96; p=0.03) and nephropathy (0.37, 0.27 to 0.50; p<0.00001). The effect estimates for retinopathy, ketoacidosis and retinal photocoagulation were not consistently statistically significant between random- and fixed-effects models. The risk of severe hypoglycaemia was significantly increased with intensive glycaemic targets (1.40, 1.01 to 1.94). Trial sequential analyses showed that the accrued data were, in general, inadequate to demonstrate a relative risk reduction of 10%. There was no significant effect on all-cause mortality when targeting intensive glycaemic control compared with conventional glycaemic control. However, there may be beneficial effects of targeting intensive glycaemic control on the composite macrovascular outcome and on nephropathy, and detrimental effects on severe hypoglycaemia. Notably, the data for retinopathy and ketoacidosis were inconsistent. There was a severe lack of reporting on patient-relevant outcomes, and all trials had poor bias control. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be lower in surgical journals than in medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as the proportion of fulfilled criteria among the analyses relevant to that study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  12. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations.

    PubMed

    Mueller, Monika; D'Addario, Maddalena; Egger, Matthias; Cevallos, Myriam; Dekkers, Olaf; Mugglin, Catrina; Scott, Pippa

    2018-05-21

    Systematic reviews and meta-analyses of observational studies are frequently performed, but no widely accepted guidance is available at present. We performed a systematic scoping review of published methodological recommendations on how to systematically review and meta-analyse observational studies. We searched online databases and websites and contacted experts in the field to locate potentially eligible articles. We included articles that provided any type of recommendation on how to conduct systematic reviews and meta-analyses of observational studies. We extracted and summarised recommendations on pre-defined key items: protocol development, research question, search strategy, study eligibility, data extraction, dealing with different study designs, risk of bias assessment, publication bias, heterogeneity, statistical analysis. We summarised recommendations by key item, identifying areas of agreement and disagreement as well as areas where recommendations were missing or scarce. The searches identified 2461 articles of which 93 were eligible. Many recommendations for reviews and meta-analyses of observational studies were transferred from guidance developed for reviews and meta-analyses of RCTs. Although there was substantial agreement in some methodological areas there was also considerable disagreement on how evidence synthesis of observational studies should be conducted. Conflicting recommendations were seen on topics such as the inclusion of different study designs in systematic reviews and meta-analyses, the use of quality scales to assess the risk of bias, and the choice of model (e.g. fixed vs. random effects) for meta-analysis. There is a need for sound methodological guidance on how to conduct systematic reviews and meta-analyses of observational studies, which critically considers areas in which there are conflicting recommendations.

  13. Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments.

    PubMed

    Sonuga-Barke, Edmund J S; Brandeis, Daniel; Cortese, Samuele; Daley, David; Ferrin, Maite; Holtmann, Martin; Stevenson, Jim; Danckaerts, Marina; van der Oord, Saskia; Döpfner, Manfred; Dittmann, Ralf W; Simonoff, Emily; Zuddas, Alessandro; Banaschewski, Tobias; Buitelaar, Jan; Coghill, David; Hollis, Chris; Konofal, Eric; Lecendreux, Michel; Wong, Ian C K; Sergeant, Joseph

    2013-03-01

    Nonpharmacological treatments are available for attention deficit hyperactivity disorder (ADHD), although their efficacy remains uncertain. The authors undertook meta-analyses of the efficacy of dietary (restricted elimination diets, artificial food color exclusions, and free fatty acid supplementation) and psychological (cognitive training, neurofeedback, and behavioral interventions) ADHD treatments. Using a common systematic search and a rigorous coding and data extraction strategy across domains, the authors searched electronic databases to identify published randomized controlled trials that involved individuals who were diagnosed with ADHD (or who met a validated cutoff on a recognized rating scale) and that included an ADHD outcome. Fifty-four of the 2,904 nonduplicate screened records were included in the analyses. Two different analyses were performed. When the outcome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (standardized mean differences=0.21-0.48) and psychological (standardized mean differences=0.40-0.64) treatments produced statistically significant effects. However, when the best probably blinded assessment was employed, effects remained significant for free fatty acid supplementation (standardized mean difference=0.16) and artificial food color exclusion (standardized mean difference=0.42) but were substantially attenuated to nonsignificant levels for other treatments. Free fatty acid supplementation produced small but significant reductions in ADHD symptoms even with probably blinded assessments, although the clinical significance of these effects remains to be determined. Artificial food color exclusion produced larger effects but often in individuals selected for food sensitivities. Better evidence for efficacy from blinded assessments is required for behavioral interventions, neurofeedback, cognitive training, and restricted elimination diets before they can be supported as treatments for core ADHD symptoms.

  14. Environmental exposure assessment in European birth cohorts: results from the ENRIECO project

    PubMed Central

    2013-01-01

    Environmental exposures during pregnancy and early life may have adverse health effects. Single birth cohort studies often lack the statistical power to tease out such effects reliably. To improve the use of existing data and to facilitate collaboration among these studies, an inventory of the environmental exposure and health data in these studies was made as part of the ENRIECO (Environmental Health Risks in European Birth Cohorts) project. The focus with regard to exposure was on outdoor air pollution, water contamination, allergens and biological organisms, metals, pesticides, smoking and second-hand tobacco smoke (SHS), persistent organic pollutants (POPs), noise, radiation, and occupational exposures. The review lists methods and data on environmental exposures in 37 European birth cohort studies. Most data are currently available for smoking and SHS (N=37 cohorts), occupational exposures (N=33), outdoor air pollution, and allergens and microbial agents (N=27). Exposure modeling is increasingly used for long-term air pollution exposure assessment; biomonitoring is used for assessment of exposure to metals, POPs and other chemicals; and environmental monitoring is used for house dust mite exposure assessment. Collaborative analyses with data from several birth cohorts have already been performed successfully for outdoor air pollution, water contamination, allergens, biological contaminants, molds, POPs and SHS. Key success factors for collaborative analyses are common definitions of the main exposure and health variables. Our review emphasizes that such common definitions should ideally be agreed upon in the study design phase. However, careful comparison of methods used in existing studies also offers excellent opportunities for collaborative analyses. Investigators can use this review to evaluate the potential for future collaborative analyses with respect to data availability and methods used in the different cohorts, and to identify potential partners for a specific research question. PMID:23343014

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirazi, M.A.; Davis, L.R.

    To obtain improved prediction of heated plume characteristics from a surface jet, an integral analysis computer model was modified and a comprehensive set of field and laboratory data available from the literature was gathered, analyzed, and correlated for estimating the magnitude of certain coefficients that are normally introduced in these analyses to achieve closure. The parameters so estimated include the coefficients for entrainment, turbulent exchange, drag, and shear. Since there appeared considerable scatter in the data, even after appropriate subgrouping to narrow the influence of various flow conditions on the data, only statistical procedures could be applied to find the best fit. This and other analyses of its type have been widely used in industry and government for the prediction of thermal plumes from steam power plants. Although the present model has many shortcomings, a recent independent and exhaustive assessment of such predictions revealed that, in comparison with other analyses of its type, the present analysis predicts the field situations more successfully.

  16. Assessing potential effects of highway runoff on receiving-water quality at selected sites in Oregon with the Stochastic Empirical Loading and Dilution Model (SELDM)

    USGS Publications Warehouse

    Risley, John C.; Granato, Gregory E.

    2014-01-01

    An analysis of the use of grab sampling and nonstochastic upstream modeling methods was done to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the criterion concentration of the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used, so careful selection of representative values is important.

  17. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  18. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  19. Angiogenesis and lymphangiogenesis as prognostic factors after therapy in patients with cervical cancer

    PubMed Central

    Makarewicz, Roman; Kopczyńska, Ewa; Marszałek, Andrzej; Goralewska, Alina; Kardymowicz, Hanna

    2012-01-01

    Aim of the study This retrospective study attempts to evaluate the influence of serum vascular endothelial growth factor C (VEGF-C), microvessel density (MVD) and lymphatic vessel density (LMVD) on the result of tumour treatment in women with cervical cancer. Material and methods The research was carried out in a group of 58 patients scheduled for brachytherapy for cervical cancer. All women were patients of the Department and University Hospital of Oncology and Brachytherapy, Collegium Medicum in Bydgoszcz of Nicolaus Copernicus University in Toruń. VEGF-C was determined by means of a quantitative sandwich enzyme immunoassay using a human VEGF-C ELISA produced by Bender MedSystem, an enzyme-linked immunosorbent assay detecting the activity of human VEGF-C in body fluids. The measure of the intensity of angiogenesis and lymphangiogenesis in immunohistochemical reactions is the number of blood vessels within the tumour. Statistical analysis was done using Statistica 6.0 software (StatSoft, Inc. 2001). The Cox proportional hazards model was used for univariate and multivariate analyses. Univariate analysis of overall survival was performed as outlined by Kaplan and Meier. In all statistical analyses, p < 0.05 was taken as significant. Results In the 51 patients who presented for follow-up examination, the influence of the factors of angiogenesis and lymphangiogenesis, patients' age and the haemoglobin level at the end of treatment was assessed. Selected variables, such as patients' age, lymph vessel density (LMVD), microvessel density (MVD) and the haemoglobin (Hb) level before treatment, were analysed by means of logistic regression as potential prognostic factors for lymph node invasion. The observed differences were statistically significant for haemoglobin level before treatment and platelet number after treatment. The study revealed the following prognostic factors: lymph node status, FIGO stage, and kind of treatment. No statistically significant influence of angiogenic or lymphangiogenic factors on prognosis was found. Conclusion Angiogenic and lymphangiogenic factors have no value in predicting response to radiotherapy in cervical cancer patients. PMID:23788848
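
    The survival machinery named above (Kaplan-Meier estimation and a Cox proportional hazards model) can be sketched with the lifelines library on a small invented dataset; the column names and values are illustrative, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months": [12, 30, 45, 8, 60, 24, 50, 36, 15, 48],  # follow-up time
    "event":  [1, 1, 0, 1, 0, 1, 0, 1, 1, 0],           # 1 = death observed
    "age":    [44, 56, 61, 39, 52, 47, 58, 63, 41, 55],
    "hb_pre": [11.2, 12.8, 13.1, 10.5, 12.0, 11.8, 13.4, 12.2, 10.9, 12.6],
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"])
print(kmf.median_survival_time_)          # Kaplan-Meier median survival

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()                       # hazard ratios for age and hb_pre
```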

  20. On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations

    PubMed Central

    Lightfoot, Emma; O’Connell, Tamsin C.

    2016-01-01

    Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) that cause variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific homeland should not be attempted. PMID:27124001
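
    One of the outlier rules discussed above, Tukey's IQR fences, is sketched below on an invented set of tooth-enamel δ18O values; in practice the choice between IQR- and MAD-based rules depends on the shape of the local distribution.

```python
# IQR-fence outlier detection on invented enamel delta-18O values (permil);
# a point outside the fences is a candidate first-generation migrant.
import numpy as np

d18o = np.array([26.1, 26.4, 25.9, 26.7, 26.2, 25.8, 26.5,
                 26.0, 26.3, 28.9])          # last value: possible migrant
q1, q3 = np.percentile(d18o, [25, 75])
iqr = q3 - q1
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = d18o[(d18o < lo_fence) | (d18o > hi_fence)]
print(outliers)                              # -> [28.9]
```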

  1. Research Waste: How Are Dental Survival Articles Indexed and Reported?

    PubMed

    Layton, Danielle M; Clarke, Michael

    2016-01-01

    Research waste occurs when research is ignored, cannot be found, cannot be used, or is unintentionally repeated. This article aims to investigate how dental survival analyses were indexed and reported, and to discuss whether errors in indexing and writing articles are affecting the identification and use of survival articles, contributing to research waste. Articles reporting survival of dental prostheses in humans (also known as time-to-event analyses) were identified by searching the 50 dental journals with the highest Impact Factor in 2008. These journals were hand searched twice (Kappa 0.92), and the articles were assessed by two independent reviewers (Kappa 0.86) to identify dental survival articles ("case" articles, n = 95), likely false positives (active controls, n = 91), and all other true negative articles (passive controls, n = 6,769). The study thus used a case-control design. Once identified, the different groups of articles were assessed and compared. Allocation of survival-related medical subject headings (MeSH) by MEDLINE indexers was examined, as was the authors' use of survival-related words in the title and abstract, and of survival-related words and figures in the articles themselves. Differences were assessed with chi-square and Fisher's exact statistics. Reporting quality was also assessed. The results were reviewed to discuss their potential impact on research waste. Allocation of survival-related MeSH index terms across the three article groups was inconsistent and inaccurate. Statistical MeSH had not been allocated to 30% of the dental survival "case" articles and had been incorrectly allocated to 15% of active controls. Additionally, the information reported by authors in titles and abstracts varied, with only two-thirds of survival "case" articles mentioning survival statistics in the abstract. In the articles themselves, time-to-event statistical methods, survival curves, and life tables were poorly reported or constructed. Overall, the low quality of indexing by indexers and of reporting by authors means that these articles will not be readily identifiable through electronic searches, and, even if they are found, the poor reporting quality makes it unnecessarily difficult for readers to understand and use them. There are substantial problems with the reporting of time-to-event analyses in the dental literature. These problems will adversely impact how these articles can be found and used, thereby contributing to research waste. Changes are needed in the way that authors report these studies and the way indexers classify them.

  2. Rapid assessment of tinnitus-related psychological distress using the Mini-TQ.

    PubMed

    Hiller, Wolfgang; Goebel, Gerhard

    2004-01-01

    The aim of this study was to develop an abridged version of the Tinnitus Questionnaire (TQ) to be used as a quick tool for the assessment of tinnitus-related psychological distress. Data from 351 inpatients and 122 outpatients with chronic tinnitus were used to analyse item statistics and psychometric properties. Twelve items with an optimal combination of high item-total correlations, reliability and sensitivity in assessing changes were selected for the Mini-TQ. Correlation with the full TQ was >0.90, and test-retest reliability was 0.89. Validity was confirmed by associations with general psychological symptom patterns. Treatment effects indicated by the Mini-TQ were slightly greater than those indicated by the full TQ. The Mini-TQ is recommended as a psychometrically sound tool for the rapid and economical assessment of subjective tinnitus distress.

  3. Analyzing phenological extreme events over the past five decades in Germany

    NASA Astrophysics Data System (ADS)

    Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp

    2010-05-01

    As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last 5 decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence of extremely early or late phenological events. We analyse four phenological groups: "begin of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as the DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis, comparing differences in extreme onsets between three common northern conifers, and we conducted a within-species analysis with different phases of horse chestnut throughout the year. The first statistical approach fits the data to a Gaussian model using traditional statistical techniques and then analyses the extreme quantiles. The key point of this approach is the adoption of an appropriate probability density function (PDF) for the observed data and the assessment of the change in the PDF parameters over time. The full analytical description, in terms of the PDF estimated for defined time steps of the observation period, allows probability assessments of extreme values for, e.g., annual or decadal time steps. Related to this approach is the possibility of counting the onsets that fall into our defined extreme percentiles. Estimating the probability of extreme events on the basis of the whole dataset stands in contrast to analyses with the generalized extreme value (GEV) distribution. The second approach deals with the extreme PDFs themselves and fits the GEV distribution to the annual minima of phenological series to provide useful estimates of return levels. For flowering and leaf unfolding phases, exceptionally early extremes are seen since the mid-1980s, and especially in the single years 1961, 1990 and 2007, whereas exceptionally late events are seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts towards early extremes than spring phases. Leaf colouring phases reveal an increasing probability of late extremes. The GEV-estimated 100-year events for Picea, Pinus and Larix amount to extremely early onsets of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we obtain a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
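
    The second approach described above (fitting the GEV to annual minima and reading off return levels) can be sketched as follows; the series is simulated, not the DWD data, and scipy's genextreme parameterizes block maxima, so the minima are negated before fitting.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Annual minima of onset anomalies (negative = extremely early), simulated.
annual_minima = -np.abs(rng.gumbel(loc=15, scale=6, size=50))

# Fit the GEV to the negated minima (turning them into maxima), then negate
# the return level back to the minima scale.
shape, loc, scale = genextreme.fit(-annual_minima)
ret_100 = -genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"estimated 100-year early-onset event: {ret_100:.1f} days")
```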

  4. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    PubMed

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach, which assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I2GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I2GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect is underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We demonstrate our proposed approach in a two-sample summary data MR analysis estimating the causal effect of low-density lipoprotein on heart disease risk; a value of I2GX close to 1 indicates that dilution does not materially affect the standard MR-Egger analyses for these data. Care must be taken to assess the NOME assumption via the I2GX statistic before implementing standard MR-Egger regression in the two-sample summary data context. If I2GX is sufficiently low (less than 90%), inferences from the method should be interpreted with caution and adjustment methods considered. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.

  5. Are we there yet?

    PubMed

    Cristianini, Nello

    2010-05-01

    Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence? 2010 Elsevier Ltd. All rights reserved.

  6. Performance assessment through pre- and post-training evaluation of continuing medical education courses in prevention and management of cardio-vascular diseases in primary health care facilities of Armenia.

    PubMed

    Khachatryan, Lilit; Balalian, Arin

    2013-12-01

    To assess the difference between pre- and post-training performance in continuing medical education (CME) courses on cardio-vascular disease (CVD) management among physicians at primary health care facilities in the Armenian regions, we conducted an evaluation survey. 212 medical records were surveyed to assess performance before and after the training courses, using a self-designed structured questionnaire. Analysis of the survey revealed statistically significant differences (p < 0.05) in a number of variables: threefold increased recording of lipids and body mass index (p = 0.001); moderately increased recording of comorbidities and aspirin prescription (p < 0.012); eightfold increased recording of a dyslipidemia management plan, twofold increased recording of a CVD management plan and fivefold increased recording of CVD absolute risk (p = 0.000). Missing records of electrocardiography and urine/creatinine analyses decreased statistically significantly (p < 0.05). A statistically significant decrease was observed in the prescription of thiazides and angiotensin receptor blockers/angiotensin-converting enzyme inhibitors (p < 0.005), while the prescription of statins, and of statins with diet, for dyslipidemia management showed increased recording (p < 0.05). Similarly, we observed increased records of counseling on rehabilitation physical activity (p = 0.006). In this survey, most differences in pre- and post-training performance assessment may be explained by improved and interactive training modes and more advanced methods of demonstration and modeling. The current findings may serve as a basis for future planning of CME courses for physicians in remote areas who face challenges in upgrading their knowledge, as well as expand the experience of performance assessment alongside the evaluation of knowledge scores.

  7. Impact of specimen adequacy on the assessment of renal allograft biopsy specimens.

    PubMed

    Cimen, S; Geldenhuys, L; Guler, S; Imamoglu, A; Molinari, M

    2016-01-01

    The Banff classification was introduced to achieve uniformity in the assessment of renal allograft biopsies. The primary aim of this study was to evaluate the impact of specimen adequacy on the Banff classification. All renal allograft biopsies obtained between July 2010 and June 2012 for suspicion of acute rejection were included. Pre-biopsy clinical data on suspected diagnosis and time from renal transplantation were provided to a nephropathologist who was blinded to the original pathological report. Second pathological readings were compared with the original to assess agreement stratified by specimen adequacy. Cohen's kappa test and Fisher's exact test were used for statistical analyses. Forty-nine specimens were reviewed. Among these specimens, 81.6% were classified as adequate, 6.12% as minimal, and 12.24% as unsatisfactory. The agreement analysis among the first and second readings revealed a kappa value of 0.97. Full agreement between readings was found in 75% of the adequate specimens, and in 66.7% and 50% of the minimal and unsatisfactory specimens, respectively. There was no agreement between readings in 5% of the adequate specimens and 16.7% of the unsatisfactory specimens. For the entire sample, full agreement was found in 71.4%, partial agreement in 20.4% and no agreement in 8.2% of the specimens. Statistical analysis using Fisher's exact test yielded a P value above 0.25, showing that, probably due to the small sample size, the results were not statistically significant. Specimen adequacy may be a determinant of diagnostic agreement in renal allograft specimen assessment. While additional studies including larger case numbers are required to further delineate the impact of specimen adequacy on the reliability of histopathological assessments, specimen quality must be considered during clinical decision making when dealing with biopsy reports based on minimal or unsatisfactory specimens.
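    Both test statistics used in this study are available in standard Python libraries; a minimal illustration with made-up ratings and counts (not the study's data) might look like this:

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import fisher_exact

# Hypothetical paired diagnostic readings (first vs. second pathologist)
first  = ["rejection", "rejection", "normal", "borderline", "normal", "rejection"]
second = ["rejection", "rejection", "normal", "rejection",  "normal", "rejection"]
print("Cohen's kappa:", round(cohen_kappa_score(first, second), 3))

# Hypothetical 2x2 table: specimen adequacy (rows) vs. full agreement (columns)
table = [[30, 10],   # adequate:   agree / disagree
         [4,   5]]   # inadequate: agree / disagree
odds_ratio, p_value = fisher_exact(table)
print("Fisher's exact test p =", round(p_value, 3))
```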

  8. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients), Presentation (Web-Server), Processing (Application-Server), Data (Database)...organization in each period. This data will be collected to analyze. i) Analyses and Validation: We will do a statistics test on this data, Pareto analyses and confirmation...notes, outstanding deliveries, and inventory.

  9. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    USGS Publications Warehouse

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
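    The convergence step in this process, checking whether independent statistical methods recover similar groupings, can be quantified with a pairwise similarity index. A rough sketch (a hypothetical site-by-species matrix and two generic clustering methods standing in for the authors' suite; values near 1 indicate convergent groupings):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(7)
# Hypothetical assemblage data: 40 sites x 12 species (abundance counts),
# built so that two site groups genuinely differ.
sites = np.vstack([rng.poisson(2, (20, 12)), rng.poisson(6, (20, 12))])

labels_cl = AgglomerativeClustering(n_clusters=2).fit_predict(sites)
labels_km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sites)

# Adjusted Rand index: ~1 for convergent groupings, ~0 for chance agreement.
print("Adjusted Rand index:", round(adjusted_rand_score(labels_cl, labels_km), 2))
```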

  10. The direct and indirect effects of lurasidone monotherapy on functional improvement among patients with bipolar depression: results from a randomized placebo-controlled trial.

    PubMed

    Rajagopalan, Krithika; Bacci, Elizabeth Dansie; Wyrwich, Kathleen W; Pikalov, Andrei; Loebel, Antony

    2016-12-01

    Bipolar depression is characterized by depressive symptoms and impairment in many areas of functioning, including work, family, and social life. The objective of this study was to assess the independent, direct effect of lurasidone treatment on functioning improvement, and to examine the indirect effect of lurasidone treatment on functioning improvement mediated through improvements in depression symptoms. Data from a 6-week placebo-controlled trial assessing the effect of lurasidone monotherapy versus placebo in patients with bipolar depression were used. Patient functioning was measured using the Sheehan Disability Scale (SDS). Descriptive statistics were used to assess the effect of lurasidone on improvement in the SDS total and domain scores (work/school, social, and family life), as well as the number of days lost and unproductive due to symptoms. Path analyses evaluated the total effect (β1), as well as the indirect effect (β2×β3) and direct effect (β4) of lurasidone treatment on SDS total score change, using standardized beta path coefficients and baseline scores as covariates. The direct effect of treatment on SDS total score change and the indirect effects accounting for mediation through depression improvement were examined for statistical significance and magnitude using MPlus. In this 6-week trial (N = 485), change scores from baseline to 6 weeks were significantly larger for both lurasidone treatment dosage groups versus placebo on the SDS total and all three SDS domain scores (p < 0.05). Through path analyses, lurasidone treatment predicted improvement in depression (β2 = -0.33, p = 0.009), subsequently predicting improvement in functional impairment (β3 = 0.70, p < 0.001; indirect effect = -0.23). The direct effect was of medium magnitude (β4 = -0.17, p = 0.04), indicating lurasidone had a significant and direct effect on improvement in functional impairment after accounting for depression improvement. Results demonstrated statistically significant improvement in functioning among patients on lurasidone monotherapy compared to placebo. Improvement in functioning among patients on lurasidone was largely mediated through a reduction in depression symptoms, but lurasidone also had a medium and statistically significant independent direct effect in improving functioning.
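    The product-of-coefficients decomposition used in the path analysis can be sketched with two ordinary regressions. This toy version uses simulated data seeded to mirror the reported path values (it is not MPlus and not the trial data): the indirect effect is the product of the treatment-to-mediator path (β2) and the mediator-to-outcome path (β3), and the direct effect is the treatment coefficient with the mediator included (β4).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treatment = rng.integers(0, 2, n)                      # 0 = placebo, 1 = active
depression = -0.33 * treatment + rng.normal(0, 1, n)   # mediator change score
functioning = 0.70 * depression - 0.17 * treatment + rng.normal(0, 1, n)

# Path a (beta2): treatment -> depression improvement
a = sm.OLS(depression, sm.add_constant(treatment)).fit().params[1]
# Paths b (beta3) and c' (beta4): mediator and treatment -> functioning
X = sm.add_constant(np.column_stack([depression, treatment]))
fit = sm.OLS(functioning, X).fit()
b, c_prime = fit.params[1], fit.params[2]

print(f"indirect effect (a*b) = {a * b:+.3f}")   # expect roughly -0.23
print(f"direct effect   (c')  = {c_prime:+.3f}") # expect roughly -0.17
print(f"total effect          = {a * b + c_prime:+.3f}")
```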

  11. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

    A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  12. Evaluation of the ecological relevance of mysid toxicity tests using population modeling techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhn-Hines, A.; Munns, W.R. Jr.; Lussier, S.

    1995-12-31

    A number of acute and chronic bioassay statistics are used to evaluate the toxicity and risks of chemical stressors to the mysid shrimp, Mysidopsis bahia. These include LC50s from acute tests, NOECs from 7-day and life-cycle tests, and the US EPA Water Quality Criteria Criterion Continuous Concentrations (CCC). Because these statistics are generated from endpoints which focus upon the responses of individual organisms, their relationships to significant effects at higher levels of ecological organization are unknown. This study was conducted to evaluate the quantitative relationships between toxicity test statistics and a concentration-based statistic derived from exposure-response models describing population growth rate (λ) in relation to stressor concentration. This statistic, C• (the concentration where λ = 1, i.e. zero population growth), describes the concentration above which mysid populations are projected to decline in abundance as determined using population modeling techniques. An analysis of M. bahia responses to 9 metals and 9 organic contaminants indicated the NOEC from life-cycle tests to be the best predictor of C•, although the acute LC50 predicted population-level response surprisingly well. These analyses provide useful information regarding uncertainties of extrapolation among test statistics in assessments of ecological risk.
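    The population-level statistic comes from projection-matrix analysis: λ is the dominant eigenvalue of a stage-structured matrix whose vital rates decline with concentration, and C• (written C_star below) is the concentration at which λ crosses 1. A schematic illustration with invented vital-rate functions, not the mysid data:

```python
import numpy as np

def growth_rate(conc):
    """Dominant eigenvalue of a toy 3-stage Leslie matrix in which
    survival and fecundity decline exponentially with concentration."""
    survival = 0.8 * np.exp(-0.02 * conc)
    fecundity = 6.0 * np.exp(-0.03 * conc)
    L = np.array([[0.0,      0.0,      fecundity     ],
                  [survival, 0.0,      0.0           ],
                  [0.0,      survival, 0.7 * survival]])
    return max(abs(np.linalg.eigvals(L)))

# Bisection for C_star: the concentration where lambda = 1
lo, hi = 0.0, 200.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if growth_rate(mid) > 1 else (lo, mid)
print(f"lambda(0) = {growth_rate(0.0):.2f}, C_star ~ {lo:.1f} (concentration units)")
```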

  13. The Work-Family Conflict Scale (WAFCS): development and initial validation of a self-report measure of work-family conflict for use with parents.

    PubMed

    Haslam, Divna; Filus, Ania; Morawska, Alina; Sanders, Matthew R; Fletcher, Renee

    2015-06-01

    This paper outlines the development and validation of the Work-Family Conflict Scale (WAFCS), designed to measure work-to-family conflict (WFC) and family-to-work conflict (FWC) for use with parents of young children. An expert informant and consumer feedback approach was utilised to develop and refine 20 items, which were subjected to a rigorous validation process using two separate samples of parents of 2-12-year-old children (n = 305 and n = 264). As a result of statistical analyses, several items were dropped, resulting in a brief 10-item scale comprising two subscales assessing theoretically distinct but related constructs: FWC (five items) and WFC (five items). Analyses revealed both subscales have good internal consistency and construct validity, as well as concurrent and predictive validity. The results indicate the WAFCS is a promising brief measure for the assessment of work-family conflict in parents. Benefits of the measure as well as potential uses are discussed.

  14. Hypervelocity Impact of Unstressed and Stressed Titanium in a Whipple Configuration in Support of the Orion Crew Exploration Vehicle Service Module Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Nahra, Henry K.; Christiansen, Eric; Piekutowski, Andrew; Lyons, Frankel; Keddy, Christopher; Salem, Jonathan; Miller, Joshua; Bohl, William; Poormon, Kevin; Greene, Nathanel; hide

    2010-01-01

    Hypervelocity impacts were performed on six unstressed and six stressed titanium coupons with aluminium shielding in order to assess the effects of the partial penetration damage on the post-impact micromechanical properties of titanium and on the residual strength after impact. This work is performed in support of the definition of the penetration criteria of the propellant tank surfaces for the service module of the crew exploration vehicle, where such a criterion is based on testing and analyses rather than on historical precedence. The objective of this work is to assess the effects of applied biaxial stress on the damage dynamics and morphology. The crater statistics revealed minute differences between stressed and unstressed coupon damage. The post-impact residual stress analyses showed that the titanium strength properties were generally unchanged for the unstressed coupons when compared with undamaged titanium. However, high localized strains were shown near the craters during the tensile tests.

  15. Hypervelocity Impact of Unstressed and Stressed Titanium in a Whipple Configuration in Support of the Orion Crew Exploration Vehicle Service Module Propellant Tanks

    NASA Technical Reports Server (NTRS)

    Nahra, Henry K.; Christiansen, Eric; Piekutowski, Andrew; Lyons, Frankel; Keddy, Christopher; Salem, Jonathan; Poormon, Kevin; Bohl, William; Miller, Joshua; Greene, Nathanael; hide

    2010-01-01

    Hypervelocity impacts were performed on six unstressed and six stressed titanium coupons with aluminium shielding in order to assess the effects of the partial penetration damage on the post-impact micromechanical properties of titanium and on the residual strength after impact. This work is performed in support of the definition of the penetration criteria of the propellant and oxidizer tank dome surfaces for the service module of the crew exploration vehicle, where such a criterion is based on testing and analyses rather than on historical precedence. The objective of this work is to assess the effects of applied biaxial stress on the damage dynamics and morphology. The crater statistics revealed minute differences between stressed and unstressed coupon damage. The post-impact residual stress analyses showed that the titanium strength properties were generally unchanged for the unstressed coupons when compared with undamaged titanium. However, high localized strains were shown near the craters during the tensile tests.

  16. Religion and Spirituality's Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model.

    PubMed

    Lassiter, Jonathan M; Parsons, Jeffrey T

    2016-02-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM's health are outlined.

  17. Religion and Spirituality’s Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model

    PubMed Central

    Parsons, Jeffrey T.

    2015-01-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM’s health are outlined. PMID:26319130

  18. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
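    A prospective power analysis of the kind recommended here is a one-liner in standard software. For example (a two-sample t-test with an assumed biologically significant effect size of Cohen's d = 0.5 and α = 0.05, both hypothetical choices):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group to detect d = 0.5 with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"required n per group: {n_per_group:.1f}")   # about 64

# Conversely, the power achieved with a fixed, feasible sample size.
power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"power with n = 30 per group: {power:.2f}")  # well below 0.8
```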

  19. Dependency of high coastal water level and river discharge at the global scale

    NASA Astrophysics Data System (ADS)

    Ward, P.; Couasnon, A.; Haigh, I. D.; Muis, S.; Veldkamp, T.; Winsemius, H.; Wahl, T.

    2017-12-01

    It is widely recognized that floods cause huge socioeconomic impacts. From 1980 to 2013, global flood losses exceeded $1 trillion, with 220,000 fatalities. These impacts are particularly hard felt in low-lying densely populated deltas and estuaries, whose location at the coast-land interface makes them naturally prone to flooding. When river and coastal floods coincide, their impacts in these deltas and estuaries are often worse than when they occur in isolation. Such floods are examples of so-called 'compound events'. In this contribution, we present the first global-scale analysis of the statistical dependency of high coastal water levels (and the storm surge component alone) and river discharge. We show that there is statistical dependency between these components at more than half of the stations examined. We also show time-lags in the highest correlation between peak discharges and coastal water levels. Finally, we assess the probability of the simultaneous occurrence of design discharge and design coastal water levels, assuming both independence and statistical dependence. For those stations where we identified statistical dependency, the probability is between 1 and 5 times greater when the dependence structure is accounted for. This information is essential for understanding the likelihood of compound flood events occurring at locations around the world as well as for accurate flood risk assessments and effective flood risk management. The research was carried out by analysing the statistical dependency between observed coastal water levels (and the storm surge component) from GESLA-2 and river discharge using gauged data from GRDC stations all around the world. The dependence structure was examined using copula functions.
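    The headline result, joint exceedance probabilities several times larger under dependence, can be illustrated with rank correlation and a simple counting estimate. A sketch with synthetic, positively dependent surge and discharge series (not the GESLA-2/GRDC data, and a crude empirical count rather than a fitted copula):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
n = 20000
common = rng.normal(size=n)                       # shared storm driver
surge = 0.6 * common + 0.8 * rng.normal(size=n)   # coastal water level proxy
discharge = 0.6 * common + 0.8 * rng.normal(size=n)

tau, p = kendalltau(surge, discharge)
print(f"Kendall's tau = {tau:.2f} (p = {p:.1e})")

# Probability that both series exceed their own 99th percentiles:
q_s, q_d = np.quantile(surge, 0.99), np.quantile(discharge, 0.99)
p_joint = np.mean((surge > q_s) & (discharge > q_d))
p_indep = 0.01 * 0.01                             # value if truly independent
print(f"joint exceedance: {p_joint:.4f} vs {p_indep:.4f} under independence "
      f"({p_joint / p_indep:.0f}x larger)")
```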

  20. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi-simultaneously applied [1]. The used three...2.2. Vibration Diagnostics (VD) Parallel to the NDT measurements the Statistical Energy Analysis (SEA) as a vibration diagnostical tool were...noises were analysed with a dual-channel real time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  1. A quantitative analysis of factors influencing the professional longevity of high school science teachers in Florida

    NASA Astrophysics Data System (ADS)

    Ridgley, James Alexander, Jr.

    This dissertation is an exploratory quantitative analysis of various independent variables to determine their effect on the professional longevity (years of service) of high school science teachers in the state of Florida for the academic years 2011-2012 to 2013-2014. Data are collected from the Florida Department of Education, National Center for Education Statistics, and the National Assessment of Educational Progress databases. The following research hypotheses are examined: H1 - There are statistically significant differences in Level 1 (teacher variables) that influence the professional longevity of a high school science teacher in Florida. H2 - There are statistically significant differences in Level 2 (school variables) that influence the professional longevity of a high school science teacher in Florida. H3 - There are statistically significant differences in Level 3 (district variables) that influence the professional longevity of a high school science teacher in Florida. H4 - When tested in a hierarchical multiple regression, there are statistically significant differences in Level 1, Level 2, or Level 3 that influence the professional longevity of a high school science teacher in Florida. The professional longevity of a Floridian high school science teacher is the dependent variable. The independent variables are: (Level 1) a teacher's sex, age, ethnicity, earned degree, salary, number of schools taught in, migration count, and various years of service in different areas of education; (Level 2) a school's geographic location, residential population density, average class size, charter status, and SES; and (Level 3) a school district's average SES and average spending per pupil. Statistical analyses using exploratory multiple linear regressions (MLRs) and a hierarchical multiple regression (HMR) are used to support the research hypotheses. The final results of the HMR analysis show a teacher's age, salary, earned degree (unknown, associate, and doctorate), and ethnicity (Hispanic and Native Hawaiian/Pacific Islander); a school's charter status; and a school district's average SES are all significant predictors of a Florida high school science teacher's professional longevity. Although statistically significant in the initial exploratory MLR analyses, a teacher's ethnicity (Asian and Black), a school's geographic location (city and rural), and a school's SES are not statistically significant in the final HMR model.
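    Hierarchical multiple regression of this kind enters predictor blocks in order (teacher, then school, then district variables) and inspects the increment in explained variance at each step. A generic sketch with hypothetical column names and simulated data, not the Florida dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "age": rng.normal(45, 10, n),          # Level 1: teacher variables
    "salary": rng.normal(50, 8, n),
    "charter": rng.integers(0, 2, n),      # Level 2: school variable
    "district_ses": rng.normal(0, 1, n),   # Level 3: district variable
})
df["years_service"] = (0.3 * df.age + 0.1 * df.salary - 2.0 * df.charter
                       + 1.5 * df.district_ses + rng.normal(0, 5, n))

blocks = ["years_service ~ age + salary",
          "years_service ~ age + salary + charter",
          "years_service ~ age + salary + charter + district_ses"]
prev_r2 = 0.0
for level, formula in enumerate(blocks, 1):
    r2 = smf.ols(formula, data=df).fit().rsquared
    print(f"Level {level} block: R^2 = {r2:.3f} (delta = {r2 - prev_r2:+.3f})")
    prev_r2 = r2
```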

  2. Nitrogen Dioxide Exposure and Airway Responsiveness in ...

    EPA Pesticide Factsheets

    Controlled human exposure studies evaluating the effect of inhaled NO2 on the inherent responsiveness of the airways to challenge by bronchoconstricting agents have had mixed results. In general, existing meta-analyses show statistically significant effects of NO2 on the airway responsiveness of individuals with asthma. However, no meta-analysis has provided a comprehensive assessment of the clinical relevance of changes in airway responsiveness, the potential for methodological biases in the original papers, and the distribution of responses. This paper provides analyses showing that a statistically significant fraction of individuals with asthma exposed to NO2 at rest (70%) experience increases in airway responsiveness following 30-minute exposures to NO2 in the range of 200 to 300 ppb and following 60-minute exposures to 100 ppb. The distribution of changes in airway responsiveness is log-normally distributed with a median change of 0.75 (provocative dose following NO2 divided by provocative dose following filtered air exposure) and a geometric standard deviation of 1.88. About a quarter of the exposed individuals experience a clinically relevant reduction in their provocative dose due to NO2 relative to air exposure. The fraction experiencing an increase in responsiveness was statistically significant and robust to exclusion of individual studies. Results showed minimal change in airway responsiveness for individuals exposed to NO2 during exercise.
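    The reported log-normal parameters are enough to reproduce the headline fractions. A quick check with scipy, treating a dose ratio below 1 as "any increase in responsiveness" and, as an assumed threshold for illustration, a halving of the provocative dose as "clinically relevant":

```python
from math import log
from scipy.stats import lognorm

median, gsd = 0.75, 1.88                  # reported distribution of dose ratios
dist = lognorm(s=log(gsd), scale=median)  # scipy's lognormal parameterization

print(f"P(ratio < 1)   = {dist.cdf(1.0):.2f}")   # ~0.68: matches the ~70% figure
print(f"P(ratio < 0.5) = {dist.cdf(0.5):.2f}")   # ~0.26: roughly the reported quarter
```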

  3. Sources of Safety Data and Statistical Strategies for Design and Analysis: Postmarket Surveillance.

    PubMed

    Izem, Rima; Sanchez-Kam, Matilde; Ma, Haijun; Zink, Richard; Zhao, Yueqin

    2018-03-01

    Safety data are continuously evaluated throughout the life cycle of a medical product to accurately assess and characterize the risks associated with the product. The knowledge about a medical product's safety profile continually evolves as safety data accumulate. This paper discusses data sources and analysis considerations for safety signal detection after a medical product is approved for marketing. This manuscript is the second in a series of papers from the American Statistical Association Biopharmaceutical Section Safety Working Group. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from passive postmarketing surveillance systems compared to other sources. Signal detection has traditionally relied on spontaneous reporting databases that have been available worldwide for decades. However, current regulatory guidelines and ease of reporting have increased the size of these databases exponentially over the last few years. With such large databases, data-mining tools using disproportionality analysis and helpful graphics are often used to detect potential signals. Although the data sources have many limitations, analyses of these data have been successful at identifying safety signals postmarketing. Experience analyzing these dynamic data is useful in understanding the potential and limitations of analyses with new data sources such as social media, claims, or electronic medical records data.
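    Disproportionality analysis on a spontaneous-report database reduces, in its simplest form, to a 2×2 table per drug-event pair. A minimal proportional reporting ratio (PRR) sketch with invented counts (one commonly cited screening rule flags PRR ≥ 2 with at least 3 reports of the pair):

```python
import math

# Hypothetical counts from a spontaneous reporting database
a = 40      # reports with drug of interest AND event of interest
b = 960     # drug of interest, other events
c = 200     # other drugs, event of interest
d = 98800   # other drugs, other events

prr = (a / (a + b)) / (c / (c + d))
# Standard error of log(PRR) for an approximate 95% confidence interval
se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(prr) - 1.96 * se_log)
hi = math.exp(math.log(prr) + 1.96 * se_log)
print(f"PRR = {prr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```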

  4. Environmental implications of element emissions from phosphate-processing operations in southeastern Idaho

    USGS Publications Warehouse

    Severson, R.C.; Gough, L.P.

    1979-01-01

    In order to assess the contribution to plants and soils of certain elements emitted by phosphate processing, we sampled sagebrush, grasses, and A- and C-horizon soils along upwind and downwind transects at Pocatello and Soda Springs, Idaho. Analyses for 70 elements in plants showed that, statistically, the concentrations of 7 environmentally important elements (cadmium, chromium, fluorine, selenium, uranium, vanadium, and zinc) were related to emissions from phosphate-processing operations. Two additional elements, lithium and nickel, show probable relationships. The literature on the effects of these elements on plant and animal health is briefly surveyed. Relations between element content in plants and distance from the phosphate-processing operations were stronger at Soda Springs than at Pocatello and, in general, stronger in sagebrush than in the grasses. Analyses for 58 elements in soils showed that, statistically, beryllium, fluorine, iron, lead, lithium, potassium, rubidium, thorium, and zinc were related to emissions only at Pocatello and only in the A horizon. Moreover, six additional elements (copper, mercury, nickel, titanium, uranium, and vanadium) are probably similarly related along the same transect. The approximate amounts of elements added to the soils by the emissions are estimated. In C-horizon soils, no statistically significant relations were observed between element concentrations and distance from the processing sites. At Soda Springs, the nonuniformity of soils at the sampling locations may have obscured the relationship between soil-element content and emissions from phosphate processing.

  5. Dose response explorer: an integrated open-source tool for exploring and modelling radiotherapy dose volume outcome relationships

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.

    2006-11-01

    Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc.) and clinical factors (age, gender, stage, etc.) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristic curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
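    The logistic NTCP fitting that such tools perform can be approximated outside Matlab. A toy maximum-likelihood logistic fit of complication status against mean dose (simulated data and an invented dose-response curve, not DREES itself or a validated model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
mean_dose = rng.uniform(10, 70, 200)                 # Gy, simulated cohort
true_p = 1 / (1 + np.exp(-(mean_dose - 45) / 6))     # hidden dose response
complication = (rng.random(200) < true_p).astype(int)

X = sm.add_constant(mean_dose)
fit = sm.Logit(complication, X).fit(disp=0)
b0, b1 = fit.params
td50 = -b0 / b1   # dose giving a 50% complication probability
print(f"fitted TD50 = {td50:.1f} Gy, slope = {b1:.3f} per Gy")
```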

  6. Toxic essential oils. Part V: Behaviour modulating and toxic properties of thujones and thujone-containing essential oils of Salvia officinalis L., Artemisia absinthium L., Thuja occidentalis L. and Tanacetum vulgare L.

    PubMed

    Radulović, Niko S; Genčić, Marija S; Stojanović, Nikola M; Randjelović, Pavle J; Stojanović-Radić, Zorica Z; Stojiljković, Nenad I

    2017-07-01

    Neurotoxic thujones (α- and β-diastereoisomers) are common constituents of plant essential oils. In this study, we employed a statistical approach to determine the contribution of thujones to the overall observed behaviour-modulating and toxic effects of essential oils (Salvia officinalis L., Artemisia absinthium L., Thuja occidentalis L. and Tanacetum vulgare L.) containing these monoterpene ketones. The data from three in vivo neuropharmacological tests on rats (open field, light-dark, and diazepam-induced sleep) and toxicity assays (brine shrimp, and antimicrobial activity against a panel of microorganisms), together with the data from detailed chemical analyses, were subjected to a multivariate statistical treatment to reveal possible correlation(s) between the content of essential-oil constituents and the observed effects. The results strongly imply that the toxic and behaviour-modulating activity of the oils (hundreds of constituents) should not be associated exclusively with thujones. The statistical analyses pinpointed a number of essential-oil constituents other than thujones that demonstrated a clear correlation with either the toxicity, the antimicrobial effect or the activity on the CNS. Thus, in addition to the thujone content, the amount and toxicity of other constituents should be taken into consideration when making risk assessments and determining the regulatory status of plants in food and medicines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. The classification of secondary colorectal liver cancer in human biopsy samples using angular dispersive x-ray diffraction and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Theodorakou, Chrysoula; Farquharson, Michael J.

    2009-08-01

    The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data were analysed for 56 samples using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitude, the area, and the amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analyses. The statistical analysis showed that there are significant differences in all the peak-fitting parameters and ratios between the normal and the diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases, resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different, and x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.

  8. Discovering genetic variants in Crohn's disease by exploring genomic regions enriched of weak association signals.

    PubMed

    D'Addabbo, Annarita; Palmieri, Orazio; Maglietta, Rosalia; Latiano, Anna; Mukherjee, Sayan; Annese, Vito; Ancona, Nicola

    2011-08-01

    A meta-analysis has re-analysed previous genome-wide association scans, definitively confirming eleven genes and further identifying 21 new loci. However, the identified genes/loci still explain only a minority of the genetic predisposition of Crohn's disease. To identify genes weakly involved in disease predisposition, we analysed chromosomal regions enriched in single nucleotide polymorphisms with modest statistical association. We utilized the WTCCC data set evaluating 1748 CD cases and 2938 controls. The identification of candidate genes/loci was performed by a two-step procedure: first, chromosomal regions enriched in weak association signals were localized; subsequently, weak signals clustered in gene regions were identified. The statistical significance was assessed by non-parametric permutation tests. The cytoband enrichment analysis highlighted 44 regions (P≤0.05) enriched with single nucleotide polymorphisms significantly associated with the trait, including 23 out of 31 previously confirmed and replicated genes. Importantly, we highlight a further 20 novel chromosomal regions carrying approximately one hundred genes/loci with modest association. Amongst these we find compelling functional candidate genes such as MAPT, GRB2 and CREM, LCT, and IL12RB2. Our study suggests a different statistical perspective to discover genes weakly associated with a given trait, although further confirmatory functional studies are needed. Copyright © 2011 Editrice Gastroenterologica Italiana S.r.l. All rights reserved.
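    Permutation testing of an enrichment statistic follows the usual recipe: recompute the statistic under random reassignment of the region labels and compare it with the observed value. An illustrative sketch with generic association scores and an invented 100-SNP region, not the WTCCC data:

```python
import numpy as np

rng = np.random.default_rng(11)
scores = rng.exponential(1.0, 5000)   # toy per-SNP association scores
in_region = np.zeros(5000, dtype=bool)
in_region[:100] = True                # hypothetical cytoband of 100 SNPs
scores[:100] += 0.4                   # plant a weak enrichment signal

observed = scores[in_region].mean()
n_perm = 10000
null = np.array([
    scores[rng.choice(5000, 100, replace=False)].mean()
    for _ in range(n_perm)
])
# One-sided permutation p-value with the +1 correction
p = (1 + np.sum(null >= observed)) / (n_perm + 1)
print(f"observed mean score = {observed:.3f}, permutation p = {p:.4f}")
```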

  9. Using Network Analysis to Characterize Biogeographic Data in a Community Archive

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Bristol, S.

    2017-12-01

    Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. In gaining understanding through this investigation and other scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool to reveal patterns of non-trivial topological features in data that do not exhibit complete regularity or randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. Use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.

  10. Risk assessment of vector-borne diseases for public health governance.

    PubMed

    Sedda, L; Morley, D W; Braks, M A H; De Simone, L; Benz, D; Rogers, D J

    2014-12-01

    In the context of public health, risk governance (or risk analysis) is a framework for the assessment and subsequent management and/or control of the danger posed by an identified disease threat. Generic frameworks in which to carry out risk assessment have been developed by various agencies. These include monitoring, data collection, statistical analysis and dissemination. Due to the inherent complexity of disease systems, however, the generic approach must be modified for individual, disease-specific risk assessment frameworks. The analysis was based on a review of the current risk assessments of vector-borne diseases adopted by the main public health organisations (OIE, WHO, ECDC, FAO, CDC, etc.), drawing on literature, legislation and statistical assessment of the risk analysis frameworks. This review outlines the need for the development of a general public health risk assessment method for vector-borne diseases, in order to guarantee that sufficient information is gathered to apply robust models of risk assessment. Stochastic (especially spatial) methods, often in Bayesian frameworks, are now gaining prominence in standard risk assessment procedures because of their ability to accurately assess model uncertainties. Risk assessment needs to be addressed quantitatively wherever possible, and submitted with its quality assessment in order to enable successful public health measures to be adopted. In terms of current practice, often a series of different models and analyses are applied to the same problem, with results and outcomes that are difficult to compare because of the unknown model and data uncertainties. Therefore, the risk assessment areas in need of further research are identified in this article. Copyright © 2014 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  11. Interdisciplinary evaluation of dysphagia: clinical swallowing evaluation and videoendoscopy of swallowing.

    PubMed

    Sordi, Marina de; Mourão, Lucia Figueiredo; Silva, Ariovaldo Armando da; Flosi, Luciana Claudia Leite

    2009-01-01

    Patients with dysphagia have impairments in many aspects, and an interdisciplinary approach is fundamental to define diagnosis and treatment. A joint approach in the clinical and videoendoscopic evaluation is paramount. The aim was to study the correlation between the clinical assessment (ACD) and the videoendoscopic (VED) assessment of swallowing by classifying the degree of severity and the qualitative/descriptive analyses of the procedures. The study was cross-sectional, descriptive and comparative, held from March to December 2006 at the Otolaryngology/Dysphagia ward of a hospital in the countryside of São Paulo. 30 dysphagic patients with different disorders were assessed by ACD and VED. The data were classified by means of severity scales and qualitative/descriptive analysis. The correlation between the ACD and VED severity scales pointed to a statistically significant low agreement (kappa = 0.4) (p = 0.006). The correlation between the qualitative/descriptive analyses pointed to an excellent and statistically significant agreement (kappa = 0.962) (p < 0.001) concerning the entire sample. The low agreement between the severity scales points to a need to perform both procedures, reinforcing VED as a doable procedure. The descriptive qualitative analysis pointed to an excellent agreement, and such data reinforce our need to understand swallowing as a process.

  12. Analysis of data on large explosive eruptions of stratovolcanoes to constrain under-recording and eruption rates

    NASA Astrophysics Data System (ADS)

    Rougier, Jonty; Cashman, Kathy; Sparks, Stephen

    2016-04-01

    We have analysed the Large Magnitude Explosive Volcanic Eruptions database (LaMEVE) for volcanoes that classify as stratovolcanoes. A non-parametric statistical approach is used to assess the global recording rate for large (M4+) eruptions. The approach imposes minimal structure on the shape of the recording rate through time. We find that the recording rates have declined rapidly, going backwards in time. Prior to 1600 they are below 50%, and prior to 1100 they are below 20%. Even in the recent past, e.g. the 1800s, they are likely to be appreciably less than 100%. The assessment for very large (M5+) eruptions is more uncertain, due to the scarcity of events. Having taken under-recording into account, the large-eruption rates of stratovolcanoes are modelled exchangeably, in order to derive an informative prior distribution as an input into a subsequent volcano-by-volcano hazard assessment. The statistical model implies that volcano-by-volcano predictions can be grouped by the number of recorded large eruptions. Further, it is possible to combine all volcanoes together into a global large-eruption prediction, with an M4+ rate computed from the LaMEVE database of 0.57/yr.
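    Under a simple Poisson assumption (an illustrative simplification; the paper's exchangeable model is richer), a global rate of 0.57 M4+ eruptions per year yields easily computed probabilities for any time window:

```python
from math import exp

rate = 0.57                       # M4+ eruptions per year (corrected global rate)
for years in (1, 5, 10):
    p_none = exp(-rate * years)   # Poisson probability of zero events
    print(f"P(at least one M4+ eruption in {years:>2} yr) = {1 - p_none:.3f}")
```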

  13. An Assessment of Organizational Health Literacy Practices at an Academic Health Center.

    PubMed

    Prince, Latrina Y; Schmidtke, Carsten; Beck, Jules K; Hadden, Kristie B

    Organizational health literacy is the degree to which an organization considers and promotes the health literacy of patients. Addressing health literacy at an organizational level has the potential to have a greater impact on more health consumers in a health system than individual-level approaches. The purpose of this study was to assess health care practices at an academic health center using the 10 attributes of a health-literate health care organization. Using a survey research design, the Health Literate Healthcare Organization 10-Item Questionnaire was administered online using total population sampling. Employees (N = 10 300) rated the extent to which their organization's health care practices consider and promote patients' health literacy. Differences in responses were assessed using factorial analysis of variance. The mean response was 4.7 on a 7-point Likert scale. Employee training and communication about costs received the lowest ratings. Univariate analyses revealed no statistically significant differences (at P = .05) by employees' health profession, years of service, or level of patient contact. There were statistically significant differences by highest education obtained, with the lowest ratings from employees with college degrees. Survey responses indicate a need for improvements in health care practices to better assist patients with inadequate health literacy.

  14. Meta-Analysis of Correlations Between Marginal Bone Resorption and High Insertion Torque of Dental Implants.

    PubMed

    Li, Haoyan; Liang, Yongqiang; Zheng, Qiang

    2015-01-01

    To evaluate correlations between marginal bone resorption and high insertion torque value (> 50 Ncm) of dental implants and to assess the significance of immediate and early/conventional loading of implants under a certain range torque value. Specific inclusion and exclusion criteria were used to retrieve eligible articles from Ovid, PubMed, and EBSCO up to December 2013. Screening of eligible studies, quality assessment, and data extraction were conducted in duplicate. The results were expressed as random/fixed-effects models using weighted mean differences for continuous outcomes with 95% confidence intervals. Initially, 154 articles were selected (11 from Ovid, 112 from PubMed, and 31 from EBSCO). After exclusion of duplicate articles and articles that did not meet the inclusion criteria, six clinical studies were selected. Assessment of P values revealed that correlations between marginal bone resorption and high insertion torque were not statistically significant and that there was no difference between immediately versus early/conventionally loaded implants under a certain range of torque. None of the meta-analyses revealed any statistically significant differences between high insertion torque and conventional insertion torque in terms of effects on marginal bone resorption.

  15. Migration characteristics and early clinical results of a novel-finned press-fit acetabular cup.

    PubMed

    Kaipel, Martin; Prenner, Anton; Bachl, Sebastian; Farr, Sebastian; Sinz, Günter

    2014-04-01

    Ana Nova® is a novel-finned press-fit acetabular cup which showed superior biomechanical characteristics in an experimental set-up. Einzel Bild Röntgen Analyse (EBRA) measurements should offer the opportunity to predict implant survival at an early stage. The purpose of this study was to assess migration and clinical outcome 2 years after total hip replacement with a novel-finned press-fit acetabular cup. In this study, migration and clinical results of the implant were prospectively assessed in 67 patients. Clinical outcome was assessed using the Harris hip score (HHS). Migration analyses were performed using the computer-assisted EBRA system. Data were analyzed for normal distribution using the Kolmogorov-Smirnov test. Group comparisons were performed using the analysis of variance (ANOVA) test. P-values less than 0.05 were considered statistically significant. At 2 years after surgery, none of the implants needed revision and the HHS had increased from 39.7 up to 92.2. In contrast to the beneficial clinical outcome, 17 of 44 patients showed increased total migration (>1 mm/2a). Adverse migration data in this study might predict aseptic loosening and decreased survival of the implant. According to previous studies, it is possible that this effect occurred because of limited accuracy of the EBRA system. In our opinion, migration analyses may not be recommended as a screening tool in a 2-year follow-up.

  16. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
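    A cumulative meta-analysis with a trend test can be sketched in a few lines: order studies by publication, compute a running inverse-variance-weighted summary, then regress it on publication rank. The sketch below uses toy logit-sensitivity data and fixed-effect weighting for brevity (the authors used random-effects models and weighted regression):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(8)
k = 15
logit_sens = rng.normal(1.5, 0.4, k)   # per-study logit sensitivities (toy)
var = rng.uniform(0.05, 0.3, k)        # their variances; index = publication order

w = 1 / var
cumulative = np.cumsum(w * logit_sens) / np.cumsum(w)   # running IVW summary

# A plain regression of the cumulative summary on publication rank
# illustrates the idea behind the paper's weighted trend test.
trend = linregress(np.arange(1, k + 1), cumulative)
print(f"slope per additional study = {trend.slope:+.4f} (p = {trend.pvalue:.2f})")
```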

  17. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    USGS Publications Warehouse

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  18. Crop identification technology assessment for remote sensing (CITARS). Volume 6: Data processing at the laboratory for applications of remote sensing

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Cary, T. K.; Davis, B. J.; Swain, P. H.

    1975-01-01

    The results of classifications and experiments for the crop identification technology assessment for remote sensing are summarized. Using two analysis procedures, 15 data sets were classified. One procedure used class weights while the other assumed equal probabilities of occurrence for all classes. Additionally, 20 data sets were classified using training statistics from another segment or date. The classification and proportion estimation results of the local and nonlocal classifications are reported. Data from several other experiments are also described to provide additional understanding of the results of the crop identification technology assessment for remote sensing. These experiments investigated alternative analysis procedures, training set selection and size, effects of multitemporal registration, spectral discriminability of corn, soybeans, and 'other', and analyses of aircraft multispectral data.

  19. Status and trends of land change in the United States--1973 to 2000

    USGS Publications Warehouse

    ,

    2012-01-01

    U.S. Geological Survey (USGS) Professional Paper 1794 is a four-volume series on the status and trends of the Nation’s land use and land cover, providing an assessment of the rates and causes of land-use and land-cover change in the United States between 1973 and 2000. Volumes A, B, C, and D provide analyses for the Western United States, the Great Plains, the Midwest–South Central United States, and the Eastern United States, respectively. The assessments of land-use and land-cover trends are conducted on an ecoregion-by-ecoregion basis, and each ecoregion assessment is guided by a nationally consistent study design that includes mapping, statistical methods, field studies, and analysis. Individual assessments provide a picture of the characteristics of land change occurring in a given ecoregion; in combination, they provide a framework for understanding the complex national mosaic of change and also the causes and consequences of change. Thus, each volume in this series provides a regional assessment of how (and how fast) land use and land cover are changing, and why. The four volumes together form the first comprehensive picture of land change across the Nation. This report is only one of the products produced by USGS on land-use and land-cover change in the United States. Other reports and land-cover statistics are available online at http://landcovertrends.usgs.gov.

  20. Explaining nitrate pollution pressure on the groundwater resource in Kinshasa using a multivariate statistical modelling approach

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik

    2013-04-01

    Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered as a good tracer for other pollution threats. The analysis is made in terms of physical attributes that are readily available, using a statistical modelling approach. For the nitrate data, use was made of a historical groundwater quality assessment study, for which the data were re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability for nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region is the most vulnerable area. This area consists of urban land use with poor sanitation. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and the intrinsic and specific vulnerability maps. For the statistical modelling, use was made of multiple regression and regression tree analysis. The results demonstrated the significant impact of land use variables on the Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: Groundwater, Isotopic, Kinshasa, Modelling, Pollution, Physico-chemical.

  1. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue.

    PubMed

    Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F

    2011-05-20

    Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types), resulting in a total of twelve possible treatment combinations. Burst pressure of porcine carotid and renal arteries sealed ex vivo was the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical analyses were performed in SAS, using PROC MIXED to account for the randomization restrictions of the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
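
    A rough Python analogue of this analysis, assuming statsmodels' MixedLM in place of SAS PROC MIXED: temperature acts as the whole-plot factor, solution and vessel type as subplot factors, and a random intercept per whole plot absorbs the randomization restriction. All data are fabricated for illustration:

        # Sketch of a split-plot analysis via a linear mixed model; not the
        # authors' SAS code. Whole plots are randomized to temperature only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        rows = []
        for plot in range(8):                      # whole plots
            temp = "low" if plot % 2 == 0 else "high"
            plot_effect = rng.normal(0, 5)         # whole-plot (restriction) error
            for solution in ["saline", "heparin", "albumin"]:
                for vessel in ["carotid", "renal"]:
                    burst = (300 + (50 if temp == "high" else 0)
                             + plot_effect + rng.normal(0, 20))   # mmHg
                    rows.append(dict(plot=plot, temp=temp, solution=solution,
                                     vessel=vessel, burst=burst))
        df = pd.DataFrame(rows)

        # Random intercept for each whole plot; fixed effects for the factors.
        model = smf.mixedlm("burst ~ temp * solution * vessel", df,
                            groups=df["plot"])
        print(model.fit().summary())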

  2. Influence of family environment on language outcomes in children with myelomeningocele.

    PubMed

    Vachha, B; Adams, R

    2005-09-01

    Previously, our studies demonstrated language differences impacting academic performance among children with myelomeningocele and shunted hydrocephalus (MMSH). This follow-up study considers the environmental facilitators within families (achievement orientation, intellectual-cultural orientation, active recreational orientation, independence) among a cohort of children with MMSH and their relationship to language performance. Fifty-eight monolingual, English-speaking children (36 females; mean age: 10.1 years; age range: 7-16 years) with MMSH were evaluated. Exclusionary criteria were prior shunt infection; seizure or shunt malfunction within the previous 3 months; uncorrected visual or auditory impairments; and prior diagnoses of mental retardation or attention deficit disorder. The Comprehensive Assessment of Spoken Language (CASL) and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered individually to all participants. The CASL measures four subsystems: lexical, syntactic, supralinguistic and pragmatic. Parents completed the Family Environment Scale (FES) questionnaire and provided background demographic information. Spearman correlation analyses and partial correlation analyses were performed. The mean full-scale IQ for the MMSH group was 92.2 (SD = 11.9). The CASL revealed statistically significant difficulty on supralinguistic and pragmatic (or social) language tasks. FES scores fell within the average range for the group. Spearman correlation and partial correlation analyses revealed statistically significant positive relationships between the FES 'intellectual-cultural orientation' variable and performance within the four language subsystems. Socio-economic status (SES) characteristics were analyzed and did not discriminate language performance when the intellectual-cultural orientation factor was taken into account. The role of family facilitators in language skills in children with MMSH has not previously been described. The relationship between language performance and the value families place on intellectual/cultural activities seems both statistically and intuitively sound. Focused interest in the integration of family values and practices should assist developmental specialists in supporting families and children within their most natural environment.
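
    A sketch, not the study's code, of the two named analyses: a Spearman rank correlation between an FES subscale and a language score, and a partial correlation controlling for SES computed by correlating regression residuals. All values are simulated:

        # Illustrative Spearman and partial correlation; variable names
        # (fes_intellectual, language, ses) are stand-ins for the study's.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        ses = rng.normal(size=60)
        fes_intellectual = 0.4 * ses + rng.normal(size=60)    # FES subscale
        language = 0.5 * fes_intellectual + 0.3 * ses + rng.normal(size=60)

        rho, p = stats.spearmanr(fes_intellectual, language)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

        # Partial correlation given SES: residualize both variables on SES.
        def residuals(y, x):
            slope, intercept = np.polyfit(x, y, 1)
            return y - (slope * x + intercept)

        r_partial, p_partial = stats.pearsonr(residuals(fes_intellectual, ses),
                                              residuals(language, ses))
        print(f"Partial r (given SES) = {r_partial:.2f}, p = {p_partial:.3f}")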

  3. Randomized trial of parent training to prevent adolescent problem behaviors during the high school transition.

    PubMed

    Mason, W Alex; Fleming, Charles B; Gross, Thomas J; Thompson, Ronald W; Parra, Gilbert R; Haggerty, Kevin P; Snyder, James J

    2016-12-01

    This randomized controlled trial tested a widely used general parent training program, Common Sense Parenting (CSP), with low-income 8th graders and their families to support a positive transition to high school. The program was tested in its original 6-session format and in a modified format (CSP-Plus), which added 2 sessions that included adolescents. Over 2 annual cohorts, 321 families were enrolled and randomly assigned to either the CSP, CSP-Plus, or minimal-contact control condition. Pretest, posttest, 1-year follow-up, and 2-year follow-up survey data on parenting as well as youth school bonding, social skills, and problem behaviors were collected from parents and youth (94% retention). Extending prior examinations of posttest outcomes, intent-to-treat regression analyses tested for intervention effects at the 2 follow-up assessments, and growth curve analyses examined experimental condition differences in yearly change across time. Separate exploratory tests of moderation by youth gender, youth conduct problems, and family economic hardship also were conducted. Out of 52 regression models predicting 1- and 2-year follow-up outcomes, only 2 out of 104 possible intervention effects were statistically significant. No statistically significant intervention effects were found in the growth curve analyses. Tests of moderation also showed few statistically significant effects. Because CSP already is in widespread use, findings have direct implications for practice. Specifically, findings suggest that the program may not be efficacious with parents of adolescents in a selective prevention context and may reveal the limits of brief, general parent training for achieving outcomes with parents of adolescents. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
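
    For context on the "2 out of 104" figure: treating the tests as independent (which they are not exactly), chance alone at alpha = 0.05 would be expected to flag about five of 104 contrasts, so two significant effects are fully consistent with no true intervention effect:

        # Expected false positives among 104 tests at alpha = 0.05, assuming
        # independence (a simplification; the trial's tests are correlated).
        from scipy import stats

        n_tests, alpha = 104, 0.05
        print(f"expected false positives: {n_tests * alpha:.1f}")
        print(f"P(at least 2 significant | all null): "
              f"{1 - stats.binom.cdf(1, n_tests, alpha):.2f}")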

  4. Time trends of persistent organic pollutants in Sweden during 1993-2007 and relation to age, gender, body mass index, breast-feeding and parity.

    PubMed

    Hardell, Elin; Carlberg, Michael; Nordström, Marie; van Bavel, Bert

    2010-09-15

    Persistent organic pollutants (POPs) are lipophilic chemicals that bioaccumulate. Most of them were restricted or banned in the 1970s and 1980s to protect human health and the environment. The main source for humans is dietary intake of dairy products, meat and fish. Little data exist on changes in the concentrations of POPs in the Swedish population over time. The aim was to study whether the concentrations of polychlorinated biphenyls (PCBs), DDE, hexachlorobenzene (HCB) and chlordanes changed in the Swedish population during 1993-2007, and to identify factors that may influence the concentrations. During 1993-2007, samples from 537 controls in different human cancer studies were collected and analysed. Background information such as body mass index, breast-feeding and parity was assessed by questionnaires. The Wilcoxon rank-sum test was used to analyse the explanatory factors specimen (blood or adipose tissue), gender, BMI, total breast-feeding and parity in relation to POPs. Time trends for POPs were analysed using linear regression analysis, adjusted for specimen, gender, BMI and age. The concentration decreased for all POPs during 1993-2007. The annual change was statistically significant for the sum of PCBs (-7.2%), HCB (-8.8%), DDE (-13.5%) and the sum of chlordanes (-10.3%). BMI and age were determinants of the concentrations. Cumulative breast-feeding >8 months gave statistically significantly lower concentrations of the sum of PCBs, DDE and the sum of chlordanes. Parity with >2 children yielded a statistically significantly lower sum of PCBs. All the studied POPs decreased during the time period, probably due to restrictions on their use. Copyright 2010 Elsevier B.V. All rights reserved.
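
    A worked sketch of how an annual change such as -7.2% falls out of a log-linear time trend: regress the log concentration on calendar year and convert the slope b to a percentage change per year as 100(e^b - 1). The data below are synthetic:

        # Log-linear trend estimation; true annual change set to -7.2%.
        import numpy as np

        rng = np.random.default_rng(2)
        year = rng.uniform(1993, 2007, size=200)
        log_conc = (5.0 + np.log(1 - 0.072) * (year - 1993)
                    + rng.normal(0, 0.3, size=200))

        b, a = np.polyfit(year - 1993, log_conc, 1)   # slope on the log scale
        print(f"estimated annual change: {100 * np.expm1(b):.1f}%")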

  5. Is using multiple imputation better than complete case analysis for estimating a prevalence (risk) difference in randomized controlled trials when binary outcome observations are missing?

    PubMed

    Mukaka, Mavuto; White, Sarah A; Terlouw, Dianne J; Mwapasa, Victor; Kalilani-Phiri, Linda; Faragher, E Brian

    2016-07-22

    Missing outcomes can seriously impair the ability to make correct inferences from randomized controlled trials (RCTs). Complete case (CC) analysis is commonly used, but it reduces sample size and is perceived to lead to reduced statistical efficiency of estimates while increasing the potential for bias. As multiple imputation (MI) methods preserve sample size, they are generally viewed as the preferred analytical approach. We examined this assumption, comparing the performance of CC and MI methods in determining risk difference (RD) estimates in the presence of missing binary outcomes. We conducted simulation studies of 5000 simulated data sets, each with 50 imputations, of RCTs with one primary follow-up endpoint at different underlying levels of RD (3-25%) and missing outcomes (5-30%). For outcomes missing at random (MAR) or missing completely at random (MCAR), CC estimates generally remained unbiased, achieved precision similar to or better than MI methods, and attained high statistical coverage. Missing not at random (MNAR) scenarios yielded invalid inferences with both methods. Bias in effect size estimates was reduced in MI methods by always including group membership, even if this was unrelated to missingness. Surprisingly, under MAR and MCAR conditions in the assessed scenarios, MI offered no statistical advantage over CC methods. While MI must inherently accompany CC methods for intention-to-treat analyses, these findings endorse CC methods for per-protocol risk difference analyses in these conditions. These findings provide an argument for the CC approach always complementing MI analyses, with the usual caveat that the validity of the mechanism for missingness be thoroughly discussed. More importantly, researchers should strive to collect as much data as possible.
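
    A stripped-down version of the simulation idea, checking that the complete-case risk difference stays unbiased under MCAR; the paper's full study also ran multiple imputation, which is omitted here for brevity:

        # Two-arm trial, true RD = 0.10, 20% of outcomes missing completely
        # at random; complete-case estimate averaged over many replicates.
        import numpy as np

        rng = np.random.default_rng(3)
        n_per_arm, n_sims = 500, 2000
        estimates = []
        for _ in range(n_sims):
            control = rng.binomial(1, 0.20, n_per_arm).astype(float)
            treat = rng.binomial(1, 0.30, n_per_arm).astype(float)
            control[rng.random(n_per_arm) < 0.2] = np.nan   # MCAR missingness
            treat[rng.random(n_per_arm) < 0.2] = np.nan
            estimates.append(np.nanmean(treat) - np.nanmean(control))

        print(f"mean CC estimate = {np.mean(estimates):.3f} (truth = 0.10)")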

  6. Predicting clinical trial results based on announcements of interim analyses

    PubMed Central

    2014-01-01

    Background Announcements of interim analyses of a clinical trial convey information about the results beyond the trial’s Data Safety Monitoring Board (DSMB). The amount of information conveyed may be minimal, but the fact that none of the trial’s stopping boundaries has been crossed implies that the experimental therapy is neither extremely effective nor hopeless. Predicting the success of an ongoing trial is of interest to the trial’s sponsor, the medical community, pharmaceutical companies, and investors. We determine the probability of trial success by quantifying only the publicly available information from interim analyses of an ongoing trial. We illustrate our method in the context of National Surgical Adjuvant Breast and Bowel Project (NSABP) trial C-08. Methods We simulated trials based on the publicly available specifics of the NSABP C-08 protocol. We quantified the uncertainty around the treatment effect using prior weights for the various possibilities in light of other colon cancer studies and other studies of the investigational agent, bevacizumab. We considered alternative prior distributions. Results Subsequent to the trial’s third interim analysis, our predictive probabilities were: that the trial would eventually be successful, 48.0%; that it would stop for futility, 7.4%; and that it would continue to completion without statistical significance, 44.5%. The actual trial continued to completion without statistical significance. Conclusions Announcements of interim analyses provide information outside the DSMB’s sphere of confidentiality. This information is potentially helpful to clinical trial prognosticators. ‘Information leakage’ from standard interim analyses such as in NSABP C-08 is conventionally viewed as acceptable even though it may be quite revealing. Whether leakage from more aggressive types of adaptations is acceptable should be assessed at the design stage. PMID:24607270
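
    A hypothetical Monte Carlo sketch of the prediction logic (the prior, sample sizes, and boundary below are invented, not those of NSABP C-08): draw an effect from a prior, keep only draws consistent with the interim announcement, and record how often the completed trial reaches significance:

        # Invented design parameters; a simplification of Bayesian predictive
        # probability, not the paper's actual model.
        import numpy as np

        rng = np.random.default_rng(4)
        n_interim, n_final = 1200, 2700      # patients per arm (hypothetical)
        sd = 1.0                             # known outcome SD, for simplicity
        kept = wins = 0
        for _ in range(20000):
            effect = rng.normal(0.08, 0.06)  # prior on the standardized effect
            se_i = sd * np.sqrt(2 / n_interim)
            interim_mean = rng.normal(effect, se_i)
            if abs(interim_mean / se_i) > 2.5:
                continue                     # a stopping boundary would have fired
            kept += 1
            n_extra = n_final - n_interim
            extra_mean = rng.normal(effect, sd * np.sqrt(2 / n_extra))
            final_mean = (n_interim * interim_mean + n_extra * extra_mean) / n_final
            wins += final_mean / (sd * np.sqrt(2 / n_final)) > 1.96

        print(f"P(success | no boundary crossed) = {wins / kept:.2f}")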

  7. Polarimetry based partial least square classification of ex vivo healthy and basal cell carcinoma human skin tissues.

    PubMed

    Ahmad, Iftikhar; Ahmad, Manzoor; Khan, Karim; Ikram, Masroor

    2016-06-01

    Optical polarimetry was employed for the assessment of ex vivo healthy and basal cell carcinoma (BCC) tissue samples from human skin. Polarimetric analyses revealed that depolarization and retardance for the healthy tissue group were significantly higher (p<0.001) than for the BCC tissue group. Histopathology indicated that these differences partially arise from BCC-related characteristic changes in tissue morphology. Wilks' lambda statistics demonstrated the potential of all investigated polarimetric properties for computer-assisted classification of the two tissue groups. Based on differences in polarimetric properties, partial least squares (PLS) regression classified the samples with 100% accuracy, sensitivity and specificity. These findings indicate that optical polarimetry together with PLS statistics holds promise for automated pathology classification. Copyright © 2016 Elsevier B.V. All rights reserved.
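
    A hedged sketch of PLS used as a two-group classifier, as described: encode the tissue groups as 0/1, fit a PLS regression on polarimetric features, and threshold the continuous prediction at 0.5. Feature values are simulated, not the study's measurements:

        # PLS discriminant analysis in miniature; feature columns
        # (depolarization, retardance, diattenuation) are illustrative.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)
        healthy = rng.normal([0.8, 30.0, 0.1], [0.05, 3.0, 0.02], size=(20, 3))
        bcc = rng.normal([0.6, 20.0, 0.1], [0.05, 3.0, 0.02], size=(20, 3))
        X = np.vstack([healthy, bcc])
        y = np.array([0] * 20 + [1] * 20)     # 0 = healthy, 1 = BCC

        pls = PLSRegression(n_components=2).fit(X, y)
        pred = (pls.predict(X).ravel() > 0.5).astype(int)
        print(f"training accuracy: {(pred == y).mean():.2%}")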

  8. Differentiation of chocolates according to the cocoa's geographical origin using chemometrics.

    PubMed

    Cambrai, Amandine; Marcic, Christophe; Morville, Stéphane; Sae Houer, Pierre; Bindler, Françoise; Marchioni, Eric

    2010-02-10

    The determination of the geographical origin of the cocoa used to produce chocolate was assessed through the analysis of the volatile compounds of chocolate samples. Statistical processing of the volatile profiles by multivariate analyses tended to form independent groups for both Africa and Madagascar, even if some of the chocolate samples analyzed appeared in a mixed zone together with those from America. This analysis also allowed a clear separation between Caribbean chocolates and those from other origins. Eight compounds (such as linalool or (E,E)-2,4-decadienal) characteristic of chocolate's different geographical origins were also identified. The method described in this work (hydrodistillation, GC analysis, and statistical treatment) may improve the control of the geographical origin of chocolate during its long production process.

  9. Measuring pain phenomena after spinal cord injury: Development and psychometric properties of the SCI-QOL Pain Interference and Pain Behavior assessment tools.

    PubMed

    Cohen, Matthew L; Kisala, Pamela A; Dyson-Hudson, Trevor A; Tulsky, David S

    2018-05-01

    To develop modern patient-reported outcome measures that assess pain interference and pain behavior after spinal cord injury (SCI). Grounded-theory based qualitative item development; large-scale item calibration field-testing; confirmatory factor analyses; graded response model item response theory analyses; statistical linking techniques to transform scores to the Patient Reported Outcome Measurement Information System (PROMIS) metric. Five SCI Model Systems centers and one Department of Veterans Affairs medical center in the United States. Adults with traumatic SCI. N/A. Spinal Cord Injury - Quality of Life (SCI-QOL) Pain Interference item bank, SCI-QOL Pain Interference short form, and SCI-QOL Pain Behavior scale. Seven hundred fifty-seven individuals with traumatic SCI completed 58 items addressing various aspects of pain. Items were then separated by whether they assessed pain interference or pain behavior, and poorly functioning items were removed. Confirmatory factor analyses confirmed that each set of items was unidimensional, and item response theory analyses were used to estimate slopes and thresholds for the items. Ultimately, 7 items (4 from PROMIS) comprised the Pain Behavior scale and 25 items (18 from PROMIS) comprised the Pain Interference item bank. Ten of these 25 items were selected to form the Pain Interference short form. The SCI-QOL Pain Interference item bank and the SCI-QOL Pain Behavior scale demonstrated robust psychometric properties. The Pain Interference item bank is available as a computer adaptive test or short form for research and clinical applications, and scores are transformed to the PROMIS metric.

  10. Proper assessment of the JFK assassination bullet lead evidence from metallurgical and statistical perspectives.

    PubMed

    Randich, Erik; Grant, Patrick M

    2006-07-01

    The bullet evidence in the JFK assassination investigation was reexamined from metallurgical and statistical standpoints. The questioned specimens are composed of soft lead, possibly from full-metal-jacketed Mannlicher-Carcano (MC), 6.5-mm ammunition. During lead refining, contaminant elements are removed to specified levels for a desired alloy or composition. Microsegregation of trace and minor elements during lead casting and processing can account for the experimental variabilities measured in various evidentiary and comparison samples by laboratory analysts. Thus, elevated concentrations of antimony and copper at crystallographic grain boundaries, the widely varying sizes of grains in MC bullet lead, and the 5-60 mg bullet samples analyzed for assassination intelligence effectively resulted in operational sampling error for the analyses. This deficiency was not considered in the original data interpretation and resulted in an invalid conclusion in favor of the single-bullet theory of the assassination. Alternate statistical calculations, based on the historic analytical data, incorporating weighted averaging and propagation of experimental uncertainties also considerably weaken support for the single-bullet theory. In effect, this assessment of the material composition of the lead specimens from the assassination concludes that the extant evidence is consistent with any number between two and five rounds fired in Dealey Plaza during the shooting.
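
    The "weighted averaging and propagation of experimental uncertainties" step in miniature, combining replicate antimony measurements with inverse-variance weights; the numbers are invented stand-ins, not the historical assassination data:

        # Inverse-variance weighted mean with propagated uncertainty.
        import numpy as np

        sb = np.array([833.0, 797.0, 602.0])      # replicate Sb concentrations, ppm
        sigma = np.array([9.0, 7.0, 5.0])         # their reported uncertainties

        w = 1.0 / sigma**2
        mean = np.sum(w * sb) / np.sum(w)
        mean_sigma = np.sqrt(1.0 / np.sum(w))     # propagated standard uncertainty
        print(f"weighted mean = {mean:.0f} +/- {mean_sigma:.0f} ppm")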

  11. Proper Assessment of the JFK Assassination Bullet Lead Evidence from Metallurgical and Statistical Perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randich, E; Grant, P M

    2006-08-29

    The bullet evidence in the JFK assassination investigation was reexamined from metallurgical and statistical standpoints. The questioned specimens are composed of soft lead, possibly from full-metal-jacketed Mannlicher-Carcano, 6.5-mm ammunition. During lead refining, contaminant elements are removed to specified levels for a desired alloy or composition. Microsegregation of trace and minor elements during lead casting and processing can account for the experimental variabilities measured in various evidentiary and comparison samples by laboratory analysts. Thus, elevated concentrations of antimony and copper at crystallographic grain boundaries, the widely varying sizes of grains in Mannlicher-Carcano bullet lead, and the 5-60 mg bullet samples analyzed for assassination intelligence effectively resulted in operational sampling error for the analyses. This deficiency was not considered in the original data interpretation and resulted in an invalid conclusion in favor of the single-bullet theory of the assassination. Alternate statistical calculations, based on the historic analytical data, incorporating weighted averaging and propagation of experimental uncertainties also considerably weaken support for the single-bullet theory. In effect, this assessment of the material composition of the lead specimens from the assassination concludes that the extant evidence is consistent with any number between two and five rounds fired in Dealey Plaza during the shooting.

  12. Body fat indices and biomarkers of inflammation: a cross-sectional study with implications for obesity and peri-implant oral health.

    PubMed

    Elangovan, Satheesh; Brogden, Kim A; Dawson, Deborah V; Blanchette, Derek; Pagan-Rivera, Keyla; Stanford, Clark M; Johnson, Georgia K; Recker, Erica; Bowers, Rob; Haynes, William G; Avila-Ortiz, Gustavo

    2014-01-01

    To examine the relationships between three measures of body fat (body mass index [BMI], waist circumference [WC], and total body fat percent) and markers of inflammation around dental implants in stable periodontal maintenance patients. Seventy-three subjects were enrolled in this cross-sectional assessment. The study visit consisted of a physical examination that included anthropometric measurements of body composition (BMI, WC, body fat %); intraoral assessments were performed (full-mouth plaque index, periodontal and peri-implant comprehensive examinations) and peri-implant sulcular fluid (PISF) was collected on the study implants. Levels of interleukin (IL)-1α, IL-1β, IL-6, IL-8, IL-10, IL-12, IL-17, tumor necrosis factor-α, C-reactive protein, osteoprotegerin, leptin, and adiponectin in the PISF were measured using multiplex proteomic immunoassays. Correlation analysis with body fat measures was then performed using appropriate statistical methods. After adjustments for covariates, regression analyses revealed a statistically significant correlation between IL-1β in PISF and WC (R = 0.33; P = .0047). In this study of stable periodontal maintenance patients, a modest but statistically significant positive correlation was observed between the levels of IL-1β, a major proinflammatory cytokine, in PISF and WC, a reliable measure of central obesity.

  13. Evaluating and interpreting cross-taxon congruence: Potential pitfalls and solutions

    NASA Astrophysics Data System (ADS)

    Gioria, Margherita; Bacaro, Giovanni; Feehan, John

    2011-05-01

    Characterizing the relationship between different taxonomic groups is critical to identifying potential surrogates for biodiversity. Previous studies have shown that cross-taxa relationships are generally weak and/or inconsistent. The difficulty in finding predictive patterns has often been attributed to the spatial and temporal scales of these studies and to differences in the measure used to evaluate such relationships (species richness versus composition). However, the choice of the analytical approach used to evaluate cross-taxon congruence inevitably represents a major source of variation. Here, we describe a range of methods that can be used to comprehensively assess cross-taxa relationships. To do so, we used data for two taxonomic groups, wetland plants and water beetles, collected from 54 farmland ponds in Ireland. Specifically, we used the Pearson correlation and rarefaction curves to analyse patterns in species richness, while Mantel tests, Procrustes analysis, and co-correspondence analysis were used to evaluate congruence in species composition. We compared the results of these analyses and describe some of the potential pitfalls associated with each of these statistical approaches. Cross-taxon congruence was moderate to strong, depending on the choice of the analytical approach, on the nature of the response variable, and on local and environmental conditions. Our findings indicate that multiple approaches and measures of community structure are required for a comprehensive assessment of cross-taxa relationships. In particular, we show that the selection of surrogate taxa in conservation planning should not be based on a single statistic expressing the degree of correlation in species richness or composition. Potential solutions to the analytical issues associated with the assessment of cross-taxon congruence are provided, and the implications of our findings for the selection of surrogates for biodiversity are discussed.
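
    One of the congruence methods listed, the Mantel test, is easy to state in code: correlate two distance matrices and assess significance by permuting the rows and columns of one of them. The community matrices below are random placeholders, not the Irish pond data:

        # Bare-bones Mantel test with a one-tailed permutation p-value.
        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(6)
        plants = rng.poisson(2.0, size=(54, 30))    # 54 ponds x 30 plant species
        beetles = rng.poisson(2.0, size=(54, 25))   # 54 ponds x 25 beetle species
        d1, d2 = squareform(pdist(plants)), squareform(pdist(beetles))

        iu = np.triu_indices_from(d1, k=1)          # upper-triangle entries
        r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]

        n_perm, count = 999, 0
        for _ in range(n_perm):
            p = rng.permutation(d1.shape[0])        # permute one matrix only
            count += np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= r_obs
        print(f"Mantel r = {r_obs:.3f}, p = {(count + 1) / (n_perm + 1):.3f}")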

  14. The effectiveness and safety of antifibrinolytics in patients with acute intracranial haemorrhage: statistical analysis plan for an individual patient data meta-analysis.

    PubMed

    Ker, Katharine; Prieto-Merino, David; Sprigg, Nikola; Mahmood, Abda; Bath, Philip; Kang Law, Zhe; Flaherty, Katie; Roberts, Ian

    2017-01-01

    Introduction: The Antifibrinolytic Trialists Collaboration aims to increase knowledge about the effectiveness and safety of antifibrinolytic treatment by conducting individual patient data (IPD) meta-analyses of randomised trials. This article presents the statistical analysis plan for an IPD meta-analysis of the effects of antifibrinolytics for acute intracranial haemorrhage. Methods: The protocol for the IPD meta-analysis has been registered with PROSPERO (CRD42016052155). We will conduct an individual patient data meta-analysis of randomised controlled trials with 1000 patients or more assessing the effects of antifibrinolytics in acute intracranial haemorrhage. We will assess the effect on two co-primary outcomes: 1) death in hospital at end of trial follow-up, and 2) death in hospital or dependency at end of trial follow-up. The co-primary outcomes will be limited to patients treated within three hours of injury or stroke onset. We will report treatment effects using odds ratios and 95% confidence intervals. We will use logistic regression models to examine how the effect of antifibrinolytics varies by time to treatment, severity of intracranial bleeding, and age. We will also examine the effect of antifibrinolytics on secondary outcomes including death, dependency, vascular occlusive events, seizures, and neurological outcomes. Secondary outcomes will be assessed in all patients irrespective of time of treatment. All analyses will be conducted on an intention-to-treat basis. Conclusions: This IPD meta-analysis will examine important clinical questions about the effects of antifibrinolytic treatment in patients with intracranial haemorrhage that cannot be answered using aggregate data. With IPD we can examine how effects vary by time to treatment, bleeding severity, and age, to gain a better understanding of the balance of benefits and harms on which to base recommendations for practice.
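
    A sketch of how the planned effect estimates could be produced, assuming a simple logistic model with treatment only (the real analysis adds covariates and interactions with time to treatment): exponentiate the coefficient into an odds ratio with a 95% CI. Trial data below are simulated:

        # Odds ratio and 95% CI from a logistic regression; data are synthetic
        # with a true OR of exp(-0.2), roughly 0.82.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 2000
        treat = rng.integers(0, 2, n)
        logit = -1.0 - 0.2 * treat
        death = rng.random(n) < 1 / (1 + np.exp(-logit))
        df = pd.DataFrame({"death": death.astype(int), "treat": treat})

        fit = smf.logit("death ~ treat", data=df).fit(disp=False)
        or_ = np.exp(fit.params["treat"])
        lo, hi = np.exp(fit.conf_int().loc["treat"])
        print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")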

  15. Derivation and validation of the prediabetes self-assessment screening score after acute pancreatitis (PERSEUS).

    PubMed

    Soo, Danielle H E; Pendharkar, Sayali A; Jivanji, Chirag J; Gillies, Nicola A; Windsor, John A; Petrov, Maxim S

    2017-10-01

    Approximately 40% of patients develop abnormal glucose metabolism after a single episode of acute pancreatitis. This study aimed to develop and validate a prediabetes self-assessment screening score for patients after acute pancreatitis. Data from non-overlapping training (n=82) and validation (n=80) cohorts were analysed. Univariate logistic and linear regression identified variables associated with prediabetes after acute pancreatitis. Multivariate logistic regression was used to develop the score, which ranges from 0 to 215. The area under the receiver-operating characteristic curve (AUROC), the Hosmer-Lemeshow χ2 statistic, and calibration plots were used to assess model discrimination and calibration. The developed score was validated using data from the validation cohort. The score had an AUROC of 0.88 (95% CI, 0.80-0.97) and a Hosmer-Lemeshow χ2 statistic of 5.75 (p=0.676). Patients with a score of ≥75 had a 94.1% probability of having prediabetes, and were 29 times more likely to have prediabetes than those with a score of <75. The AUROC in the validation cohort was 0.81 (95% CI, 0.70-0.92) and the Hosmer-Lemeshow χ2 statistic was 5.50 (p=0.599). The score showed good calibration in both cohorts. The developed and validated score, called PERSEUS, is the first instrument to identify individuals who are at high risk of developing abnormal glucose metabolism following an episode of acute pancreatitis. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
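
    The two model checks named above in sketch form: AUROC via scikit-learn and a hand-rolled Hosmer-Lemeshow statistic over deciles of predicted risk. Scores and outcomes are simulated, not the PERSEUS data:

        # Discrimination (AUROC) and calibration (Hosmer-Lemeshow chi-square,
        # df = 8 for 10 groups) on simulated, well-calibrated scores.
        import numpy as np
        from scipy import stats
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)
        p_hat = rng.uniform(0.05, 0.95, 300)        # predicted risks
        y = (rng.random(300) < p_hat).astype(int)   # outcomes drawn from them

        print(f"AUROC = {roc_auc_score(y, p_hat):.2f}")

        edges = np.quantile(p_hat, np.linspace(0, 1, 11))
        group = np.digitize(p_hat, edges[1:-1])     # decile index, 0..9
        hl = 0.0
        for g in range(10):
            m = group == g
            obs, exp, n_g = y[m].sum(), p_hat[m].sum(), m.sum()
            hl += (obs - exp) ** 2 / (exp * (1 - exp / n_g))
        print(f"Hosmer-Lemeshow chi2 = {hl:.2f}, "
              f"p = {1 - stats.chi2.cdf(hl, df=8):.3f}")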

  16. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research requires not only advanced professional knowledge but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ2 = 12.03, p<0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ2 = 21.22, p<0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ2 = 93.26, p<0.001; 92.7% [945/1,019] compared to 78.2% [1,023/1,309]) and interpretation (χ2 = 27.26, p<0.001; 9.7% [99/1,019] compared to 4.3% [56/1,309]), although some serious defects persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement in study design. Retrospective designs are the most often used, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824

  17. [Patient first - The impact of characteristics of target populations on decisions about therapy effectiveness of complex interventions: Psychological variables to assess effectiveness in interdisciplinary multimodal pain therapy].

    PubMed

    Kaiser, Ulrike; Sabatowski, Rainer; Balck, Friedrich

    2017-08-01

    The assessment of treatment effectiveness in public health settings is ensured by indicators that reflect the changes caused by specific interventions. These indicators are also applied in benchmarking systems. The selection of constructs should be guided by their relevance for affected patients (patient-reported outcomes). Interdisciplinary multimodal pain therapy (IMPT) is a complex intervention based on a biopsychosocial understanding of chronic pain. For quality assurance purposes, psychological parameters (depression, general anxiety, health-related quality of life) are included in standardized therapy assessment in pain medicine (KEDOQ), which can also be used for comparative analyses in a benchmarking system. The aim of the present study was to investigate the relevance of depressive symptoms, general anxiety and mental quality of life in patients undergoing IMPT under real-life conditions. In this retrospective, one-armed, exploratory observational study, we used secondary data from the routine documentation of IMPT in usual care, applying several variables of the German Pain Questionnaire and the facility's comprehensive basic documentation. 352 participants treated with IMPT (from 2006 to 2010) were included, and follow-up was performed over two years with six assessments. Because of statistically heterogeneous characteristics, a combination of factor and cluster analyses was applied to build subgroups. These subgroups were explored to identify differences in depressive symptoms (HADS-D), general anxiety (HADS-A), and mental quality of life (SF-36 PSK) at the time of therapy admission, and their development was estimated by means of effect sizes. Analyses were performed using SPSS 21.0®. Six subgroups were derived, which mainly proved to be clinically and psychologically normal, with the exception of one subgroup that consistently showed psychological impairment on all three parameters. The follow-up of the total study population revealed medium or large effects; these changes were consistently driven by two subgroups, while the other four showed little or no change. In summary, only a small proportion of the target population (20%) demonstrated clinically relevant scores on the psychological parameters applied. When selecting indicators for quality assurance, the heterogeneity of the target populations as well as conceptual and methodological aspects should be considered. The characteristics of the intended parameters, along with the clinical and personal relevance of indicators for patients, should be investigated by specific procedures such as patient surveys and statistical analyses. Copyright © 2017. Published by Elsevier GmbH.

  18. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  19. Relationship between sitting volleyball performance and field fitness of sitting volleyball players in Korea

    PubMed Central

    Jeoung, Bogja

    2017-01-01

    The purpose of this study was to evaluate the relationship between sitting volleyball performance and the field fitness of sitting volleyball players. Forty-five elite sitting volleyball players participated in 10 field fitness tests. Additionally, the players’ head coach and coach assessed their volleyball performance (receive and defense, block, attack, and serve). Data were analyzed with SPSS software version 21 using correlation and regression analyses, and the significance level was set at P<0.05. The results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, sprint, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on the players’ abilities to attack, serve, and block. Grip strength, t-test, speed, and agility showed a statistically significant relationship with the players’ skill at defense and receive. Our results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on volleyball performance. PMID:29326896

  20. Application of multivariate statistical techniques in microbial ecology

    PubMed Central

    Paliy, O.; Shankar, V.

    2016-01-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological datasets. The effect has been especially noticeable in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amounts of data, powerful statistical techniques of multivariate analysis are well suited to analyze and interpret these datasets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular dataset. In this review we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and dataset structure. PMID:26786791
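
    As a minimal example of one exploratory technique such a review covers: PCA ordination of samples from a fabricated taxa-abundance table after a Hellinger transform, a common preprocessing step for community count data:

        # PCA ordination of simulated community data; 40 samples x 100 taxa.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(9)
        counts = rng.poisson(5.0, size=(40, 100))
        hellinger = np.sqrt(counts / counts.sum(axis=1, keepdims=True))

        scores = PCA(n_components=2).fit_transform(hellinger)
        print(scores[:3])        # sample coordinates on the first two axes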

  1. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images compared with plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analyses along with Student's t-test were performed to evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant differences were found between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster model, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.

  2. Quantifying variation in speciation and extinction rates with clade data.

    PubMed

    Paradis, Emmanuel; Tedesco, Pablo A; Hugueny, Bernard

    2013-12-01

    High-level phylogenies are very common in evolutionary analyses, although they are often treated as incomplete data. Here, we provide statistical tools to analyze what we name "clade data," which are the ages of clades together with their numbers of species. We develop a general approach for the statistical modeling of variation in speciation and extinction rates, including temporal variation, unknown variation, and linear and nonlinear modeling. We show how this approach can be generalized to a wide range of situations, including testing the effects of life-history traits and environmental variables on diversification rates. We report the results of an extensive simulation study assessing the performance of some of the statistical tests presented here, as well as of the estimators of speciation and extinction rates. These latter results suggest that extinction rates can be estimated correctly even in the absence of fossils. An example with data on fish is presented. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
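
    A toy version of inference from clade data, using the crown-clade moment estimator of net diversification, r = ln(n/2)/t under zero extinction (in the style of Magallón and Sanderson, 2001), which is far simpler than the authors' likelihood approach; clade ages and richnesses below are invented:

        # Crown-clade net diversification from (age, richness) pairs.
        import numpy as np

        ages = np.array([25.0, 40.0, 12.0, 60.0])     # crown ages, Myr
        richness = np.array([150, 900, 30, 2500])     # extant species counts

        r_hat = np.log(richness / 2.0) / ages         # per-lineage rate, /Myr
        for t, n, r in zip(ages, richness, r_hat):
            print(f"age {t:5.1f} Myr, n = {n:5d}  ->  r = {r:.3f} per Myr")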

  3. Lifetime use of cannabis from longitudinal assessments, cannabinoid receptor (CNR1) variation, and reduced volume of the right anterior cingulate

    PubMed Central

    Hill, Shirley Y.; Sharma, Vinod; Jones, Bobby L.

    2016-01-01

    Lifetime measures of cannabis use and co-occurring exposures were obtained from a longitudinal cohort followed an average of 13 years at the time they received a structural MRI scan. MRI scans were analyzed for 88 participants (mean age=25.9 years), 34 of whom were regular users of cannabis. Whole brain voxel based morphometry analyses (SPM8) were conducted using 50 voxel clusters at p=0.005. Controlling for age, familial risk, and gender, we found reduced volume in Regular Users compared to Non-Users, in the lingual gyrus, anterior cingulum (right and left), and the rolandic operculum (right). The right anterior cingulum reached family-wise error statistical significance at p=0.001, controlling for personal lifetime use of alcohol and cigarettes and any prenatal exposures. CNR1 haplotypes were formed from four CNR1 SNPs (rs806368, rs1049353, rs2023239, and rs6454674) and tested with level of cannabis exposure to assess their interactive effects on the lingual gyrus, cingulum (right and left) and rolandic operculum, regions showing cannabis exposure effects in the SPM8 analyses. These analyses used mixed model analyses (SPSS) to control for multiple potentially confounding variables. Level of cannabis exposure was associated with decreased volume of the right anterior cingulum and showed interaction effects with haplotype variation. PMID:27500453

  4. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  5. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    NASA Astrophysics Data System (ADS)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and introduce an elementary statistics course based on simulation-based inference. To assess our statistics class, we adapted the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy and is generally accepted as a measure of statistical literacy. We also introduce a new teaching method for the elementary statistics class: in contrast to the traditional course, we use a simulation-based inference method to conduct hypothesis testing. The literature has shown that this teaching method works very well in increasing students' understanding of statistics.

  6. Performance assessment in a flight simulator test—Validation of a space psychology methodology

    NASA Astrophysics Data System (ADS)

    Johannes, B.; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Goeters, Klaus-Martin; Maschke, Peter; Stelling, Dirk; Eißfeldt, Hinnerk

    2007-02-01

    The objective assessment of operator performance in hand-controlled docking of a spacecraft on a space station has a 30-year tradition and is well established. In recent years the performance assessment was successfully combined with a psycho-physiological approach for the objective assessment of levels of physiological arousal and psychological load. These methods are based on statistical reference data. To enhance the statistical power of the evaluation methods, both were implemented in a comparable terrestrial task: the DLR flight simulator test used in the selection procedure for ab initio pilot applicants to civil airlines. In the first evaluation study, 134 male subjects were analysed. Subjects underwent a flight simulator test comprising three tasks, which were evaluated by instructors applying well-established and standardised rating scales. The principles of the performance algorithms of the docking training were adapted for the automated flight performance assessment; they are presented here. The increased rate of human error under instrument flight conditions without visual feedback required a manoeuvre recognition algorithm to run before the deviation of the flown track from the given task elements was calculated. Each manoeuvre had to be evaluated independently of earlier failures. The expert-rated performance showed a highly significant correlation with the automatically calculated performance for each of the three tasks: r=.883, r=.874, and r=.872, respectively. The automated algorithm successfully assessed flight performance. This new method may find a wide range of future applications in aviation and space psychology.

  7. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapist and psychologist consultations. Additionally, a set of electrophysiological examinations was performed (registration of the N1, P1, N2, P2 and P300 waves) along with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). The children then took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; the psychoacoustic tests were repeated and P300 cortical potentials were recorded again. Statistical analyses were then performed. The analyses revealed that auditory training in patients with dyslalia and other central auditory disorders is highly effective. Auditory training may be an effective therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  8. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  9. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...

  10. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  11. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials.

    PubMed

    Hopewell, Sally; Witt, Claudia M; Linde, Klaus; Icke, Katja; Adedire, Olubusola; Kirtley, Shona; Altman, Douglas G

    2018-01-11

    Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on the reporting of primary outcome(s) and statistical analyses within reports of randomised trials. Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking about changes made to their manuscript as part of the peer review process. Our main outcomes were to assess: (1) the nature and extent of changes made as part of the peer review process in relation to the reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics. Of 893 corresponding authors invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); the median sample size was 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), of which the direction of the treatment effect was statistically significant in 49%. On a 1-10 Likert scale, the majority of respondents were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivations being to improve the statistical methods (n = 38; 55%) or to avoid rejection (n = 30; 44%). Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the primary outcome, or funding source. Thirty-six percent (n = 94) of authors had been asked to include additional analyses that had not been included in the original manuscript; in 77% (n = 72) these were not pre-specified in the protocol. Twenty-three percent (n = 60) had been asked to modify their overall conclusion, usually (n = 53; 88%) to provide a more cautious one. Overall, most changes made as a result of the peer review process improved the published manuscript; there was little evidence of a negative impact in terms of post hoc changes to the primary outcome. However, some suggested changes might be considered inappropriate, such as unplanned additional analyses, and should be discouraged.

  12. Police response to domestic violence: making decisions about risk and risk management.

    PubMed

    Perez Trujillo, Monica; Ross, Stuart

    2008-04-01

    Assessing and responding to risk are key elements in how police respond to domestic violence. However, relatively little is known about the way police make judgments about the risks associated with domestic violence and how these judgments influence their actions. This study examines police decisions about risk in domestic violence incidents when using a risk assessment instrument. Based on a sample of 501 risk assessments completed by police in Australia, this study shows that a limited number of items on the risk assessment instrument are important in police officers' decisions about risk. Statistical analyses show that the victim's level of fear contributes to police officers' judgment on the level of risk and their decisions on which risk management strategy should be used. These findings suggest that research on police responses to domestic violence needs to pay greater attention to situational dynamics and the task requirements of risk-based decision making.

  13. Application of a faith-based integration tool to assess mental and physical health interventions.

    PubMed

    Saunders, Donna M; Leak, Jean; Carver, Monique E; Smith, Selina A

    2017-01-01

    To build on current research involving faith-based interventions (FBIs) for addressing mental and physical health, this study a) reviewed the extent to which relevant publications integrate faith concepts with health and b) initiated analysis of the degree of FBI integration with intervention outcomes. Derived from a systematic search of articles published between 2007 and 2017, 36 studies were assessed with a Faith-Based Integration Assessment Tool (FIAT) to quantify faith-health integration. Basic statistical procedures were employed to determine the association of faith-based integration with intervention outcomes. The assessed studies possessed (on average) moderate, inconsistent integration because of poor use of faith measures, and moderate, inconsistent use of faith practices. Analysis procedures for determining the effect of FBI integration on intervention outcomes were inadequate for formulating practical conclusions. Regardless of integration, interventions were associated with beneficial outcomes. To determine the link between FBI integration and intervention outcomes, additional analyses are needed.

  14. Probabilistic Survivability Versus Time Modeling

    NASA Technical Reports Server (NTRS)

    Joyner, James J., Sr.

    2016-01-01

    This presentation documents Kennedy Space Center's independent assessment work, completed across three assessments for the Ground Systems Development and Operations (GSDO) Program, which assisted the Chief Safety and Mission Assurance Officer during key programmatic reviews and provided the GSDO Program with analyses of how egress time affects the likelihood of astronaut and ground-worker survival during an emergency. For each assessment, a team developed probability distributions for hazard scenarios to address statistical uncertainty, resulting in survivability-versus-time plots. The first assessment developed a mathematical model of probabilistic survivability versus time to reach a safe location using an ideal Emergency Egress System at Launch Complex 39B (LC-39B); the second used the first model to evaluate and compare various egress systems under consideration at LC-39B. The third used a modified LC-39B model to determine whether a specific hazard decreased survivability more rapidly than other events during flight hardware processing in Kennedy's Vehicle Assembly Building.

  15. Evaluating Intervention Programs with a Pretest-Posttest Design: A Structural Equation Modeling Approach

    PubMed Central

    Alessandri, Guido; Zuffianò, Antonio; Perinelli, Enrico

    2017-01-01

    A common situation in the evaluation of intervention programs is that the researcher can rely on only two waves of data (i.e., pretest and posttest), which profoundly affects the choice of statistical analyses that can be conducted. Indeed, the evaluation of intervention programs based on a pretest-posttest design has usually been carried out using classic statistical tests, such as family-wise ANOVA analyses, which are strongly limited by analyzing the intervention effects exclusively at the group level. In this article, we show how second-order multiple group latent curve modeling (SO-MG-LCM) can represent a useful methodological tool for a more realistic and informative assessment of intervention programs with two waves of data. We offer a practical step-by-step guide to properly implement this methodology, and we outline the advantages of the LCM approach over classic ANOVA analyses. Furthermore, we provide a real-data example by re-analyzing the implementation of the Young Prosocial Animation, a universal intervention program aimed at promoting prosociality among youth. Although previous studies pointed to the usefulness of MG-LCM for evaluating intervention programs (Muthén and Curran, 1997; Curran and Muthén, 1999), no previous study showed that this approach can be used even in pretest-posttest designs (i.e., with only two time points). Given the advantages of latent variable analyses in examining differences in interindividual and intraindividual changes (McArdle, 2009), the methodological and substantive implications of our proposed approach are discussed. PMID:28303110
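
    The SO-MG-LCM itself is typically fit in SEM software (e.g., Mplus or lavaan); as a hedged stand-in, the sketch below fits a linear mixed model to simulated two-wave data with statsmodels, which likewise models individual-level change rather than only group means. All variable names and effect sizes are invented.

        # Illustrative sketch only: a linear mixed model on simulated
        # pretest-posttest data. This is NOT the SO-MG-LCM of the article;
        # it merely shows one way to model individual-level change with two
        # waves instead of a group-level ANOVA.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 200
        subj = np.repeat(np.arange(n), 2)
        time = np.tile([0, 1], n)                    # 0 = pretest, 1 = posttest
        group = np.repeat(rng.integers(0, 2, n), 2)  # 0 = control, 1 = intervention
        intercepts = rng.normal(0, 1, n)             # interindividual differences
        y = intercepts[subj] + 0.4 * time * group + rng.normal(0, 0.5, 2 * n)

        df = pd.DataFrame({"y": y, "subj": subj, "time": time, "group": group})
        model = smf.mixedlm("y ~ time * group", df, groups=df["subj"]).fit()
        print(model.summary())  # 'time:group' estimates the intervention effect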

  17. Empirically Derived Personality Subtyping for Predicting Clinical Symptoms and Treatment Response in Bulimia Nervosa

    PubMed Central

    Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.

    2016-01-01

    Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology–Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = .04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with the potential to guide eating disorder treatment decisions. PMID:27611235
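
    A minimal sketch of the clustering step described above, assuming a patients-by-scales matrix of DAPP-BQ dimensional scores; the data here are simulated and the number of scales is a placeholder.

        # Minimal sketch of K-means subtyping on simulated DAPP-BQ scores.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        scores = rng.normal(size=(80, 18))          # hypothetical 18 DAPP-BQ scales

        X = StandardScaler().fit_transform(scores)  # z-score before K-means
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        print(np.bincount(km.labels_))              # subtype sizes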

  18. Oncology Nurses' Attitudes Toward the Edmonton Symptom Assessment System: Results From a Large Cancer Care Ontario Study.

    PubMed

    Green, Esther; Yuen, Dora; Chasen, Martin; Amernic, Heidi; Shabestari, Omid; Brundage, Michael; Krzyzanowska, Monika K; Klinger, Christopher; Ismail, Zahra; Pereira, José

    2017-01-01

    To examine oncology nurses' attitudes toward and reported use of the Edmonton Symptom Assessment System (ESAS), and to determine whether length of work experience and oncology certification are associated with those attitudes and reported usage. This exploratory, mixed-methods study employed a questionnaire approach at 14 regional cancer centers (RCCs) in Ontario, Canada. Participants were oncology nurses who took part in a larger province-wide study that surveyed 960 interdisciplinary providers in oncology care settings at all of Ontario's 14 RCCs. Attitudes toward and use of ESAS were measured with a 21-item investigator-developed questionnaire; descriptive statistics and Kendall's tau-b or tau-c tests were used for data analyses, and qualitative responses were analyzed using content analysis. The main variables were attitudes toward, and self-reported use of, standardized symptom screening and ESAS. More than half of the participants agreed that ESAS improves symptom screening, most said they would encourage their patients to complete ESAS, and most felt that managing symptoms is within their scope of practice and clinical responsibilities; qualitative comments elucidated the quantitative responses. Statistical analyses revealed that oncology nurses with 10 years or less of work experience were more likely to agree that the use of standardized, valid instruments to screen for and assess symptoms should be considered best practice, that ESAS improves symptom screening, and that ESAS enables them to better manage patients' symptoms. No statistically significant difference was found between oncology-certified and noncertified RNs in attitudes or reported use of ESAS. Implementing a population-based symptom screening approach is a major undertaking; the current study found that oncology nurses recognize the value of standardized screening, as demonstrated by their attitudes toward ESAS. Oncology nurses are integral to providing high-quality person-centered care, and using standardized approaches that enable patients to self-report symptoms, together with understanding barriers and enablers to optimal use of patient-reported outcome tools, can improve the quality of patient care.
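
    For readers unfamiliar with the test named above, a small sketch of a Kendall rank correlation on hypothetical ordinal data follows; scipy's kendalltau exposes the tau-b and tau-c variants, and the coding of experience and agreement below is invented.

        # Sketch of an ordinal association test on hypothetical Likert data.
        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(3)
        experience = rng.integers(1, 4, 120)   # 1 = <=10y, 2 = 11-20y, 3 = >20y
        agreement = np.clip(4 - experience + rng.integers(-1, 2, 120), 1, 5)

        tau_c, p = kendalltau(experience, agreement, variant="c")
        print(f"tau-c = {tau_c:.2f}, p = {p:.4f}")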

  19. A Study of Stranding of Juvenile Salmon by Ship Wakes Along the Lower Columbia River Using a Before-and-After Design: Before-Phase Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, Walter H.; Skalski, J R.; Sobocinski, Kathryn L.

    2006-02-01

    Ship wakes produced by deep-draft vessels transiting the lower Columbia River have been observed to cause stranding of juvenile salmon. Proposed deepening of the Columbia River navigation channel has raised concerns about the potential impact of the deepening project on juvenile salmon stranding. The Portland District of the U.S. Army Corps of Engineers requested that the Pacific Northwest National Laboratory design and conduct a study to assess stranding impacts that may be associated with channel deepening. The basic study design was a multivariate analysis of covariance of field observations and measurements under a statistical design for a before-and-after impact comparison. We have summarized field activities and statistical analyses for the 'before' component of the study here. Stranding occurred at all three sampling sites and during all three sampling seasons (Summer 2004, Winter 2005, and Spring 2005), for a total of 46 stranding events during 126 observed vessel passages. The highest occurrence of stranding was at Barlow Point, WA, where 53% of the observed events resulted in stranding; other sites included Sauvie Island, OR (37%) and County Line Park, WA (15%). To develop an appropriate impact assessment model that accounted for relevant covariates, regression analyses were conducted to determine the relationships between stranding probability and other factors. Nineteen independent variables were considered as potential factors affecting the incidence of juvenile salmon stranding, including tidal stage, tidal height, river flow, current velocity, ship type, ship direction, ship condition (loaded/unloaded), ship speed, ship size, and a proxy variable for ship kinetic energy. In addition to the ambient and ship characteristics listed above, site, season, and fish density were also considered. Although no single factor appears to be the primary driver of stranding, statistical analyses of the covariates resulted in the following equations: (1) Stranding Probability ~ Location + Kinetic Energy Proxy + Tidal Height + Salmonid Density + Kinetic Energy Proxy × Tidal Height + Tidal Height × Salmonid Density; (2) Stranding Probability ~ Location + Total Wave Distance + Salmonid Density Index; (3) log(Total Wave Height) ~ Ship Block + Tidal Height + Location + Ship Speed; (4) log(Total Wave Excursion Across the Beach) ~ Location + Kinetic Energy Proxy + Tidal Height. These equations form the basis for a conceptual model of the factors leading to salmon stranding and for an approach to assessing impacts of dredging under the before/after study design.
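
    A hedged sketch of fitting a stranding-probability model in the spirit of equation (1) follows; the data frame is simulated with invented covariate names and values, and the actual study used a multivariate analysis of covariance within the before/after design rather than this bare logistic regression.

        # Hypothetical sketch: logistic model of stranding probability.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 126
        df = pd.DataFrame({
            "location": rng.choice(["barlow", "sauvie", "countyline"], n),
            "kinetic_energy": rng.uniform(0, 1, n),    # invented proxy values
            "tidal_height": rng.uniform(-1, 3, n),
            "salmonid_density": rng.uniform(0, 5, n),
        })
        logit_p = -2 + 2 * df.kinetic_energy - 0.5 * df.tidal_height
        df["stranded"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        # 'a * b' in the formula expands to main effects plus interaction.
        model = smf.logit(
            "stranded ~ C(location) + kinetic_energy * tidal_height + salmonid_density",
            data=df,
        ).fit()
        print(model.summary())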

  20. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  1. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    PubMed

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; the affected genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no-observed-adverse-effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.
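
    The two qualitative analyses named above can be sketched as follows on a simulated compounds-by-genes expression matrix; the matrix dimensions and cluster count are placeholders, not the study's data.

        # Illustrative sketch of hierarchical clustering and PCA on a
        # simulated expression matrix (compounds x genes).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        expr = rng.normal(size=(12, 500))   # 12 compounds, 500 gene log-ratios

        Z = linkage(expr, method="ward")    # hierarchical clustering of compounds
        groups = fcluster(Z, t=3, criterion="maxclust")
        print("cluster assignment:", groups)

        pcs = PCA(n_components=2).fit_transform(expr)
        print("first two principal components:\n", pcs.round(2))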

  2. Wavelet Transform Based Higher Order Statistical Analysis of Wind and Wave Time Histories

    NASA Astrophysics Data System (ADS)

    Habib Huseni, Gulamhusenwala; Balaji, Ramakrishnan

    2017-10-01

    Wind blowing over the surface of the ocean imparts the energy to generate waves, and understanding wind-wave interactions is essential for the oceanographer. This study involves higher-order spectral analyses, through continuous wavelet transform, of wind speed and significant wave height time histories extracted from the European Centre for Medium-Range Weather Forecasts database at an offshore location off the Mumbai coast. The time histories were divided by season (pre-monsoon, monsoon, post-monsoon and winter) and the analyses were carried out on the individual data sets to assess the effect of the seasons on the wind-wave interactions. The analysis revealed the frequency coupling of wind speeds and wave heights across the seasons. The details of the data, the analysis technique and the results are presented in this paper.
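
    A brief sketch of the continuous wavelet transform step, assuming PyWavelets; the six-hourly wind series below is simulated, and the Morlet wavelet and scale range are illustrative choices rather than the study's settings.

        # Sketch of a CWT of a simulated wind-speed series with PyWavelets.
        import numpy as np
        import pywt

        rng = np.random.default_rng(5)
        t = np.arange(0, 365, 0.25)        # days, 6-hourly samples
        wind = 5 + 2 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.5, t.size)

        scales = np.arange(1, 128)
        coefs, freqs = pywt.cwt(wind, scales, "morl", sampling_period=0.25)
        print(coefs.shape)  # (n_scales, n_samples): basis for higher-order spectra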

  3. Vibro-acoustic analysis of composite plates

    NASA Astrophysics Data System (ADS)

    Sarigül, A. S.; Karagözlü, E.

    2014-03-01

    Vibro-acoustic analysis plays a vital role in the design of aircraft, spacecraft, land vehicles and ships produced from thin plates backed by closed cavities, with regard to human health and living comfort. For this type of structure, a coupled solution that takes structural-acoustic interaction into account is required for accurate results. In this study, coupled vibro-acoustic analyses of plates produced from composite materials have been performed using finite element analysis software. The study has been carried out for E-glass/Epoxy, Kevlar/Epoxy and Carbon/Epoxy plates with different ply angles and numbers of plies. The effects of composite material, ply orientation and number of layers on the coupled vibro-acoustic characteristics of the plates have been analysed for various combinations, and the analysis results have been statistically examined and assessed.

  4. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    PubMed

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that were beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
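
    The core SPC idea, deriving control limits from a baseline period and flagging out-of-control points, can be sketched as below; the stimulation-index values are simulated, and a production analysis would add moving-range limits and run rules.

        # Minimal individuals-chart sketch of the SPC approach described above.
        import numpy as np

        rng = np.random.default_rng(6)
        si = rng.normal(2.0, 0.4, 200)   # hypothetical stimulation indices
        si[150:] += 0.8                  # injected systematic shift

        center = si[:100].mean()         # limits from a baseline period
        sigma = si[:100].std(ddof=1)
        ucl, lcl = center + 3 * sigma, center - 3 * sigma

        out = np.where((si > ucl) | (si < lcl))[0]
        print(f"control limits: [{lcl:.2f}, {ucl:.2f}]; out-of-control points: {out}")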

  5. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring airborne particle concentrations (according to different metrics) in real time. Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time-resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models; to date there is no consensus on a particular approach, and the search for an appropriate and robust method is ongoing. In this context, this exploratory study investigates a statistical method to analyse time-resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, we used particle number concentration data from a workplace study that investigated the potential for exposure via inhalation from cleanout operations by sandpapering of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through the near-field and far-field approaches, and several size-integrated and time-resolved devices were used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters: one measured at the source of the released particles while the other measured in parallel in the far field. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions derived from time-resolved data obtained at the source can be compared with those derived from the time-resolved data obtained in the far field, leading to a quantitative estimate of the airborne particles released at the source when the task is performed. Beyond the results obtained, this exploratory study indicates that the analysis of such data requires specific experience in statistics.
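
    One hedged way to realize the comparison described above is a conjugate Poisson-Gamma model for the two counters, with the source contribution estimated as the posterior difference of rates; the counts and prior parameters below are invented, and the study's actual Bayesian model may differ.

        # Hedged sketch: Bayesian comparison of near-field vs far-field counts.
        import numpy as np

        rng = np.random.default_rng(7)
        near = rng.poisson(1200, 60)   # counts/s at the source (hypothetical)
        far = rng.poisson(900, 60)     # counts/s far-field (hypothetical)

        a0, b0 = 1.0, 1e-3             # weakly informative Gamma(a0, b0) prior
        # Conjugate posterior of a Poisson rate: Gamma(a0 + sum(x), b0 + n).
        post_near = rng.gamma(a0 + near.sum(), 1.0 / (b0 + near.size), 10_000)
        post_far = rng.gamma(a0 + far.sum(), 1.0 / (b0 + far.size), 10_000)

        released = post_near - post_far  # posterior of the task-attributable rate
        lo, hi = np.percentile(released, [2.5, 97.5])
        print(f"released rate: 95% credible interval [{lo:.1f}, {hi:.1f}] particles/s")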

  6. Wildfire cluster detection using space-time scan statistics

    NASA Astrophysics Data System (ADS)

    Tonini, M.; Tuia, D.; Ratle, F.; Kanevski, M.

    2009-04-01

    The aim of the present study is to identify spatio-temporal clusters of fire sequences using space-time scan statistics, statistical methods specifically designed to detect clusters and assess their significance. Basically, scan statistics work by comparing the set of events occurring inside a scanning window (or a space-time cylinder for spatio-temporal data) with those that lie outside. Windows of increasing size scan the zone across space and time: the likelihood ratio is calculated for each window (comparing the ratio of observed cases over expected inside and outside), the window with the maximum value is assumed to be the most probable cluster, and so on. Under the null hypothesis of spatial and temporal randomness, these events are distributed according to a known discrete-state random process (Poisson or Bernoulli), whose parameters can be estimated. Given this assumption, it is possible to test whether or not the null hypothesis holds in a specific area. To deal with the fire data, the space-time permutation scan statistic was applied, since it does not require the explicit specification of the population at risk in each cylinder. The case study is daily fire detections in Florida from the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product during the period 2003-2006. As a result, statistically significant clusters were identified. Performing the analyses over the entire period, three of the five most likely clusters were identified in the forest areas in the north of the state; the other two clusters cover a large zone in the south, corresponding to agricultural land and the prairies in the Everglades. Furthermore, the analyses were performed separately for the four years to examine whether the wildfires recur each year during the same period. It emerges that clusters of forest fires are more frequent in the hot seasons (spring and summer), while in the southern areas they are present throughout the year. The analysis of fire distribution, to evaluate whether fires are statistically more frequent in certain areas and/or periods of the year, can be useful to support fire management and to focus prevention measures.
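
    The scanning step reduces to a likelihood-ratio score per candidate cylinder; the toy sketch below computes Kulldorff's Poisson log-likelihood ratio for a few invented (observed, expected) windows, omitting the permutation testing that real scan-statistic software performs.

        # Toy sketch of the scan-statistic scoring step (Kulldorff's Poisson LLR).
        import numpy as np

        def poisson_llr(n, mu, total):
            """Log-likelihood ratio for a window with n observed, mu expected cases."""
            if n <= mu:
                return 0.0
            outside = total - n
            return n * np.log(n / mu) + outside * np.log(outside / (total - mu))

        total_cases = 500
        windows = [(60, 40.0), (25, 24.0), (90, 55.0)]  # invented (observed, expected)
        scores = [poisson_llr(n, mu, total_cases) for n, mu in windows]
        best = int(np.argmax(scores))
        print(f"most likely cluster: window {best}, LLR = {scores[best]:.2f}")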

  7. Statistics of fully turbulent impinging jets

    NASA Astrophysics Data System (ADS)

    Wilke, Robert; Sesterhenn, Jörn

    2017-08-01

    Direct numerical simulations of sub- and supersonic impinging jets with Reynolds numbers of 3300 and 8000 are carried out to analyse their statistical properties. The influence of the parameters Mach number, Reynolds number and ambient temperature on the mean velocity and temperature fields is studied. For the compressible subsonic cold impinging jets into a heated environment, different Reynolds analogies are assessed. It is shown that the original Reynolds analogy as well as the Chilton-Colburn analogy is in good agreement with the DNS data outside the impinging area. The generalised Reynolds analogy (GRA) and the Crocco-Busemann relation are not suited for the estimation of the mean temperature field based on the mean velocity field of impinging jets; furthermore, the prediction of fluctuating temperatures according to the GRA fails. On the contrary, the linear relation between thermodynamic fluctuations of entropy, density and temperature suggested by Lechner et al. (2001) can be confirmed for the entire wall jet. The turbulent heat flux and Reynolds stress tensor are analysed and brought into coherence with the primary and secondary ring vortices of the wall jet. Budget terms of the Reynolds stress tensor are given as a database for the improvement of turbulence models.

  8. The structure of Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision) personality disorder symptoms in a large national sample.

    PubMed

    Trull, Timothy J; Vergés, Alvaro; Wood, Phillip K; Jahng, Seungmin; Sher, Kenneth J

    2012-10-01

    We examined the latent structure underlying the criteria for DSM-IV-TR (American Psychiatric Association, 2000, Diagnostic and statistical manual of mental disorders (4th ed., text revision). Washington, DC: Author.) personality disorders in a large nationally representative sample of U.S. adults. Personality disorder symptom data were collected using a structured diagnostic interview from approximately 35,000 adults assessed over two waves of data collection in the National Epidemiologic Survey on Alcohol and Related Conditions. Our analyses suggested that a seven-factor solution provided the best fit for the data, and these factors were marked primarily by one or at most two personality disorder criteria sets. A series of regression analyses that used external validators tapping Axis I psychopathology, treatment for mental health problems, functioning scores, interpersonal conflict, and suicidal ideation and behavior provided support for the seven-factor solution. We discuss these findings in the context of previous studies that have examined the structure underlying the personality disorder criteria as well as the current proposals for DSM-5 personality disorders. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
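
    As a loose illustration of the dimension-reduction step, the sketch below runs an exploratory factor analysis on a simulated respondents-by-criteria binary matrix; this Gaussian approximation is not the categorical-data method appropriate to the published analysis.

        # Rough sketch: exploratory factor analysis on simulated binary criteria.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(8)
        criteria = rng.integers(0, 2, size=(1000, 80)).astype(float)  # hypothetical

        fa = FactorAnalysis(n_components=7, random_state=0).fit(criteria)
        print(fa.components_.shape)  # (7 factors, 80 criteria): loading matrix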

  9. Community Design Impacts on Health Habits in Low-income Southern Nevadans.

    PubMed

    Coughenour, Courtney; Burns, Mackenzie S

    2016-07-01

    The purposes of this exploratory study were to: (1) characterize selected community design features; and (2) determine the relationship between select features and physical activity (PA) levels and nutrition habits for a small sample of low-income southern Nevadans. Secondary analysis was conducted on data from selected participants of the Nevada Healthy Homes Partnership program; self-reported data on PA and diet habits were compared to national guidelines. Community design features were identified via GIS within a one-mile radius of participants' homes. Descriptive statistics characterized these features, and chi-square analyses were conducted to determine the relationship between select features and habits. Data from 71 participants were analyzed; the majority failed to meet either PA or fruit and vegetable guidelines (81.7% and 93.0%, respectively). Many neighborhoods lacked parks (71.8%), trailheads (36.6%), or pay-for-use PA facilities (47.9%). The mean number of grocery stores was 3.4 ± 2.3 per neighborhood. Chi-square analyses were not statistically significant. Findings were insufficient to support meaningful conclusions but underscore the need for health promotion to help residents meet guidelines. More research is needed to assess the impact of health-promoting community design on healthy behaviors, particularly in vulnerable populations.
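
    The chi-square step can be sketched on an invented 2x2 table of park access versus meeting PA guidelines:

        # Minimal sketch of the chi-square test on an invented 2x2 table.
        from scipy.stats import chi2_contingency

        table = [[8, 12],   # park within 1 mile: met / did not meet PA guideline
                 [5, 46]]   # no park: met / did not meet
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.3f}")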

  10. Seeking a fingerprint: analysis of point processes in actigraphy recording

    NASA Astrophysics Data System (ADS)

    Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek

    2016-05-01

    Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.
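
    Of the measures listed above, detrended fluctuation analysis is easily sketched; the implementation below is textbook first-order DFA on a simulated activity series, not the study's exact pipeline.

        # Compact first-order DFA sketch on a simulated activity-count series.
        import numpy as np

        def dfa(x, windows):
            y = np.cumsum(x - np.mean(x))          # integrated profile
            F = []
            for w in windows:
                n_seg = len(y) // w
                segs = y[: n_seg * w].reshape(n_seg, w)
                t = np.arange(w)
                rms = []
                for seg in segs:
                    a, b = np.polyfit(t, seg, 1)   # linear detrend per window
                    rms.append(np.mean((seg - (a * t + b)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        rng = np.random.default_rng(9)
        activity = rng.normal(size=10_000)
        windows = np.unique(np.logspace(1, 3, 15).astype(int))
        F = dfa(activity, windows)
        alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
        print(f"DFA scaling exponent alpha = {alpha:.2f}")  # ~0.5 for white noise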

  11. Patterns of exchange of forensic DNA data in the European Union through the Prüm system.

    PubMed

    Santos, Filipe; Machado, Helena

    2017-07-01

    This paper presents a study of the 5-year operation (2011-2015) of the transnational exchange of forensic DNA data between Member States of the European Union (EU) for the purpose of combating cross-border crime and terrorism within the so-called Prüm system. This first systematisation of the full official statistical dataset provides an overall assessment of the match figures and patterns of operation of the Prüm system for DNA exchange. These figures and patterns are analysed in terms of the differentiated contributions by participating EU Member States. The data suggest a trend for West and Central European countries to concentrate the majority of Prüm matches, while DNA databases of Eastern European countries tend to contribute with profiles of people that match stains in other countries. In view of the necessary transparency and accountability of the Prüm system, more extensive and informative statistics would be an important contribution to the assessment of its functioning and societal benefits. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. [Population models of mental health in the Russian population: assessment of an impact of living conditions and psychiatric care resources].

    PubMed

    Mitikhin, V G; Yastrebov, V S; Mitikhina, I A

    OBJECTIVE: To develop and apply population models of mental health in the Russian population in order to analyze the relationship between indicators of mental disorders and psychiatric care resources, taking into account medical/demographic and socio-economic factors over the period 1992-2015. The sources of information were: 1) Russian medical statistics on the main indicators of mental health of the Russian population and on psychiatric care resources; 2) government statistics on the demographic and socio-economic situation of the population of Russia during this period. The study used systems analysis of the data together with correlation and regression analyses. Linear and nonlinear models with a high level of significance were obtained to assess the impact of socio-economic, health and demographic factors (population, life expectancy, migration, mortality) and service resources (primarily manpower) on the dynamics of the main indicators (prevalence, incidence) of mental health of the population. The decline in the recorded prevalence and incidence of mental disorders in the Russian population in recent years is a consequence of the scarcity of mental health services, in particular personnel resources.

  13. Efficacy of isokinetic exercise on functional capacity and pain in patellofemoral pain syndrome.

    PubMed

    Alaca, Ridvan; Yilmaz, Bilge; Goktepe, A Salim; Mohur, Haydar; Kalyon, Tunc Alp

    2002-11-01

    To assess the effect of an isokinetic exercise program on symptoms and functions of patients with patellofemoral pain syndrome, a total of 22 consecutive patients with the complaint of anterior knee pain who met the inclusion criteria were recruited, and the efficacy of isokinetic exercise on functional capacity, isokinetic parameters, and pain scores was evaluated. A total of 37 knees were examined. Six-meter hopping, three-step hopping, and single-limb hopping course tests were performed for each patient, along with measurements on the Lysholm scale and a visual analog scale. Tested isokinetic parameters were peak torque, total work, average power, and endurance ratios. Statistical analyses revealed that at the end of the 6-wk treatment period, functional and isokinetic parameters improved significantly, as did pain scores. There was no statistically significant correlation between the different groups of parameters. The isokinetic exercise treatment program used in this study prevented the extensor power loss due to patellofemoral pain syndrome, but the improvement in functional capacity was not correlated with the gained power.

  14. Relationship between self-compassion and emotional intelligence in nursing students.

    PubMed

    Şenyuva, Emine; Kaya, Hülya; Işik, Burçin; Bodur, Gönül

    2014-12-01

    Nursing focuses on meeting the physical, social and emotional health-care needs of individuals, families and society. In health care, nurses communicate directly with patients and try to empathize with them, giving care under emotionally intense conditions in which the individual undergoes pain and distress. This research aimed to analyse the correlation between self-compassion and emotional intelligence in nursing students. The population of the research consisted of all the undergraduate students (571 students) of the 2010-2011 fall semester of the department of nursing. An information form, the Self-compassion Scale and the Emotional Intelligence Assessment Scale were used to obtain the data, and the Statistical Package for the Social Sciences 16.0 for Windows was used for statistical analysis. Results indicated that there is a correlation between self-compassion and emotional intelligence, and that emotional intelligence, which involves perceiving one's emotions and using that knowledge to direct thoughts, actions and professional practice, contributes positively to the characteristics of nurses with developed self-compassion. © 2013 Wiley Publishing Asia Pty Ltd.

  15. The influence of test mode and visuospatial ability on mathematics assessment performance

    NASA Astrophysics Data System (ADS)

    Logan, Tracy

    2015-12-01

    Mathematics assessment and testing are increasingly situated within digital environments, with international tests moving to computer-based testing in the near future. This paper reports on a secondary data analysis which explored the influence that the mode of assessment—computer-based (CBT) or pencil-and-paper based (PPT)—and visuospatial ability had on students' mathematics test performance. Data from 804 grade 6 Singaporean students were analysed using the knowledge discovery in data design. The results revealed statistically significant differences between performance on the CBT and PPT test modes for the content areas of whole number, algebraic patterns, and data and chance; however, there were no performance differences for the content areas of spatial arrangements, geometric measurement, or other number. There were also statistically significant differences in performance between students with higher levels of visuospatial ability and those with lower levels across all six content areas. Implications include careful consideration of the comparability of CBT and PPT testing and the need for increased attention to the role of visuospatial reasoning in students' mathematics reasoning.

  16. Oral Health-Related Quality of Life in the Elderly in Israel--Results from the National Health and Nutrition Survey of the Elderly 2005-2006.

    PubMed

    Zusman, Shlomo Paul; Kushnir, Daniel; Natapov, Lena; Goldsmith, Rebecca; Dichtiar, Rita

    2016-01-01

    To assess the oral health-related quality of life of the Israeli elderly. Data were collected from a subsample of those interviewed for the cross-sectional Mabat Zahav National Health and Nutrition Survey of the Elderly, carried out in 2005 and 2006 by the Ministry of Health in Israel. In-person interviews were conducted in the interviewees' homes using a structured questionnaire which included 7 questions on subjective dental health status and the 14 questions of the Oral Health Impact Profile 14 (OHIP-14). Statistical significance of continuous variables was assessed with the Student t-test; categorical variables with normal distribution were analysed using the chi-square test, and those with non-normal distribution with the Wilcoxon Mann-Whitney two-sample test. 828 Jews and 159 Arabs from the total survey population of 1852 elderly (1536 Jews and 316 Arabs) completed the OHIP-14 questionnaire. An impact of oral health on quality of life was reported by 16.6% of the respondents: 19.2% of females and 13.9% of males (p<0.05). There were statistically significant differences in impact prevalence by gender, place of birth and economic status; no such differences were found by age group, population group or education. A statistically significant correlation was found between subjective assessment of general and dental health and OHIP impact prevalence, with poorer assessment correlated with increased prevalence of impact. The quality of life of 17% of Israeli elderly is affected by oral health. The OHIP-14 findings emphasise the importance of including basic dental treatment (treatment of dental pain and infections) in the range of services covered by the National Health Insurance Law.

  17. Mindfulness for palliative care patients. Systematic review.

    PubMed

    Latorraca, Carolina de Oliveira Cruz; Martimbianco, Ana Luiza Cabrera; Pachito, Daniela Vianna; Pacheco, Rafael Leite; Riera, Rachel

    2017-12-01

    Nineteen million adults worldwide are in need of palliative care; of those who have access to it, 80% fail to receive efficient management of symptoms. To assess the effectiveness and safety of mindfulness meditation for palliative care patients, we searched CENTRAL, MEDLINE, Embase, LILACS, PEDro, CINAHL, PsycINFO, Opengrey, ClinicalTrials.gov and WHO-ICTRP, with no restriction of language, status or date of publication. We considered randomised clinical trials (RCTs) comparing any mindfulness meditation scheme vs any comparator for palliative care. The Cochrane Risk of Bias (RoB) table was used for assessing the methodological quality of RCTs. Screening, data extraction and methodological assessments were performed by two reviewers. Mean differences (MD) with 95% confidence intervals (95% CI) were used for estimating effect size, and the quality of evidence was appraised by GRADE. Four RCTs with 234 participants were included; all studies presented a high risk of bias in at least one RoB criterion. We assessed four comparisons, but only two studies showed statistically significant differences for at least one outcome. 1. Mindfulness meditation (eight weeks, one session/week, daily individual practice) vs control: statistically significant difference in favour of control for quality of life - physical aspects. 2. Mindfulness meditation (single 5-minute session) vs control: benefit in favour of mindfulness for the stress outcome at both time points. None of the included studies analysed safety and harms outcomes. Although two studies showed statistically significant differences, only one showed effectiveness of mindfulness meditation in improving perceived stress; this study focused on a single 5-minute mindfulness session for adult cancer patients in palliative care, and it was considered to be at high risk of bias. Other schemes of mindfulness meditation did not show benefit in any outcome evaluated (low and very low quality evidence). © 2017 John Wiley & Sons Ltd.

  18. Prospective multi-centre Voxel Based Morphometry study employing scanner specific segmentations: Procedure development using CaliBrain structural MRI data

    PubMed Central

    2009-01-01

    Background Structural Magnetic Resonance Imaging (sMRI) of the brain is employed in the assessment of a wide range of neuropsychiatric disorders. In order to improve statistical power in such studies it is desirable to pool scanning resources from multiple centres. The CaliBrain project was designed to provide an assessment of scanner differences at three centres in Scotland, and to assess the practicality of pooling scans from multiple centres. Methods We scanned healthy subjects twice on each of the 3 scanners in the CaliBrain project with T1-weighted sequences. The tissue classifier supplied within the Statistical Parametric Mapping (SPM5) application was used to map the grey and white tissue for each scan, allowing us to assess within-scanner variability and between-scanner differences. We have sought to correct for between-scanner differences by adjusting the probability mappings of tissue occupancy (tissue priors) used in SPM5 for tissue classification. The adjustment procedure resulted in separate sets of tissue priors being developed for each scanner, which we refer to as scanner specific priors. Results Voxel Based Morphometry (VBM) analyses and metric tests indicated that the use of scanner specific priors reduced tissue classification differences between scanners. However, the metric results also demonstrated that the between-scanner differences were not reduced to the level of within-scanner variability, the ideal for scanner harmonisation. Conclusion Our results indicate that the development of scanner specific priors for SPM can assist the pooling of scan resources from different research centres and thereby facilitate improvements in the statistical power of quantitative brain imaging studies. PMID:19445668

  19. Social cognition in patients with schizophrenia, their unaffected first degree relatives and healthy controls. Comparison between groups and analysis of associated clinical and sociodemographic variables.

    PubMed

    Rodríguez Sosa, Juana Teresa; Gil Santiago, Hiurma; Trujillo Cubas, Angel; Winter Navarro, Marta; León Pérez, Petra; Guerra Cazorla, Luz Marina; Martín Jiménez, José María

    2013-01-01

    To evaluate and compare social cognition in patients with schizophrenia, healthy first-degree relatives and controls, studying the relationship between social cognition and non-social cognition, psychopathology, and other clinical and sociodemographic variables. The total sample comprised patients diagnosed with paranoid schizophrenia (N = 29), healthy first-degree relatives (N = 21) and controls (N = 28). All groups were assessed with an ad hoc questionnaire and a Social Cognition Scale validated in a Spanish population, which assessed the domains of emotional processing, social perception and attributional style. The patient group was also assessed with the Positive and Negative Syndrome Scale and the Mini-Mental State Examination. Statistical analyses were performed with SPSS version 15.0. Patients scored significantly worse than controls in all domains of social cognition assessed, and worse than relatives in mastery attributional style. The type of psychopathology correlated negatively and statistically significantly with different domains of social cognition: negative symptoms with emotional processing and attributional style, and positive symptoms with social perception. Basic cognition scores correlated positively and statistically significantly with the domains of social perception and attributional style. Social cognition has become an interesting object of study, especially in how it relates to non-social cognition, psychopathology and the global functioning of patients, bringing new elements to be considered in the early detection, comprehensive treatment and psychosocial rehabilitation of patients. Its conceptualization as a trait variable and the existence of a continuum between patients and relatives are plausible hypotheses that require further research. Copyright © 2012 SEP y SEPB. Published by Elsevier Espana. All rights reserved.

  20. A Technology-Based Statistical Reasoning Assessment Tool in Descriptive Statistics for Secondary School Students

    ERIC Educational Resources Information Center

    Chan, Shiau Wei; Ismail, Zaleha

    2014-01-01

    The focus of assessment in statistics has gradually shifted from traditional assessment towards alternative assessment where more attention has been paid to the core statistical concepts such as center, variability, and distribution. In spite of this, there are comparatively few assessments that combine the significant three types of statistical…

  1. Publication bias in obesity treatment trials?

    PubMed

    Allison, D B; Faith, M S; Gorman, B S

    1996-10-01

    The present investigation examined the extent of publication bias (namely, the tendency to publish significant findings and file away non-significant findings) within the obesity treatment literature, through a quantitative literature synthesis of four published meta-analyses from that literature. Interventions in these studies included pharmacological, educational, child, and couples treatments. To assess publication bias, several regression procedures (for example, weighted least-squares, random-effects multi-level modeling, and robust regression methods) were used to regress effect sizes onto their standard errors, or proxies thereof, within each of the four meta-analyses; a significant positive beta weight in these analyses signified publication bias. There was evidence of publication bias within two of the four published meta-analyses, such that reviews of published studies were likely to overestimate clinical efficacy. The lack of evidence of publication bias within the two other meta-analyses might have been due to insufficient statistical power rather than the absence of selection bias. As in other disciplines, publication bias appears to exist in the obesity treatment literature. Suggestions are offered for managing publication bias once identified, or for reducing its likelihood in the first place.
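
    An Egger-type version of the regression check described above can be sketched as follows; the simulated effect sizes are unbiased by construction, so the slope on the standard error should be near zero.

        # Sketch of an Egger-type WLS regression of effects on standard errors.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(10)
        k = 40
        se = rng.uniform(0.05, 0.5, k)
        effect = 0.3 + rng.normal(0, se)        # unbiased simulated literature

        df = pd.DataFrame({"effect": effect, "se": se})
        fit = smf.wls("effect ~ se", data=df, weights=1.0 / df["se"] ** 2).fit()
        print(fit.params, fit.pvalues["se"])    # significant positive slope -> bias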

  2. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  3. Medical students' attitudes towards science and gross anatomy, and the relationship to personality

    PubMed Central

    Plaisant, Odile; Stephens, Shiby; Apaydin, Nihal; Courtois, Robert; Lignier, Baptiste; Loukas, Marios; Moxham, Bernard

    2014-01-01

    Assessment of the personalities of medical students can enable medical educators to formulate strategies for the best development of academic and clinical competencies. Previous research has shown that medical students do not share a common personality profile, there being gender differences. We have also shown that, for French medical students, students with personality traits associated with strong competitiveness are selected for admission to medical school. In this study, we further show that the medical students have different personality profiles compared with other student groups (psychology and business studies). The main purpose of the present investigation was to assess attitudes to science and gross anatomy, and to relate these to the students' personalities. Questionnaires (including Thurstone and Chave analyses) were employed to measure attitudes, and personality was assessed using the Big Five Inventory (BFI). Data for attitudes were obtained for students at medical schools in Cardiff (UK), Paris, Descartes/Sorbonne (France), St George's University (Grenada) and Ankara (Turkey). Data obtained from personality tests were available for analysis from the Parisian cohort of students. Although the medical students were found to have strongly supportive views concerning the importance of science in medicine, their knowledge of the scientific method/philosophy of science was poor. Following analyses of the BFI in the French students, ‘openness’ and ‘conscientiousness’ were linked statistically with a positive attitude towards science. For anatomy, again strongly supportive views concerning the subject's importance in medicine were discerned. Analyses of the BFI in the French students did not show links statistically between personality profiles and attitudes towards gross anatomy, except male students with ‘negative affectivity’ showed less appreciation of the importance of anatomy. This contrasts with our earlier studies that showed that there is a relationship between the BF dimensions of personality traits and anxiety towards the dissection room experience (at the start of the course, ‘negative emotionality’ was related to an increased level of anxiety). We conclude that medical students agree on the importance to their studies of both science in general and gross anatomy in particular, and that some personality traits relate to their attitudes that could affect clinical competence. PMID:23594196

  5. Assessment of trace elements levels in patients with Type 2 diabetes using multivariate statistical analysis.

    PubMed

    Badran, M; Morsy, R; Soliman, H; Elnimr, T

    2016-01-01

    Trace element metabolism has been reported to play specific roles in the pathogenesis and progress of diabetes mellitus. Given the continuous increase in the population of patients with Type 2 diabetes (T2D), this study aims to assess the levels and inter-relationships of fasting blood glucose (FBG) and serum trace elements in Type 2 diabetic patients. The study was conducted on 40 Egyptian Type 2 diabetic patients and 36 healthy volunteers (Hospital of Tanta University, Tanta, Egypt). The blood serum was digested and then used to determine the levels of 24 trace elements using inductively coupled plasma mass spectrometry (ICP-MS). Multivariate statistical analyses based on correlation coefficients, cluster analysis (CA) and principal component analysis (PCA) were used to analyse the data. The results exhibited significant changes in FBG and in the levels of eight trace elements (Zn, Cu, Se, Fe, Mn, Cr, Mg and As) in the blood serum of Type 2 diabetic patients relative to healthy controls. The multivariate statistical techniques reduced the experimental variables and grouped the trace elements in patients into three clusters. The application of PCA revealed a distinct difference in the associations of trace elements and their clustering patterns between the control and patient groups, in particular for Mg, Fe, Cu and Zn, which appeared to be the most crucial factors related to Type 2 diabetes. Therefore, on the basis of this study, the contributors to trace element content in Type 2 diabetic patients can be determined and specified through correlation relationships and multivariate statistical analysis, confirming that the alteration of some essential trace metals may play a role in the development of diabetes mellitus. Copyright © 2015 Elsevier GmbH. All rights reserved.

  6. Analysis of Precipitation (Rain and Snow) Levels and Straight-line Wind Speeds in Support of the 10-year Natural Phenomena Hazards Review for Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Elizabeth J.; Dewart, Jean Marie; Deola, Regina

    This report provides site-specific return level analyses for rain, snow, and straight-line wind extreme events. These analyses are in support of the 10-year review plan for the assessment of meteorological natural phenomena hazards at Los Alamos National Laboratory (LANL). They follow guidance from the Department of Energy standard Natural Phenomena Hazards Analysis and Design Criteria for DOE Facilities (DOE-STD-1020-2012), the Nuclear Regulatory Commission Standard Review Plan (NUREG-0800, 2007), and ANSI/ANS-2.3-2011, Estimating Tornado, Hurricane, and Extreme Straight-Line Wind Characteristics at Nuclear Facility Sites. LANL precipitation and snow level data have been collected since 1910, although not all years are complete. In this report the results from the more recent data (1990-2014) are compared to those of past analyses and a 2004 National Oceanic and Atmospheric Administration report. Given the many differences in the data sets used in these different analyses, the lack of statistically significant differences in return level estimates increases confidence in the data and in the modeling and analysis approach.
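
    A return-level analysis of the kind reported here is often based on a generalized extreme value (GEV) fit to annual maxima; the sketch below uses scipy with simulated maxima, and the distributional choice and parameters are illustrative, not LANL's method or data.

        # Sketch of GEV return-level estimation from simulated annual maxima.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(11)
        annual_max = rng.gumbel(loc=2.0, scale=0.6, size=25)  # inches, hypothetical

        c, loc, scale = genextreme.fit(annual_max)
        for T in (10, 50, 100):
            level = genextreme.isf(1.0 / T, c, loc, scale)    # T-year return level
            print(f"{T:3d}-yr return level: {level:.2f} in")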

  7. Los Alamos Climatology 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruggeman, David Alan

    The Los Alamos National Laboratory (LANL or the Laboratory) operates a meteorology monitoring network to support LANL emergency response, engineering designs, environmental compliance, environmental assessments, safety evaluations, weather forecasting, environmental monitoring, research programs, and environmental restoration. Weather data has been collected in Los Alamos since 1910. Bowen (1990) provided climate statistics (temperature and precipitation) for the 1961– 1990 averaging period, and included other analyses (e.g., wind and relative humidity) based on the available station locations and time periods. This report provides an update to the 1990 publication Los Alamos Climatology (Bowen 1990).

  8. Impact of the Community-Wide Adolescent Health Project on Sexually Transmitted Infection Testing in Omaha, Nebraska.

    PubMed

    Tibbits, Melissa; Maloney, Shannon; Ndashe, Tambudzai Phiri; Grimm, Brandon; Johansson, Patrik; Siahpush, Mohammad

    2018-06-01

    We describe the impact of the Adolescent Health Project on sexually transmitted infection (STI) testing in Omaha, NE, during phase 1 (media campaigns) and phase 2 (free STI testing). To assess the impact of each phase on STI testing, we examined monthly data from January 2013 to April 2017 via interrupted time series analyses. There was an immediate and statistically significant increase in testing during phase 2. Expanding and advertising free STI testing is a promising approach to increasing testing.
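
    An interrupted time series analysis of the kind described is often fitted as a segmented regression with level- and slope-change terms. A hedged statsmodels sketch with invented monthly counts and an assumed phase 2 start index (a full analysis would also model autocorrelation, e.g., with Newey-West standard errors):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 52  # monthly observations, Jan 2013 - Apr 2017
        df = pd.DataFrame({"t": np.arange(n)})
        df["phase2"] = (df["t"] >= 36).astype(int)        # assumed start of free testing
        df["t_since2"] = np.maximum(0, df["t"] - 36)      # post-intervention slope term
        df["tests"] = 200 + 0.5*df["t"] + 60*df["phase2"] + rng.normal(0, 10, n)

        # phase2 captures the immediate level change; t_since2 any change in trend.
        fit = smf.ols("tests ~ t + phase2 + t_since2", data=df).fit()
        print(fit.summary().tables[1])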

  9. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. The analyses currently implemented include models commonly used in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code is open source (http://socr.googlecode.com/), with the aim of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model as well as general utilities that can be applied in various statistical computing tasks; for example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.

  10. A comparative statistical study of long-term agroclimatic conditions affecting the growth of US winter wheat: Distributions of regional monthly average precipitation on the Great Plains and the state of Maryland and the effect of agroclimatic conditions on yield in the state of Kansas

    NASA Technical Reports Server (NTRS)

    Welker, J.

    1981-01-01

    A histogram analysis of average monthly precipitation over 30- and 84-year periods for both Maryland and Kansas was made and the results compared. A second analysis, a statistical assessment of the effect of average monthly precipitation on Kansas winter wheat yield, was also made. The data sets covered the three periods 1941-1970, 1887-1970, and 1887-1921. Analyses of the limited data sets used (only average monthly precipitation and temperature were correlated against yield) indicated that fall precipitation values, especially those of September and October, were more important to winter wheat yield than were spring values, particularly for the period 1941-1970.
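
    The correlation step described here (monthly precipitation against yield) reduces to simple bivariate statistics; a minimal SciPy illustration with made-up September precipitation and yield series:

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(3)
        sept_precip = rng.gamma(shape=2.0, scale=1.2, size=30)       # inches, 30 seasons
        yield_bu = 30 + 2.5*sept_precip + rng.normal(0, 3, size=30)  # bushels/acre

        r, p = pearsonr(sept_precip, yield_bu)
        print(f"r = {r:.2f}, p = {p:.3g}")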

  11. Multicategory reclassification statistics for assessing improvements in diagnostic accuracy

    PubMed Central

    Li, Jialiang; Jiang, Binyan; Fine, Jason P.

    2013-01-01

    In this paper, we extend the definitions of the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) in the context of multicategory classification. Both measures were proposed in Pencina and others (2008. Evaluating the added predictive ability of a new marker: from area under the receiver operating characteristic (ROC) curve to reclassification and beyond. Statistics in Medicine 27, 157-172) as numeric characterizations of accuracy improvement for binary diagnostic tests and were shown to have certain advantages over analyses based on ROC curves or other regression approaches. Estimation and inference procedures for the multiclass NRI and IDI are provided in this paper along with the necessary asymptotic distributional results. Simulations are conducted to study the finite-sample properties of the proposed estimators. Two medical examples are considered to illustrate our methodology. PMID:23197381
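
    The multicategory extensions are the paper's contribution, but the binary NRI of Pencina and others (2008) that they generalise is compact enough to state in code. A NumPy sketch with simulated risks (illustrative only; this is not the authors' multiclass estimator):

        import numpy as np

        rng = np.random.default_rng(4)
        y = rng.integers(0, 2, 500)                      # observed events
        p_old = np.clip(0.3*y + rng.normal(0.3, 0.15, 500), 0.01, 0.99)
        p_new = np.clip(p_old + rng.normal(0.03, 0.05, 500), 0.01, 0.99)

        up, down = p_new > p_old, p_new < p_old
        event, nonevent = y == 1, y == 0

        # Binary NRI: net upward movement for events plus net downward for nonevents.
        nri = (up[event].mean() - down[event].mean()) \
            + (down[nonevent].mean() - up[nonevent].mean())
        print(f"continuous NRI = {nri:.3f}")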

  12. A psychometric evaluation of the Rorschach comprehensive system's perceptual thinking index.

    PubMed

    Dao, Tam K; Prevatt, Frances

    2006-04-01

    In this study, we investigated evidence for reliability and validity of the Perceptual Thinking Index (PTI; Exner, 2000a, 2000b) among an adult inpatient population. We conducted reliability and validity analyses on 107 patients who met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; American Psychiatric Association, 2000) criteria for a schizophrenia-spectrum disorder (SSD) or mood disorder with no psychotic features (MD). Results provided support for interrater reliability as well as internal consistency of the PTI. Furthermore, the PTI was an effective index in differentiating SSD patients from patients diagnosed with an MD. Finally, the PTI demonstrated adequate diagnostic statistics that can be useful in the classification of patients diagnosed with SSD and MD. We discuss methodological issues, implications for assessment practice, and directions for future research.
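
    Interrater reliability of categorical codings such as these is typically quantified with Cohen's kappa; a minimal scikit-learn illustration with two hypothetical raters (the labels are invented):

        from sklearn.metrics import cohen_kappa_score

        rater1 = ["SSD", "MD", "SSD", "SSD", "MD", "SSD", "MD", "MD"]
        rater2 = ["SSD", "MD", "SSD", "MD", "MD", "SSD", "MD", "MD"]
        print("Cohen's kappa:", round(cohen_kappa_score(rater1, rater2), 2))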

  13. The effectiveness and cost-effectiveness of intraoperative imaging in high-grade glioma resection; a comparative review of intraoperative ALA, fluorescein, ultrasound and MRI.

    PubMed

    Eljamel, M Sam; Mahboob, Syed Osama

    2016-12-01

    Surgical resection of high-grade gliomas (HGG) is standard therapy because it imparts significant progression-free survival (PFS) and overall survival (OS) benefits. However, HGG-tumor margins are indistinguishable from normal brain during surgery. Hence intraoperative technologies such as fluorescence (ALA, fluorescein), intraoperative ultrasound (IoUS) and intraoperative MRI (IoMRI) have been deployed. This study compares the effectiveness and cost-effectiveness of these technologies. Critical literature review and meta-analyses were performed using the MEDLINE/PubMed service. The list of references in each article was double-checked for any missing references. We included all studies that reported the use of ALA, fluorescein (FLCN), IoUS or IoMRI to guide HGG-surgery. The meta-analyses were conducted according to statistical heterogeneity between studies: if there was no heterogeneity, a fixed effects model was used; otherwise, a random effects model was used. Statistical heterogeneity was explored by χ² and inconsistency (I²) statistics. To assess cost-effectiveness, we calculated the incremental cost per quality-adjusted life-year (QALY). Gross total resection (GTR) after ALA, FLCN, IoUS and IoMRI was 69.1%, 84.4%, 73.4% and 70% respectively. The differences were not statistically significant. All four techniques led to significant prolongation of PFS and tended to prolong OS. However, none of these technologies led to significant prolongation of OS compared to controls. The cost/QALY was $16,218, $3181, $6049 and $32,954 for ALA, FLCN, IoUS and IoMRI respectively. ALA, FLCN, IoUS and IoMRI significantly improve GTR and PFS of HGG. Their incremental cost was below the threshold for cost-effectiveness of HGG-therapy, denoting that each intraoperative technology was cost-effective on its own. Copyright © 2016 Elsevier B.V. All rights reserved.
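
    The heterogeneity diagnostics named here (Cochran's Q, reported as χ², and the I² inconsistency statistic) can be computed directly from per-study effect estimates. A minimal fixed-effect NumPy sketch over invented study data:

        import numpy as np

        # Invented per-study log risk ratios and standard errors.
        effects = np.array([0.21, 0.35, 0.10, 0.28, 0.40])
        se = np.array([0.10, 0.15, 0.12, 0.09, 0.20])

        w = 1 / se**2                                # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled estimate
        Q = np.sum(w * (effects - pooled)**2)        # Cochran's Q (chi-squared, k-1 df)
        k = len(effects)
        I2 = max(0.0, (Q - (k - 1)) / Q) * 100       # variability beyond chance, percent
        print(f"pooled = {pooled:.3f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")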

  14. Biometric Analysis – A Reliable Indicator for Diagnosing Taurodontism using Panoramic Radiographs

    PubMed Central

    Hegde, Veda; Anegundi, Rajesh Trayambhak; Pravinchandra, K.R.

    2013-01-01

    Background: Taurodontism is a clinical entity with a morpho-anatomical change in the shape of the tooth, which was once thought to be absent in modern man. Taurodontism is mostly observed as an isolated trait or as a component of a syndrome. Various techniques have been devised to diagnose taurodontism. Aim: The aim of this study was to analyze whether a biometric analysis was useful in diagnosing taurodontism in radiographs which appeared to be normal on cursory observation. Setting and Design: This study was carried out in our institution by using radiographs which were taken for routine procedures. Material and Methods: In this retrospective study, panoramic radiographs were obtained from the dental records of children aged between 9 and 14 years who did not have any abnormality on cursory observation. Biometric analyses were carried out on permanent mandibular first molar(s) by using a novel biometric method. The values were tabulated and analysed. Statistics: The Fisher exact probability test, the Chi-square test and the Chi-square test with Yates correction were used for statistical analysis of the data. Results: Cursory observation did not yield any case of taurodontism. In contrast, the biometric analysis yielded a statistically significant number of cases of taurodontism. However, there was no statistically significant difference in the number of cases with taurodontism between the genders or across the age group considered. Conclusion: Thus, taurodontism was diagnosed on biometric analysis that was otherwise missed on cursory observation. From the clinical point of view it is therefore necessary to diagnose even the mildest form of taurodontism by using metric analysis rather than just relying on a visual radiographic assessment, as its occurrence has many clinical implications and diagnostic importance. PMID:24086912

  15. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random-effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4% split', where the greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
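
    The generalised-Q construction that the authors invert can be sketched numerically: the generalised heterogeneity statistic Q(τ²) decreases monotonically in τ², so each confidence limit solves Q(τ²) = a chi-squared quantile, and an unequal split simply assigns different tail probabilities to the two limits. A rough SciPy illustration with invented data (the paper's own procedure may differ in detail):

        import numpy as np
        from scipy.stats import chi2
        from scipy.optimize import brentq

        y = np.array([0.30, 0.11, 0.45, 0.21, 0.60, 0.38])  # study effect estimates
        v = np.array([0.04, 0.03, 0.05, 0.02, 0.06, 0.04])  # within-study variances

        def gen_q(tau2):
            w = 1 / (v + tau2)
            mu = np.sum(w * y) / np.sum(w)
            return np.sum(w * (y - mu)**2)

        k = len(y)
        # '1-4% split': 1% in the lower chi-squared tail, 4% in the upper (95% overall).
        # Q(tau2) decreases in tau2, so the upper quantile gives the lower tau2 limit.
        q99, q04 = chi2.ppf(0.99, k - 1), chi2.ppf(0.04, k - 1)
        lower = brentq(lambda t: gen_q(t) - q99, 0.0, 10.0) if gen_q(0.0) > q99 else 0.0
        upper = brentq(lambda t: gen_q(t) - q04, 0.0, 100.0)
        print(f"95% CI for tau^2: ({lower:.4f}, {upper:.4f})")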

  16. Return to work in people with acquired brain injury: association with observed ability to use everyday technology.

    PubMed

    Larsson-Lund, Maria; Kottorp, Anders; Malinowsky, Camilla

    2017-07-01

    The aim of this study was to explore how the observed ability to use everyday technology (ET), together with intrapersonal capacities and environmental characteristics related to ET use, contributes to the likelihood of return to work in people with ABI. A further aim was to explore whether these variables added to the likelihood of return to work beyond variables previously identified as significant in this group: age, perceived ADL ability and perceived ability in ET use. A cross-sectional study. The Management of Everyday Technology Assessment (META), the short version of the Everyday Technology Use Questionnaire (S-ETUQ) and a revised version of the ADL taxonomy were used to evaluate 74 people with ABI. Individual ability measures from all assessments were generated by Rasch analyses and used for additional statistical analysis. The univariate analyses showed that the observed ability to use ET, as well as intrapersonal capacities and environmental characteristics related to ET use, were all significantly associated with returning to work. In the multivariate analyses, none of these associations remained. The explanatory precision of return to work in people with ABI increased only minimally by adding the observed ability to use ET and the variables related to ET use once age, perceived ability in ET use and ADL had been taken into account.

  17. What Types of Pornography Do People Find Arousing and Do They Cluster? Assessing Types and Categories of Pornography in a Large-Scale Online Sample.

    PubMed

    Hald, Gert Martin; Štulhofer, Aleksandar

    2016-09-01

    Previous research on exposure to different types of pornography has primarily relied on analyses of millions of search terms and histories or on user exposure patterns within a given time period rather than the self-reported frequency of consumption. Further, previous research has almost exclusively relied on theoretical or ad hoc overarching categorizations of different types of pornography, when investigating patterns of pornography exposure, rather than latent structure analyses of these exposure patterns. In contrast, using a large sample of 18- to 40-year-old heterosexual and nonheterosexual Croatian men and women, this study investigated the self-reported frequency of using 27 different types of pornography and statistically explored their latent structures. The results showed substantial differences in consumption patterns across gender and sexual orientation. However, latent structure analyses of the 27 different types of pornography assessed suggested that although several categories of consumption were gender and sexual orientation specific, common categories across the different types of pornography could be established. Based on this finding, a five-item scale was proposed to indicate the use of nonmainstream (paraphilic) pornographic content, as this type of pornography has often been targeted in previous research. To the best of our knowledge, no similar measurement tool has been proposed before.

  18. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
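
    The 'allowance of exponential decay' metric mentioned here down-weights exposures by the time elapsed since they occurred. A toy NumPy illustration with an assumed half-life (all values invented):

        import numpy as np

        # Illustrative annual exposure intensities (fiber-years/year) for one worker.
        exposure = np.array([2.0, 1.5, 0.0, 3.0, 0.5])
        years_since = np.arange(len(exposure) - 1, -1, -1)  # years elapsed since each year

        cumulative = exposure.sum()
        half_life = 10.0                                    # assumed decay half-life, years
        decayed = np.sum(exposure * 0.5 ** (years_since / half_life))
        print(f"cumulative = {cumulative:.2f}, decayed cumulative = {decayed:.2f}")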

  19. Arctic biodiversity: Increasing richness accompanies shrinking refugia for a cold-associated tundra fauna

    USGS Publications Warehouse

    Hope, Andrew G.; Waltari, Eric; Malaney, Jason L.; Payer, David C.; Cook, J.A.; Talbot, Sandra L.

    2015-01-01

    As ancestral biodiversity responded dynamically to late-Quaternary climate changes, so are extant organisms responding to the warming trajectory of the Anthropocene. Ecological predictive modeling, statistical hypothesis tests, and genetic signatures of demographic change can provide a powerful integrated toolset for investigating these biodiversity responses to climate change, and relative resiliency across different communities. Within the biotic province of Beringia, we analyzed specimen localities and DNA sequences from 28 mammal species associated with boreal forest and Arctic tundra biomes to assess both historical distributional and evolutionary responses and then forecasted future changes based on statistical assessments of past and present trajectories, and quantified distributional and demographic changes in relation to major management regions within the study area. We addressed three sets of hypotheses associated with aspects of methodological, biological, and socio-political importance by asking (1) what is the consistency among implications of predicted changes based on the results of both ecological and evolutionary analyses; (2) what are the ecological and evolutionary implications of climate change considering either total regional diversity or distinct communities associated with major biomes; and (3) are there differences in management implications across regions? Our results indicate increasing Arctic richness through time that highlights a potential state shift across the Arctic landscape. However, within distinct ecological communities, we found a predicted decline in the range and effective population size of tundra species into several discrete refugial areas. Consistency in results based on a combination of both ecological and evolutionary approaches demonstrates increased statistical confidence by applying cross-discipline comparative analyses to conservation of biodiversity, particularly considering variable management regimes that seek to balance sustainable ecosystems with other anthropogenic values. Refugial areas for cold-adapted taxa appear to be persistent across both warm and cold climate phases and although fragmented, constitute vital regions for persistence of Arctic mammals.

  20. Assessing the reproducibility of discriminant function analyses

    PubMed Central

    Andrew, Rose L.; Albert, Arianne Y.K.; Renaut, Sebastien; Rennison, Diana J.; Bock, Dan G.

    2015-01-01

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Of the 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons included incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, and the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still do not meet the standard of carefully curated research that the scientific community and the public expect. PMID:26290793
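
    Reproducing the three summary statistics the authors target is routine once a dataset is in hand. A scikit-learn sketch using the bundled iris data as a stand-in for an organismal-biology dataset (resubstitution accuracy, as most papers report):

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X, y = load_iris(return_X_y=True)
        lda = LinearDiscriminantAnalysis().fit(X, y)

        # 1. Percent variance explained by each discriminant function.
        print("explained variance:", lda.explained_variance_ratio_.round(3))

        # 2. Percentage correctly assigned.
        print("percent correctly assigned:", 100 * lda.score(X, y))

        # 3. Largest discriminant-function coefficient (absolute value).
        print("largest coefficient:", np.abs(lda.scalings_).max())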

  1. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or the limited capacity of the available instrumentation, leading to incomplete results where the detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including the standard t test, the moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
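
    The authors' modified rank product algorithm is distributed as an R script; the core idea, tolerating missing values by taking the geometric mean of ranks over only the observed replicates, can be sketched in NumPy (a simplification, not the authors' exact algorithm):

        import numpy as np

        rng = np.random.default_rng(5)
        data = rng.normal(size=(1000, 3))              # features x replicates
        data[rng.random(data.shape) < 0.5] = np.nan    # ~50% missing, the sparse case

        # Rank features within each replicate (rank 1 = strongest change), ignoring
        # NaNs, and normalise by the number observed so replicates are comparable.
        ranks = np.full(data.shape, np.nan)
        for j in range(data.shape[1]):
            obs = ~np.isnan(data[:, j])
            order = np.argsort(-data[obs, j])
            r = np.empty(order.size)
            r[order] = np.arange(1, order.size + 1)
            ranks[obs, j] = r / obs.sum()

        # Rank product per feature: geometric mean over the replicates observed.
        has_obs = ~np.isnan(ranks).all(axis=1)
        rp = np.full(data.shape[0], np.inf)
        rp[has_obs] = np.exp(np.nanmean(np.log(ranks[has_obs]), axis=1))
        print("top-ranked features:", np.argsort(rp)[:5])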

  2. An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-11-01

    In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operation of the statistics engines that requires explicit communication. We then evaluate this proposed scheme in a shared memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit, and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.

  3. Prevalence of herpes simplex, Epstein Barr and human papilloma viruses in oral lichen planus.

    PubMed

    Yildirim, Benay; Sengüven, Burcu; Demir, Cem

    2011-03-01

    The aim of the present study was to assess the prevalence of Herpes Simplex virus, Epstein Barr virus and Human Papilloma virus-16 in oral lichen planus cases and to evaluate whether any clinical variant, histopathological or demographic feature correlates with these viruses. The study was conducted on 65 cases. Viruses were detected immunohistochemically. We evaluated the histopathological and demographic features and statistically analysed the correlation of these features with Herpes Simplex virus, Epstein Barr virus and Human Papilloma virus-16 positivity. Herpes Simplex virus was positive in six (9%) cases, which was not statistically significant. The number of Epstein Barr virus positive cases was 23 (35%), which was statistically significant. Human Papilloma virus positivity in 14 cases (21%) was also statistically significant. Except for basal cell degeneration in Herpes Simplex virus positive cases, we did not observe any significant correlation between virus positivity and demographic or histopathological features. However, an increased risk of Epstein Barr virus and Human Papilloma virus infection was noted in oral lichen planus cases. Taking into account the oncogenic potential of both viruses, oral lichen planus cases should be examined for the presence of these viruses.

  4. A Statistical Skull Geometry Model for Children 0-3 Years Old

    PubMed Central

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P.; Rupp, Jonathan D.; Hoff, Carrie N.; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0–3 YO population. In this study, head CT scans from fifty-six 0–3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models. PMID:25992998

  5. A statistical skull geometry model for children 0-3 years old.

    PubMed

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.
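
    At its core, a statistical shape model of this kind is PCA over registered landmark coordinates, with the component scores then related to age and head circumference. A toy scikit-learn sketch on synthetic landmarks (names and dimensions illustrative, not the authors' model):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(11)
        n_subjects, n_landmarks = 56, 100
        # Synthetic stand-in: flattened (x, y, z) skull landmark coordinates per subject.
        shapes = rng.normal(size=(n_subjects, n_landmarks * 3))

        pca = PCA(n_components=5).fit(shapes)
        print("variance explained by shape modes:", pca.explained_variance_ratio_.round(3))

        # A new skull geometry can be synthesised from the mean shape plus weighted modes.
        weights = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
        new_shape = pca.mean_ + (weights * np.sqrt(pca.explained_variance_)) @ pca.components_
        print("synthesised shape vector (first 3 coords):", new_shape[:3].round(2))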

  6. Comparative assessment of blood lead levels of automobile technicians in organised and roadside garages in Lagos, Nigeria.

    PubMed

    Saliu, Abdulsalam; Adebayo, Onajole; Kofoworola, Odeyemi; Babatunde, Ogunowo; Ismail, Abdussalam

    2015-01-01

    Occupational exposure to lead is common among automobile technicians and constitutes 0.9% of the total global health burden, with a majority of cases in developing countries. The aim of this study was to determine and compare the blood lead levels of automobile technicians in roadside and organised garages in Lagos State, Nigeria. This was a comparative cross-sectional study. Data were collected using interviewer-administered questionnaires. Physical examinations were conducted and blood was analysed for lead using atomic spectrophotometry. Statistical analyses were performed to compare the median blood lead levels of each group using the independent sample (Mann-Whitney U) test. Seventy-three (40.3%) of the organised group compared to 59 (34.3%) of the roadside group had high blood lead levels. The organised group had a statistically significantly higher median blood lead level (66.0 µg/dL) than the roadside group (43.5 µg/dL) (P < 0.05). There was also a statistically significant association between high blood lead levels and abnormal discolouration of the mucosa of the mouth in the organised group. Automobile technicians in organised garages in Lagos have a higher prevalence of elevated blood lead levels and higher median levels than the roadside group. Preventive strategies against lead exposure should be instituted by employers and further actions should be taken to minimize exposures, improve work practices, implement engineering controls (e.g., proper ventilation), and ensure the use of personal protective equipment.

  7. Comparative Assessment of Blood Lead Levels of Automobile Technicians in Organised and Roadside Garages in Lagos, Nigeria

    PubMed Central

    Saliu, Abdulsalam; Adebayo, Onajole; Kofoworola, Odeyemi; Babatunde, Ogunowo; Ismail, Abdussalam

    2015-01-01

    Occupational exposure to lead is common among automobile technicians and constitutes 0.9% of the total global health burden, with a majority of cases in developing countries. The aim of this study was to determine and compare the blood lead levels of automobile technicians in roadside and organised garages in Lagos State, Nigeria. This was a comparative cross-sectional study. Data were collected using interviewer-administered questionnaires. Physical examinations were conducted and blood was analysed for lead using atomic spectrophotometry. Statistical analyses were performed to compare the median blood lead levels of each group using the independent sample (Mann-Whitney U) test. Seventy-three (40.3%) of the organised group compared to 59 (34.3%) of the roadside group had high blood lead levels. The organised group had a statistically significantly higher median blood lead level (66.0 µg/dL) than the roadside group (43.5 µg/dL) (P < 0.05). There was also a statistically significant association between high blood lead levels and abnormal discolouration of the mucosa of the mouth in the organised group. Automobile technicians in organised garages in Lagos have a higher prevalence of elevated blood lead levels and higher median levels than the roadside group. Preventive strategies against lead exposure should be instituted by employers and further actions should be taken to minimize exposures, improve work practices, implement engineering controls (e.g., proper ventilation), and ensure the use of personal protective equipment. PMID:25759723
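
    The group comparison reported in both records is a standard two-sample Mann-Whitney U test; a minimal SciPy sketch with simulated blood lead values (group sizes inferred approximately from the reported percentages; distributions invented):

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(6)
        organised = rng.lognormal(mean=np.log(66.0), sigma=0.4, size=181)  # µg/dL
        roadside = rng.lognormal(mean=np.log(43.5), sigma=0.4, size=172)

        u, p = mannwhitneyu(organised, roadside, alternative="two-sided")
        print(f"U = {u:.0f}, p = {p:.3g}")
        print(f"medians: organised {np.median(organised):.1f}, "
              f"roadside {np.median(roadside):.1f}")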

  8. A multi-wave study of organizational justice at work and long-term sickness absence among employees with depressive symptoms.

    PubMed

    Hjarsbech, Pernille U; Christensen, Karl Bang; Bjorner, Jakob B; Madsen, Ida E H; Thorsen, Sannie V; Carneiro, Isabella G; Christensen, Ulla; Rugulies, Reiner

    2014-03-01

    Mental health problems are strong predictors of long-term sickness absence (LTSA). In this study, we investigated whether organizational justice at work - fairness in resolving conflicts and distributing work - protects against LTSA among employees with depressive symptoms. In a longitudinal study with five waves of data collection, we examined a cohort of 1034 employees with depressive symptoms. Depressive symptoms and organizational justice were assessed by self-administered questionnaires and information on LTSA was derived from a national register. Using Poisson regression analyses, we calculated rate ratios (RR) for the prospective association of organizational justice, and of change in organizational justice, with time to onset of LTSA. All analyses were sex stratified. Among men, intermediate levels of organizational justice were statistically significantly associated with a decreased risk of subsequent LTSA after adjustment for covariates [RR 0.49, 95% confidence interval (95% CI) 0.26-0.91]. There was also a decreased risk for men with high levels of organizational justice, although these estimates did not reach statistical significance after adjustment (RR 0.47, 95% CI 0.20-1.10). We found no such results for women. In both sexes, neither favorable nor adverse changes in organizational justice were statistically significantly associated with the risk of LTSA. This study shows that organizational justice may have a protective effect on the risk of LTSA among men with depressive symptoms. A protective effect of favorable changes in organizational justice was not found.
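
    Rate ratios of this kind come from Poisson regression with a person-time offset; a hedged statsmodels sketch on simulated data (variable names, rates, and follow-up times invented):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 1034
        df = pd.DataFrame({
            "justice": rng.choice(["low", "intermediate", "high"], n),
            "persontime": rng.uniform(0.5, 5.0, n),   # years at risk
        })
        rate = np.where(df["justice"] == "low", 0.10, 0.05)
        df["ltsa"] = rng.poisson(rate * df["persontime"])

        # Log person-time offset; exponentiated coefficients are rate ratios.
        fit = smf.glm("ltsa ~ C(justice, Treatment(reference='low'))", data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df["persontime"])).fit()
        print(np.exp(fit.params).round(2))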

  9. Is cultural activity at work related to mental health in employees?

    PubMed

    Theorell, Töres; Osika, Walter; Leineweber, Constanze; Magnusson Hanson, Linda L; Bojner Horwitz, Eva; Westerlund, Hugo

    2013-04-01

    To examine relationships between work-based cultural activities and employee mental health in working Swedes. A positive relationship between frequent cultural activity at work and good employee health was expected. A random sample of working Swedish men and women was surveyed in three waves, 2006, 2008 and 2010, with an average participation rate of 60%. A postal questionnaire asked about cultural activities organised for employees and about emotional exhaustion (Maslach) and depressive symptoms (short form of the SCL). Employee assessments of a "non-listening manager" and of the work environment ("psychological demands" and "decision latitude"), as well as socioeconomic variables, were covariates. Cross-sectional analyses for each study year as well as prospective analyses for 2006-2008 and 2008-2010 were performed. Cultural activities at work were less frequent during the period of high unemployment. The associations with emotional exhaustion were more significant than those with depressive symptoms. The associations were attenuated when adjustments were made for manager function (does your manager listen?) and demand/control. Associations were more pronounced during the period with low unemployment and high cultural activity at work (2008). In a prospective analysis, cultural activity at work in 2008 had an independent statistically significant "protective" effect on emotional exhaustion in 2010. No corresponding association was found between 2006 and 2008. Cultural activities at work vary with the business cycle and have a statistical association with employee mental health, particularly with emotional exhaustion. Frequent cultural activity at work shows particularly pronounced statistical protective effects on the likelihood of emotional exhaustion among employees.

  10. Comparison of safety, efficacy and tolerability of dexibuprofen and ibuprofen in the treatment of osteoarthritis of the hip or knee.

    PubMed

    Zamani, Omid; Böttcher, Elke; Rieger, Jörg D; Mitterhuber, Johann; Hawel, Reinhold; Stallinger, Sylvia; Eller, Norbert

    2014-06-01

    In this observer-blinded, multicenter, non-inferiority study, 489 patients suffering from painful osteoarthritis of the hip or knee were included to investigate the safety and tolerability of Dexibuprofen vs. Ibuprofen powder for oral suspension. Only patients who had everyday joint pain for the past 3 months and "moderate" to "severe" global pain intensity in the involved hip/knee within the last 48 h were enrolled. The treatment period was up to 14 days with a control visit after 3 days. The test product was Dexibuprofen 400 mg powder for oral suspension (daily dose 800 mg) compared to Ibuprofen 400 mg powder for oral suspension (daily dose 1,600 mg). Gastrointestinal adverse drug reactions were reported in 8 patients (3.3%) in the Dexibuprofen group and in 19 patients (7.8%) in the Ibuprofen group. Statistically significant non-inferiority was shown for Dexibuprofen. Comparing both groups by a Chi-square test showed a statistically significant lower proportion of related gastrointestinal events in the Dexibuprofen group. All analyses of secondary tolerability parameters showed the same result of a significantly better safety profile in this therapy setting for Dexibuprofen compared to Ibuprofen. The sum of pain intensity, pain relief and global assessments showed no significant difference between treatment groups. In summary, the analyses revealed at least non-inferiority in terms of efficacy and a statistically significantly better safety profile for the Dexibuprofen treatment.
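
    The comparison of gastrointestinal event proportions reduces to a 2x2 chi-squared test; a SciPy sketch with counts reconstructed approximately from the reported numbers and percentages (the group denominators are inferred, not stated in the abstract):

        from scipy.stats import chi2_contingency

        # Rows: Dexibuprofen, Ibuprofen; columns: GI reaction, no GI reaction.
        table = [[8, 234], [19, 225]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.3f}")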

  11. Spatial variation in the bacterial and denitrifying bacterial community in a biofilter treating subsurface agricultural drainage.

    PubMed

    Andrus, J Malia; Porter, Matthew D; Rodríguez, Luis F; Kuehlhorn, Timothy; Cooke, Richard A C; Zhang, Yuanhui; Kent, Angela D; Zilles, Julie L

    2014-02-01

    Denitrifying biofilters can remove agricultural nitrates from subsurface drainage, reducing the nitrate pollution that contributes to coastal hypoxic zones. The performance and reliability of natural and engineered systems dependent upon microbially mediated processes, such as denitrifying biofilters, can be affected by the spatial structure of their microbial communities. Furthermore, our understanding of the relationship between microbial community composition and function is influenced by the spatial distribution of samples. In this study we characterized the spatial structure of bacterial communities in a denitrifying biofilter in central Illinois. Bacterial communities were assessed using automated ribosomal intergenic spacer analysis for bacteria and terminal restriction fragment length polymorphism of nosZ for denitrifying bacteria. Non-metric multidimensional scaling and analysis of similarity (ANOSIM) analyses indicated that bacteria showed statistically significant spatial structure by depth and transect, while denitrifying bacteria did not exhibit significant spatial structure. For the determination of spatial patterns, we developed a package of automated functions for the R statistical environment that allows directional analysis of microbial community composition data using either ANOSIM or Mantel statistics. Applying this package to the biofilter data, the flow path correlation range for the bacterial community was 6.4 m at the shallower, periodically inundated depth and 10.7 m at the deeper, continually submerged depth. These spatial structures suggest a strong influence of hydrology on the microbial community composition in these denitrifying biofilters. Understanding such spatial structure can also guide optimal sample collection strategies for microbial community analyses.
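
    A permutation Mantel test of the kind bundled in such R packages is compact to write; a NumPy-only sketch correlating a synthetic community-distance matrix with spatial distances (this is a generic illustration, not the authors' package):

        import numpy as np

        rng = np.random.default_rng(8)
        n = 30
        coords = rng.uniform(0, 10, size=(n, 2))
        space = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        community = space + rng.normal(0, 1.5, size=(n, n))
        community = (community + community.T) / 2     # symmetrise
        np.fill_diagonal(community, 0)

        iu = np.triu_indices(n, k=1)                  # upper-triangle entries only

        def mantel(a, b, n_perm=999):
            r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
            count = 0
            for _ in range(n_perm):
                p = rng.permutation(n)                # permute rows and columns of a
                count += np.corrcoef(a[p][:, p][iu], b[iu])[0, 1] >= r_obs
            return r_obs, (count + 1) / (n_perm + 1)

        r, p = mantel(space, community)
        print(f"Mantel r = {r:.3f}, p = {p:.3f}")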

  12. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures - the Implicit Association Test, affect misattribution procedure, and evaluative priming task - and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60%) inflated relative to the more appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
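
    The paper's appendix gives R, SAS, and SPSS syntax. In Python, statsmodels can approximate a fully crossed participant-by-stimulus model by treating the data as a single group and declaring both factors as variance components; a sketch on simulated response latencies (an assumed pattern, not the authors' code):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(9)
        subs, stims = 50, 20
        df = pd.DataFrame([(s, w, c) for s in range(subs) for w in range(stims)
                           for c in (0, 1)], columns=["subj", "stim", "cond"])
        sub_re = rng.normal(0, 30, subs)[df["subj"]]    # participant random effects
        stim_re = rng.normal(0, 20, stims)[df["stim"]]  # stimulus random effects
        df["latency"] = 600 + 25*df["cond"] + sub_re + stim_re + rng.normal(0, 50, len(df))
        df["one"] = 1  # single group: both factors enter as crossed variance components

        fit = smf.mixedlm("latency ~ cond", data=df, groups="one", re_formula="0",
                          vc_formula={"subj": "0 + C(subj)",
                                      "stim": "0 + C(stim)"}).fit()
        print(fit.summary())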

  13. The allele combinations of three loci based on liver, stomach cancers, hematencephalon, COPD and normal population: A preliminary study.

    PubMed

    Gai, Liping; Liu, Hui; Cui, Jing-Hui; Yu, Weijian; Ding, Xiao-Dong

    2017-03-20

    The purpose of this study was to examine the specific allele combinations of three loci connected with liver cancers, stomach cancers, hematencephalon and chronic obstructive pulmonary disease (COPD), and to explore the feasibility of the research methods. We explored different mathematical methods for the statistical analyses to assess the association between genotype and phenotype, and we also analysed the statistical results of allele combinations of three loci by the difference value method and the ratio method. DNA blood samples were collected from 50 patients with liver cancers, 75 with stomach cancers, 50 with hematencephalon, 72 with COPD and 200 normal controls. All the samples were from Chinese subjects. Alleles from short tandem repeat (STR) loci were determined using the STR Profiler plus PCR amplification kit (15 STR loci). Previous research was based on combinations of single-locus alleles and combinations of cross-loci (two-locus) alleles. Allele combinations of three loci were obtained by computer counting and a stronger genetic signal was obtained. The method of allele combinations of three loci can help to identify statistically significant differences in allele combinations between liver cancers, stomach cancers, patients with hematencephalon, COPD and the normal population. The probability of illness followed different rules and had apparent specificity. This method can be extended to other diseases and provide a reference for early clinical diagnosis. Copyright © 2016. Published by Elsevier B.V.
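
    The three-locus enumeration itself is a counting exercise; a minimal itertools illustration with toy genotype records (loci and alleles invented; a real 15-locus profile yields C(15,3) = 455 triples per sample):

        from collections import Counter
        from itertools import combinations

        # Toy genotype records: locus -> allele pair (illustrative values).
        samples = [
            {"D3S1358": "15/16", "vWA": "17/18", "FGA": "22/24", "TH01": "6/9"},
            {"D3S1358": "15/16", "vWA": "17/18", "FGA": "21/24", "TH01": "6/9"},
            {"D3S1358": "14/16", "vWA": "17/18", "FGA": "22/24", "TH01": "6/9"},
        ]

        # Count every unordered three-locus allele combination across samples.
        counts = Counter()
        for s in samples:
            for trio in combinations(sorted(s), 3):
                counts[tuple((locus, s[locus]) for locus in trio)] += 1

        for combo, n in counts.most_common(3):
            print(n, combo)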

  14. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
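
    For reference, the usual between-groups d that this single-case statistic is designed to match is one line of arithmetic; a NumPy illustration of the pooled-SD version with made-up scores:

        import numpy as np

        treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.2])
        control = np.array([9.8, 10.4, 11.1, 9.2, 10.9])

        n1, n2 = len(treatment), len(control)
        # Pooled standard deviation across the two groups.
        sp = np.sqrt(((n1 - 1)*treatment.var(ddof=1) + (n2 - 1)*control.var(ddof=1))
                     / (n1 + n2 - 2))
        d = (treatment.mean() - control.mean()) / sp
        print(f"between-groups d = {d:.2f}")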

  15. Assessment of the educational environment at the College of Medicine of King Saud University, Riyadh.

    PubMed

    Al-Ayed, I H; Sheik, S A

    2008-01-01

    We used an Arabic translation (revised in our college) of the Dundee Ready Education Environment Measure (DREEM) inventory to assess the educational environment at the College of Medicine in King Saud University, Riyadh. Over 500 questionnaires were distributed and 222 were analysed. Scores were: 45.0% overall; 40.7% for students' perception of learning, 48.2% for perception of teachers, 46.3% for academic self-perception, 44.4% for perception of atmosphere, and 46.1% for social self-perception. Scores for first year students were significantly higher than the others. Scores for pre-clinical students were also significantly higher than those of students in clinical years. Sex was not a statistically significant variable.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracken, M.B.

    This joint EPRI/National Institutes of Health study is the largest epidemiological study ever undertaken to examine the relationship between exposure to electric and magnetic fields (EMF) during pregnancy and reproductive outcomes. Overall, the study concludes that EMF exposure during pregnancy is unrelated to pregnancy outcome. Specifically, the study reveals no association between electromagnetic field exposure from electrically heated beds and intrauterine growth retardation or spontaneous abortion. Among the many strengths of this study are clearly specified hypotheses; prospective design; randomized assignment to exposure monitoring; very large sample size; detailed assessment of potential confounding by known risk factors for adverse pregnancy outcomes; and comprehensive statistical analyses. The study also featured extensive exposure assessment, including measurements of EMF from a variety of sources, personal monitoring, and wire coding information.

  17. Identifying city PV roof resource based on Gabor filter

    NASA Astrophysics Data System (ADS)

    Ruhang, Xu; Zhilin, Liu; Yong, Huang; Xiaoyu, Zhang

    2017-06-01

    To identify a city's PV roof resources, the area and ownership distribution of residential buildings in an urban district should be assessed. Analysing remote sensing data is a promising approach to this assessment. Estimating urban building roof area is a major topic in remote sensing image information extraction. There are normally three ways to solve this problem. The first is pixel-based analysis, which is based on mathematical morphology or statistical methods; the second is object-based analysis, which is able to combine semantic information and expert knowledge; the third approaches the problem from a signal-processing perspective. This paper presents a Gabor-filter-based method. The results show that the method is fast and achieves adequate accuracy.
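
    Gabor filtering for texture extraction is available off the shelf; a minimal scikit-image sketch applying a small filter bank to a sample image standing in for an aerial tile (the frequency and threshold are illustrative, not the paper's parameters):

        import numpy as np
        from skimage import data, filters

        image = data.camera().astype(float)  # stand-in for a remote-sensing tile

        # Apply a small Gabor filter bank over several orientations.
        responses = []
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, imag = filters.gabor(image, frequency=0.15, theta=theta)
            responses.append(np.sqrt(real**2 + imag**2))

        # Pixels with a strong response in some orientation are candidate roof texture.
        energy = np.max(responses, axis=0)
        mask = energy > np.percentile(energy, 90)
        print("candidate roof-texture pixels:", int(mask.sum()))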

  18. Sedentary work and the risk of breast cancer in premenopausal and postmenopausal women: a pooled analysis of two case-control studies.

    PubMed

    Boyle, Terry; Fritschi, Lin; Kobayashi, Lindsay C; Heyworth, Jane S; Lee, Derrick G; Si, Si; Aronson, Kristan J; Spinelli, John J

    2016-11-01

    There is limited research on the association between sedentary behaviour and breast cancer risk, particularly whether sedentary behaviour is differentially associated with premenopausal and postmenopausal breast cancer. We pooled data from 2 case-control studies from Australia and Canada to investigate this association. This pooled analysis included 1762 incident breast cancer cases and 2532 controls. Participants in both studies completed a lifetime occupational history and self-rated occupational physical activity level. A job-exposure matrix (JEM) was also applied to job titles to assess sedentary work. Logistic regression analyses (6 pooled and 12 study-specific) were conducted to estimate associations between both self-reported and JEM-assessed sedentary work and breast cancer risk among premenopausal and postmenopausal women. No association was observed in the 6 pooled analyses, and 10 of the study-specific analyses also showed null results. 2 study-specific analyses provided inconsistent and contradictory results, with 1 showing a statistically significant increased risk of breast cancer for self-reported sedentary work among premenopausal women in the Canadian study, and the other a non-significant inverse association between JEM-assessed sedentary work and breast cancer risk among postmenopausal women in the Australian study. While a suggestion of increased risk was seen for premenopausal women in the Canadian study when using the self-reported measure, overall this pooled study does not provide evidence that sedentary work is associated with breast cancer risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  19. On the Use of Local Assessments for Monitoring Centrally Reviewed Endpoints with Missing Data in Clinical Trials*

    PubMed Central

    Brummel, Sean S.; Gillen, Daniel L.

    2014-01-01

    Due to ethical and logistical concerns it is common for data monitoring committees to periodically monitor accruing clinical trial data to assess the safety, and possibly efficacy, of a new experimental treatment. When formalized, monitoring is typically implemented using group sequential methods. In some cases regulatory agencies have required that primary trial analyses should be based solely on the judgment of an independent review committee (IRC). The IRC assessments can produce difficulties for trial monitoring given the time lag typically associated with receiving assessments from the IRC. This results in a missing data problem wherein a surrogate measure of response may provide useful information for interim decisions and future monitoring strategies. In this paper, we present statistical tools that are helpful for monitoring a group sequential clinical trial with missing IRC data. We illustrate the proposed methodology in the case of binary endpoints under various missingness mechanisms including missing completely at random assessments and when missingness depends on the IRC’s measurement. PMID:25540717

  20. When ab ≠ c - c': published errors in the reports of single-mediator models.

    PubMed

    Petrocelli, John V; Clarkson, Joshua J; Whitmire, Melanie B; Moon, Paul E

    2013-06-01

    Accurate reports of mediation analyses are critical to the assessment of inferences related to causality, since these inferences are consequential for both the evaluation of previous research (e.g., meta-analyses) and the progression of future research. However, upon reexamination, approximately 15% of published articles in psychology contain at least one incorrect statistical conclusion (Bakker & Wicherts, Behavior Research Methods, 43, 666-678, 2011), disparities that raise the question of inaccuracy in mediation reports. To quantify this inaccuracy, articles reporting standard use of single-mediator models in three high-impact journals in personality and social psychology during 2011 were examined. More than 24% of the 156 models coded failed an equivalence test (i.e., ab = c - c'), suggesting that one or more regression coefficients in mediation analyses are frequently misreported. The authors cite common sources of errors, provide recommendations for enhanced accuracy in reports of single-mediator models, and discuss implications for alternative methods.
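
    The identity the title references (in OLS single-mediator models the indirect effect ab equals the total-minus-direct difference c - c') is easy to verify numerically; a statsmodels sketch on simulated data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 500
        x = rng.normal(size=n)
        m = 0.5*x + rng.normal(size=n)            # mediator model: a path
        y = 0.4*m + 0.2*x + rng.normal(size=n)    # outcome model: b and c' paths

        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]            # a
        c = sm.OLS(y, sm.add_constant(x)).fit().params[1]            # total effect c
        both = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
        c_prime, b = both.params[1], both.params[2]                  # c' and b

        print(f"ab = {a*b:.4f}, c - c' = {c - c_prime:.4f}")  # equal up to rounding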
