2011-01-01
Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well-known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
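The standard Q and I2 statistics discussed in this abstract can be computed directly from study-level effect estimates and standard errors. Below is a minimal illustrative sketch (not the authors' software; the input numbers are invented):

```python
# Minimal sketch of Cochran's Q and Higgins' I^2 for an
# inverse-variance meta-analysis (illustrative data only).
import numpy as np
from scipy import stats

def q_and_i2(effects, ses):
    """Return Cochran's Q, its p-value, and I^2 (%)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2          # inverse-variance weights
    theta_fixed = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (effects - theta_fixed) ** 2)         # Cochran's Q
    df = len(effects) - 1
    p = stats.chi2.sf(q, df)                             # Q ~ chi^2(df) under homogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 as a percentage
    return q, p, i2

q, p, i2 = q_and_i2([0.10, 0.30, -0.05, 0.45], [0.08, 0.10, 0.12, 0.09])
print(f"Q = {q:.2f}, p = {p:.3f}, I2 = {i2:.1f}%")
```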
Tang, Qi-Yi; Zhang, Chuan-Xi
2013-04-01
A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.
Test 6, Test 7, and Gas Standard Analysis Results
NASA Technical Reports Server (NTRS)
Perez, Horacio, III
2007-01-01
This viewgraph presentation shows results of analyses on odor, toxic off-gassing, and gas standards. The topics include: 1) Statistical Analysis Definitions; 2) Odor Analysis Results, NASA Standard 6001 Test 6; 3) Toxic Off-gassing Analysis Results, NASA Standard 6001 Test 7; and 4) Gas Standard Results, NASA Standard 6001 Test 7.
Using DEWIS and R for Multi-Staged Statistics e-Assessments
ERIC Educational Resources Information Center
Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.
2016-01-01
We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
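The review mentions a practical range-based approximation for a missing SD and a quartile-based formula for a missing mean. The sketch below shows widely cited approximations of this kind (e.g., range/4 and the (q1 + median + q3)/3 rule); these are assumptions about the general technique, not necessarily the exact formulas the review evaluated:

```python
# Hedged, illustrative approximations for missing summary statistics.
def sd_from_range(minimum, maximum):
    """Rough SD approximation from the range (suitable for moderate n)."""
    return (maximum - minimum) / 4.0

def sd_from_iqr(q1, q3):
    """SD approximation from the interquartile range, assuming normality."""
    return (q3 - q1) / 1.35  # 1.35 ~ 2 * 0.6745, the normal IQR/SD ratio

def mean_from_quartiles(q1, median, q3):
    """Mean approximation from the median and quartiles."""
    return (q1 + median + q3) / 3.0

print(sd_from_range(2.0, 14.0))            # -> 3.0
print(sd_from_iqr(4.0, 9.4))               # -> 4.0
print(mean_from_quartiles(4.0, 6.0, 9.4))  # -> 6.47
```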
COGNATE: comparative gene annotation characterizer.
Wilbrandt, Jeanne; Misof, Bernhard; Niehuis, Oliver
2017-07-17
The comparison of gene and genome structures across species has the potential to reveal major trends of genome evolution. However, such a comparative approach is currently hampered by a lack of standardization (e.g., Elliott TA, Gregory TR, Philos Trans Royal Soc B: Biol Sci 370:20140331, 2015). For example, testing the hypothesis that the total amount of coding sequences is a reliable measure of potential proteome diversity (Wang M, Kurland CG, Caetano-Anollés G, PNAS 108:11954, 2011) requires the application of standardized definitions of coding sequence and genes to create both comparable and comprehensive data sets and corresponding summary statistics. However, such standard definitions either do not exist or are not consistently applied. These circumstances call for a standard at the descriptive level using a minimum of parameters as well as an undeviating use of standardized terms, and for software that infers the required data under these strict definitions. The acquisition of a comprehensive, descriptive, and standardized set of parameters and summary statistics for genome publications and further analyses can thus greatly benefit from the availability of an easy-to-use standard tool. We developed a new open-source command-line tool, COGNATE (Comparative Gene Annotation Characterizer), which uses a given genome assembly and its annotation of protein-coding genes for a detailed description of the respective gene and genome structure parameters. Additionally, we revised the standard definitions of gene and genome structures and provide the definitions used by COGNATE as a working draft suggestion for further reference. Complete parameter lists and summary statistics are inferred using this set of definitions to allow downstream analyses and to provide an overview of the genome and gene repertoire characteristics. COGNATE is written in Perl and freely available at the ZFMK homepage ( https://www.zfmk.de/en/COGNATE ) and on GitHub ( https://github.com/ZFMK/COGNATE ). The tool COGNATE allows comparing genome assemblies and structural elements on multiple levels (e.g., scaffold or contig sequence, gene). It clearly enhances comparability between analyses. Thus, COGNATE can provide the important standardization of both genome and gene structure parameter disclosure as well as data acquisition for future comparative analyses. With the establishment of comprehensive descriptive standards and the extensive availability of genomes, an encompassing database will become possible.
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviations of the F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. The jitter, shimmer, HNR, standard deviation of F0, and standard deviation of the F2 frequency were also statistically different between groups for both genders. In the male data, differences were also found in the F1 and F2 frequency values and in the standard deviation of the F1 frequency. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
WAIS-IV Subtest Covariance Structure: Conceptual and Statistical Considerations
ERIC Educational Resources Information Center
Ward, L. Charles; Bergman, Maria A.; Hebert, Katina R.
2012-01-01
D. Wechsler (2008b) reported confirmatory factor analyses (CFAs) with standardization data (ages 16-69 years) for 10 core and 5 supplemental subtests from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). Analyses of the 15 subtests supported 4 hypothesized oblique factors (Verbal Comprehension, Working Memory, Perceptual Reasoning,…
DOT National Transportation Integrated Search
2011-03-01
"NHTSA selected the vehicle footprint (the measure of a vehicles wheelbase multiplied by its average track width) as the attribute upon which to base the CAFE standards for model year 2012-2016 passenger cars and light trucks. These standards are ...
Formalizing the definition of meta-analysis in Molecular Ecology.
ArchMiller, Althea A; Bauer, Eric F; Koch, Rebecca E; Wijayawardena, Bhagya K; Anil, Ammu; Kottwitz, Jack J; Munsterman, Amelia S; Wilson, Alan E
2015-08-01
Meta-analysis, the statistical synthesis of pertinent literature to develop evidence-based conclusions, is relatively new to the field of molecular ecology, with the first meta-analysis published in the journal Molecular Ecology in 2003 (Slate & Phua 2003). The goal of this article is to formalize the definition of meta-analysis for the authors, editors, reviewers and readers of Molecular Ecology by completing a review of the meta-analyses previously published in this journal. We also provide a brief overview of the many components required for meta-analysis with a more specific discussion of the issues related to the field of molecular ecology, including the use and statistical considerations of Wright's FST and its related analogues as effect sizes in meta-analysis. We performed a literature review to identify articles published as 'meta-analyses' in Molecular Ecology, which were then evaluated by at least two reviewers. We specifically targeted Molecular Ecology publications because as a flagship journal in this field, meta-analyses published in Molecular Ecology have the potential to set the standard for meta-analyses in other journals. We found that while many of these reviewed articles were strong meta-analyses, others failed to follow standard meta-analytical techniques. One of these unsatisfactory meta-analyses was in fact a secondary analysis. Other studies attempted meta-analyses but lacked the fundamental statistics that are considered necessary for an effective and powerful meta-analysis. By drawing attention to the inconsistency of studies labelled as meta-analyses, we emphasize the importance of understanding the components of traditional meta-analyses to fully embrace the strengths of quantitative data synthesis in the field of molecular ecology. © 2015 John Wiley & Sons Ltd.
Henderson, G; Fahey, T; McGuire, W
2007-10-17
Preterm infants are often growth-restricted at hospital discharge. Feeding infants after hospital discharge with nutrient-enriched formula rather than standard term formula might facilitate "catch-up" growth and improve development. To determine the effect of feeding nutrient-enriched formula compared with standard term formula on growth and development for preterm infants following hospital discharge. The standard search strategy of the Cochrane Neonatal Review Group was used. This included searches of the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2007), MEDLINE (1966 - May 2007), EMBASE (1980 - May 2007), CINAHL (1982 - May 2007), conference proceedings, and previous reviews. Randomised or quasi-randomised controlled trials that compared feeding preterm infants following hospital discharge with nutrient-enriched formula versus standard term formula were eligible. Data were extracted using the standard methods of the Cochrane Neonatal Review Group, with separate evaluation of trial quality and data extraction by two authors, and synthesis of data using weighted mean difference and a fixed effects model for meta-analysis. Seven trials were found that were eligible for inclusion. These recruited a total of 631 infants and were generally of good methodological quality. The trials found little evidence that feeding with nutrient-enriched formula milk affected growth and development. Because of differences in the way individual trials measured and presented outcomes, data synthesis was limited. Growth data from two trials found that, at six months post-term, infants fed with nutrient-enriched formula had statistically significantly lower weights [weighted mean difference: -601 (95% confidence interval -1028, -174) grams], lengths [-18.8 (-30.0, -7.6) millimetres], and head circumferences [-10.2 (-18.0, -2.4) millimetres] than infants fed standard term formula. At 12 to 18 months post-term, meta-analyses of data from three trials did not find any statistically significant differences in growth parameters. However, examination of these meta-analyses demonstrated statistical heterogeneity. Meta-analyses of data from two trials did not reveal a statistically significant difference in Bayley Mental Development or Psychomotor Development Indices. There are not yet any data on growth or development through later childhood. The available data do not provide strong evidence that feeding preterm infants following hospital discharge with nutrient-enriched formula compared with standard term formula affects growth rates or development up to 18 months post-term.
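The review synthesizes weighted mean differences under a fixed effects model. A minimal sketch of inverse-variance fixed-effect pooling follows (illustrative numbers, not the trial data):

```python
# Fixed-effect (inverse-variance) pooling of mean differences;
# illustrative values only.
import numpy as np

def fixed_effect_md(mean_diffs, ses):
    """Pooled mean difference and 95% CI under a fixed-effect model."""
    md = np.asarray(mean_diffs, float)
    w = 1.0 / np.asarray(ses, float) ** 2   # weight = 1 / SE^2
    pooled = np.sum(w * md) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, ci = fixed_effect_md([-550.0, -650.0], [300.0, 320.0])
print(f"pooled MD = {pooled:.0f} g, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")
```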
Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.
Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V
2018-04-01
A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
Applying Beliefs and Resources Frameworks to the Psychometric Analyses of an Epistemology Survey
ERIC Educational Resources Information Center
Yerdelen-Damar, Sevda; Elby, Andrew; Eryilmaz, Ali
2012-01-01
This study explored how researchers' views about the form of students' epistemologies influence how the researchers develop and refine surveys and how they interpret survey results. After running standard statistical analyses on 505 physics students' responses to the Turkish version of the Maryland Physics Expectations-II survey, probing students'…
Coordinate based random effect size meta-analysis of neuroimaging studies.
Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J
2017-06-01
Low power in neuroimaging studies can make them difficult to interpret, and coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by a random-effects meta-analysis of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
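The FCDR described above builds on the false discovery rate. A minimal sketch of the underlying Benjamini-Hochberg step-up procedure (not the ClusterZ implementation itself) is:

```python
# Benjamini-Hochberg step-up FDR procedure; illustrative p-values.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values rejected at FDR level q."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)                       # indices of ascending p-values
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m        # BH thresholds q*k/m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                    # reject the k smallest p-values
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30]))
```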
A Comparison of Readability in Science-Based Texts: Implications for Elementary Teachers
ERIC Educational Resources Information Center
Gallagher, Tiffany; Fazio, Xavier; Ciampa, Katia
2017-01-01
Science curriculum standards were mapped onto various texts (literacy readers, trade books, online articles). Statistical analyses highlighted the inconsistencies among readability formulae for Grades 2-6 levels of the standards. There was a lack of correlation among the readability measures, and also when comparing different text sources. Online…
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Polikoff, Morgan S.
2014-01-01
An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…
Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.
2009-01-01
In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212
Analysis and meta-analysis of single-case designs: an introduction.
Shadish, William R
2014-04-01
The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-11-01
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
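A standard transformation for such closed data is Aitchison's centred log-ratio. The sketch below illustrates the general technique with invented waste percentages; the paper's own transformation choices may differ:

```python
# Centred log-ratio (clr) transform for compositional data.
# Assumes strictly positive parts (zeros must be replaced first).
import numpy as np

def clr(composition):
    """Centred log-ratio transform of a composition summing to 1 or 100."""
    x = np.asarray(composition, float)
    x = x / x.sum()                      # close the composition to sum 1
    g = np.exp(np.mean(np.log(x)))       # geometric mean of the parts
    return np.log(x / g)

waste = [45.0, 30.0, 15.0, 10.0]         # e.g., % organic, paper, plastic, other
print(clr(waste))                        # clr values sum to ~0 by construction
```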
Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T
2016-12-20
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
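The DerSimonian and Laird procedure used above as the frequentist comparator has a simple closed form. A minimal sketch with invented data:

```python
# DerSimonian-Laird method-of-moments estimator of the
# between-study variance tau^2; illustrative data only.
import numpy as np

def dersimonian_laird_tau2(effects, ses):
    """Estimate tau^2 from study effects and standard errors."""
    y = np.asarray(effects, float)
    w = 1.0 / np.asarray(ses, float) ** 2
    theta = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - theta) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - df) / c)              # truncate at zero

print(dersimonian_laird_tau2([0.2, 0.5, -0.1, 0.4], [0.10, 0.12, 0.15, 0.11]))
```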
Statistical study of air pollutant concentrations via generalized gamma distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marani, A.; Lavagnini, I.; Buttazzoni, C.
1986-11-01
This paper deals with modeling observed frequency distributions of air quality data measured in the area of Venice, Italy. The paper discusses the application of the generalized gamma distribution (ggd), which has not been commonly applied to air quality data even though it embodies most distribution models used for air quality analyses. The approach yields important simplifications for statistical analyses. A comparison among the ggd and other relevant models (standard gamma, Weibull, lognormal), carried out on daily sulfur dioxide concentrations in the area of Venice, underlines the efficiency of ggd models in portraying experimental data.
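A generalised gamma distribution can be fitted to concentration data with SciPy, as sketched below on simulated stand-in values; note that scipy.stats.gengamma's (a, c) parameterisation may differ from the ggd form used in the paper:

```python
# Fitting a generalised gamma distribution to (simulated) daily
# concentration data; not the paper's Venice dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conc = rng.gamma(shape=2.0, scale=15.0, size=365)   # stand-in daily SO2 values

# Fit with location fixed at zero, since concentrations are non-negative.
a, c, loc, scale = stats.gengamma.fit(conc, floc=0)
print(f"a = {a:.2f}, c = {c:.2f}, scale = {scale:.1f}")

# Compare fitted and empirical upper percentiles (often the regulatory focus).
print(stats.gengamma.ppf(0.98, a, c, loc=loc, scale=scale), np.percentile(conc, 98))
```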
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of type I error, and two other methods (trial sequential analysis and sequential meta-analysis) control for type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
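The type I error inflation described above is easy to demonstrate by simulation. The sketch below (not from the paper) repeatedly re-tests a cumulative fixed-effect meta-analysis at alpha = 0.05 as null-effect trials accrue:

```python
# Simulation of type I error inflation under repeated meta-analysis
# updates; all trials have a true effect of zero.
import numpy as np

rng = np.random.default_rng(42)
n_meta, max_trials, alpha = 2000, 10, 0.05
any_false_positive = 0
for _ in range(n_meta):
    effects = rng.normal(0.0, 0.1, size=max_trials)   # true effect = 0, SE = 0.1
    w = np.full(max_trials, 1.0 / 0.1 ** 2)
    significant = False
    for k in range(1, max_trials + 1):                # re-test after each new trial
        pooled = np.sum(w[:k] * effects[:k]) / np.sum(w[:k])
        z = pooled / np.sqrt(1.0 / np.sum(w[:k]))
        if abs(z) > 1.96:
            significant = True
            break
    any_false_positive += significant

print(f"family-wise type I error over {max_trials} updates: "
      f"{any_false_positive / n_meta:.3f} (nominal {alpha})")
```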
Brown, Geoffrey W.; Sandstrom, Mary M.; Preston, Daniel N.; ...
2014-11-17
In this study, the Integrated Data Collection Analysis (IDCA) program has conducted a proficiency test for small-scale safety and thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results from this test for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Class 5 Type II standard. The material was tested as a well-characterized standard several times during the proficiency test to assess differences among participants and the range of results that may arise for well-behaved explosive materials.
How can my research paper be useful for future meta-analyses on forest restoration practices?
Enrique Andivia; Pedro Villar‑Salvador; Juan A. Oliet; Jaime Puertolas; R. Kasten Dumroese
2018-01-01
Statistical meta-analysis is a powerful and useful tool to quantitatively synthesize the information conveyed in published studies on a particular topic. It allows identifying and quantifying overall patterns and exploring causes of variation. The inclusion of published works in meta-analyses requires, however, a minimum quality standard of the reported data and...
Thompson, Ronald E.; Hoffman, Scott A.
2006-01-01
A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
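Regional regression of a flow statistic on basin characteristics is typically fitted on log-transformed values. The sketch below uses hypothetical numbers (not the study's data) with drainage area as the single predictor:

```python
# Log-log OLS regression of a flow statistic on a basin characteristic;
# all values are hypothetical stand-ins.
import numpy as np

drainage_area = np.array([12.0, 35.0, 60.0, 110.0, 240.0])     # mi^2, hypothetical
mean_annual_flow = np.array([18.0, 55.0, 90.0, 170.0, 390.0])  # ft^3/s, hypothetical

# Ordinary least squares on log10 values: log10(Q) = b0 + b1 * log10(A)
X = np.column_stack([np.ones_like(drainage_area), np.log10(drainage_area)])
y = np.log10(mean_annual_flow)
b, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = 10 ** (X @ b)                     # back-transform to flow units
print(f"log10(Q) = {b[0]:.3f} + {b[1]:.3f} log10(A); predictions: {np.round(pred, 1)}")
```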
ERIC Educational Resources Information Center
Schiel, Jeff L.; King, Jason E.
Analyses of data from operational course placement systems are subject to the effects of truncation; students with low placement test scores may enroll in a remedial course, rather than a standard-level course, and therefore will not have outcome data from the standard course. In "soft" truncation, some (but not all) students who score…
Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS
Archambault, S.; Archer, A.; Benbow, W.; ...
2017-04-05
We present constraints on the annihilation cross section of weakly interacting massive particle (WIMP) dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.
Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J.; Grotta, James C.; Broderick, Joseph; Jovin, Tudor G.; Nogueira, Raul G.; Elm, Jordan; Graves, Todd; Berry, Scott; Lees, Kennedy R.; Barreto, Andrew D.; Saver, Jeffrey L.
2015-01-01
Background and Purpose Although the modified Rankin Scale (mRS) is the most commonly employed primary endpoint in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size challenging to interpret when analyzed ordinally. Weighting the seven Rankin levels by utilities may improve scale interpretability while preserving statistical power. Methods A utility weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient centered) and person-tradeoff (clinician centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Results Utility values were: mRS 0, 1.0; mRS 1, 0.91; mRS 2, 0.76; mRS 3, 0.65; mRS 4, 0.33; mRS 5 and 6, 0. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in six of eight unidirectional effect trials, while dichotomous analyses were statistically significant in two to four of eight. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. Conclusion A utility-weighted mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. PMID:26138130
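Applying the utility weights reported above is straightforward. The sketch below maps mRS outcome counts from a hypothetical two-arm trial to utilities and compares arm means; the t-test is used purely for illustration and may not match the trials' actual analyses:

```python
# Utility-weighted mRS sketch: utility weights from the abstract,
# patient counts hypothetical.
import numpy as np
from scipy import stats

utilities = np.array([1.0, 0.91, 0.76, 0.65, 0.33, 0.0, 0.0])  # mRS 0..6

# Hypothetical patient counts per mRS level in treatment and control arms.
treat = np.repeat(np.arange(7), [60, 70, 50, 40, 40, 20, 20])
ctrl = np.repeat(np.arange(7), [40, 55, 55, 50, 50, 25, 25])

u_treat, u_ctrl = utilities[treat], utilities[ctrl]
diff = u_treat.mean() - u_ctrl.mean()
t, p = stats.ttest_ind(u_treat, u_ctrl)
print(f"mean utility difference = {diff:.3f}, p = {p:.4f}")
```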
Barry, Samantha J; Pham, Tran N; Borman, Phil J; Edwards, Andrew J; Watson, Simon A
2012-01-27
The DMAIC (Define, Measure, Analyse, Improve and Control) framework and associated statistical tools have been applied to both identify and reduce variability observed in a quantitative ¹⁹F solid-state NMR (SSNMR) analytical method. The method had been developed to quantify levels of an additional polymorph (Form 3) in batches of an active pharmaceutical ingredient (API), where Form 1 is the predominant polymorph. In order to validate analyses of the polymorphic form, a single batch of API was used as a standard each time the method was used. The level of Form 3 in this standard was observed to gradually increase over time, the effect not being immediately apparent due to method variability. In order to determine the cause of this unexpected increase and to reduce method variability, a risk-based statistical investigation was performed to identify potential factors which could be responsible for these effects. Factors identified by the risk assessment were investigated using a series of designed experiments to gain a greater understanding of the method. The increase of the level of Form 3 in the standard was primarily found to correlate with the number of repeat analyses, an effect not previously reported in SSNMR literature. Differences in data processing (phasing and linewidth) were found to be responsible for the variability in the method. After implementing corrective actions the variability was reduced such that the level of Form 3 was within an acceptable range of ±1% w/w in fresh samples of API. Copyright © 2011. Published by Elsevier B.V.
Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael
2013-12-01
During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset, a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hutton, Brian; Wolfe, Dianna; Moher, David; Shamseer, Larissa
2017-05-01
Research waste has received considerable attention from the biomedical community. One noteworthy contributor is incomplete reporting in research publications. When detailing statistical methods and results, ensuring analytic methods and findings are completely documented improves transparency. For publications describing randomised trials and systematic reviews, guidelines have been developed to facilitate complete reporting. This overview summarises aspects of statistical reporting in trials and systematic reviews of health interventions. A narrative approach to summarise features regarding statistical methods and findings from reporting guidelines for trials and reviews was taken. We aim to enhance familiarity of statistical details that should be reported in biomedical research among statisticians and their collaborators. We summarise statistical reporting considerations for trials and systematic reviews from guidance documents including the Consolidated Standards of Reporting Trials (CONSORT) Statement for reporting of trials, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement for trial protocols, the Statistical Analyses and Methods in the Published Literature (SAMPL) Guidelines for statistical reporting principles, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement for systematic reviews and PRISMA for Protocols (PRISMA-P). Considerations regarding sharing of study data and statistical code are also addressed. Reporting guidelines provide researchers with minimum criteria for reporting. If followed, they can enhance research transparency and contribute to improving the quality of biomedical publications. Authors should employ these tools for planning and reporting of their research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
ERIC Educational Resources Information Center
Gidey, Mu'uz
2015-01-01
This action research is carried out in a practical classroom setting to devise an innovative way of administering tutorial classes to improve students' learning competence with particular reference to gendered test scores. Before-after analyses of test score means and standard deviations, along with t-statistical tests of hypotheses of second…
Conceptual and statistical problems associated with the use of diversity indices in ecology.
Barrantes, Gilbert; Sandoval, Luis
2009-09-01
Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all information needed to answer even a simple question. However, multivariate analyses could be used instead of diversity indices, such as cluster analyses or multiple regressions. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one species or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved to be robust to standardize all samples to a common size. Even the simplest method of reporting the number of species per taxonomic category possibly provides more information than a diversity index value.
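The rarefaction method the authors recommend has a simple closed form for individual-based samples (Hurlbert's formulation). A minimal sketch with an invented community:

```python
# Individual-based rarefaction: expected species richness in a random
# subsample of n individuals (Hurlbert's formulation); invented data.
from math import comb

def rarefied_richness(abundances, n):
    """Expected number of species in a random subsample of n individuals."""
    N = sum(abundances)
    if n > N:
        raise ValueError("subsample larger than the sample")
    # Each species contributes the probability it appears in the subsample.
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in abundances)

community = [50, 25, 10, 5, 5, 3, 1, 1]   # individuals per species
print(round(rarefied_richness(community, 30), 2))
```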
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
Glass-Kaastra, Shiona K.; Pearl, David L.; Reid-Smith, Richard J.; McEwen, Beverly; Slavic, Durda; McEwen, Scott A.; Fairles, Jim
2014-01-01
Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection. PMID:24688133
Huedo-Medina, Tania B; Garcia, Marissa; Bihuniak, Jessica D; Kenny, Anne; Kerstetter, Jane
2016-03-01
Several systematic reviews/meta-analyses published within the past 10 y have examined the associations of Mediterranean-style diets (MedSDs) on cardiovascular disease (CVD) risk. However, these reviews have not been evaluated for satisfying contemporary methodologic quality standards. This study evaluated the quality of recent systematic reviews/meta-analyses on MedSD and CVD risk outcomes by using an established methodologic quality scale. The relation between review quality and impact per publication value of the journal in which the article had been published was also evaluated. To assess compliance with current standards, we applied a modified version of the Assessment of Multiple Systematic Reviews (AMSTARMedSD) quality scale to systematic reviews/meta-analyses retrieved from electronic databases that had met our selection criteria: 1) used systematic or meta-analytic procedures to review the literature, 2) examined MedSD trials, and 3) had MedSD interventions independently or combined with other interventions. Reviews completely satisfied from 8% to 75% of the AMSTARMedSD items (mean ± SD: 31.2% ± 19.4%), with those published in higher-impact journals having greater quality scores. At a minimum, 60% of the 24 reviews did not disclose full search details or apply appropriate statistical methods to combine study findings. Only 5 of the reviews included participant or study characteristics in their analyses, and none evaluated MedSD diet characteristics. These data suggest that current meta-analyses/systematic reviews evaluating the effect of MedSD on CVD risk do not fully comply with contemporary methodologic quality standards. As a result, there are more research questions to answer to enhance our understanding of how MedSD affects CVD risk or how these effects may be modified by the participant or MedSD characteristics. To clarify the associations between MedSD and CVD risk, future meta-analyses and systematic reviews should not only follow methodologic quality standards but also include more statistical modeling results when data allow. © 2016 American Society for Nutrition.
Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.
Counsell, Alyssa; Harlow, Lisa L
2017-05-01
With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported data complications such as missing data and violations of statistical assumptions. Strengths of and areas needing improvement for reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
Recent evaluations of crack-opening-area in circumferentially cracked pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, S.; Brust, F.; Ghadiali, N.
1997-04-01
Leak-before-break (LBB) analyses for circumferentially cracked pipes are currently being conducted in the nuclear industry to justify elimination of pipe whip restraints and jet shields which are present because of the expected dynamic effects from pipe rupture. The application of the LBB methodology frequently requires calculation of leak rates. The leak rates depend on the crack-opening area of the through-wall crack in the pipe. In addition to LBB analyses which assume a hypothetical flaw size, there is also interest in the integrity of actual leaking cracks corresponding to current leakage detection requirements in NRC Regulatory Guide 1.45, or for assessing temporary repair of Class 2 and 3 pipes that have leaks as are being evaluated in ASME Section XI. The objectives of this study were to review, evaluate, and refine current predictive models for performing crack-opening-area analyses of circumferentially cracked pipes. The results from twenty-five full-scale pipe fracture experiments, conducted in the Degraded Piping Program, the International Piping Integrity Research Group Program, and the Short Cracks in Piping and Piping Welds Program, were used to verify the analytical models. Standard statistical analyses were performed to assess quantitatively the accuracy of the predictive models. The evaluation also involved finite element analyses for determining the crack-opening profile often needed to perform leak-rate calculations.
Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume
2014-06-28
Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.
1980-12-01
[Garbled excerpt from a 1980 technical report on acoustic data analysis. Recoverable content: perceived noisiness values are derived from a formula referenced to sound pressure level in decibels, assuming a frequency of 1000 Hz; the listed analyses include octave and third-octave band analysis, perceived noise level analysis, acoustic weighting networks, and basic statistical analyses (mean, variance, standard deviation calculation).]
Statistical Data Editing in Scientific Articles.
Habibzadeh, Farrokh
2017-07-01
Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on statistical analyses of the data. Numbers should be reported with appropriate precisions. Standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for description of normally and non-normally distributed data, respectively. If possible, it is better to report 95% confidence intervals (CIs) for statistics, at least for main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors should develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
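The SEM-versus-SD point above is easy to demonstrate numerically. A minimal sketch with simulated skewed data (SEM describes the precision of the mean, not the dispersion of the data, and skewed data are better summarised by median and IQR):

```python
# SD vs SEM, and mean (SD) vs median (IQR), on simulated skewed data.
import numpy as np

rng = np.random.default_rng(1)
skewed = rng.lognormal(mean=1.0, sigma=0.8, size=200)

sd = skewed.std(ddof=1)
sem = sd / np.sqrt(len(skewed))            # much smaller than SD; not dispersion
q1, med, q3 = np.percentile(skewed, [25, 50, 75])

print(f"mean (SD)    : {skewed.mean():.2f} ({sd:.2f})")
print(f"SEM          : {sem:.2f}")
print(f"median (IQR) : {med:.2f} ({q1:.2f}-{q3:.2f})")
```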
Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P
2008-05-20
Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that will provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficients was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralising the analyses in a single laboratory is no longer a required condition for comparing samples seized in different countries. This allows collaboration, but also, jurisdictional control over data.
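The best-performing combination reported above (the N+S pre-treatment followed by cosine or Pearson similarity) is sketched below on hypothetical peak areas, not the casework data:

```python
# N+S pre-treatment (each peak area divided by that peak's standard
# deviation over the whole data set) followed by cosine and Pearson
# similarity between chromatographic profiles; hypothetical areas.
import numpy as np

profiles = np.array([                      # rows = samples, cols = peak areas
    [120.0, 40.0, 5.0, 300.0],
    [118.0, 42.0, 6.0, 310.0],
    [60.0, 80.0, 20.0, 150.0],
])

pretreated = profiles / profiles.std(axis=0, ddof=1)   # N+S pre-treatment

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"cosine  (linked pair): {cosine(pretreated[0], pretreated[1]):.4f}")
print(f"cosine  (non-linked) : {cosine(pretreated[0], pretreated[2]):.4f}")
print(f"pearson (linked pair): {np.corrcoef(pretreated[0], pretreated[1])[0, 1]:.4f}")
```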
Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A
2015-02-22
The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. Trial registration: ISRCTN70923932.
Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J; Grotta, James C; Broderick, Joseph; Jovin, Tudor G; Nogueira, Raul G; Elm, Jordan J; Graves, Todd; Berry, Scott; Lees, Kennedy R; Barreto, Andrew D; Saver, Jeffrey L
2015-08-01
Although the modified Rankin Scale (mRS) is the most commonly used primary end point in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size is challenging to interpret when analyzed ordinally. Weighting the 7 Rankin levels by utilities may improve scale interpretability while preserving statistical power. A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient centered) and person-tradeoff (clinician centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Utility values were 1.0 for mRS level 0; 0.91 for mRS level 1; 0.76 for mRS level 2; 0.65 for mRS level 3; 0.33 for mRS level 4; 0 for mRS level 5; and 0 for mRS level 6. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in 6 of 8 unidirectional effect trials, whereas dichotomous analyses were statistically significant in 2 to 4 of 8. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results, whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. A UW-mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. © 2015 American Heart Association, Inc.
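Using the utility values quoted in the abstract, the UW-mRS analysis reduces to comparing mean utilities between arms. A minimal R sketch with hypothetical arm distributions (the counts are invented):

```r
# Utility weights for mRS levels 0-6, as derived in the study
utility <- c(1.00, 0.91, 0.76, 0.65, 0.33, 0.00, 0.00)

# Hypothetical counts of patients at each mRS level (0-6) in two arms
treat   <- c(60, 55, 40, 30, 25, 15, 20)
control <- c(45, 50, 40, 35, 35, 20, 25)

# Mean utility difference between arms (the UW-mRS treatment effect)
uw_mean <- function(counts) sum(counts * utility) / sum(counts)
uw_mean(treat) - uw_mean(control)

# Simple two-sample comparison on the patient-level utilities
t.test(rep(utility, treat), rep(utility, control))
```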
OSPAR standard method and software for statistical analysis of beach litter data.
Schulz, Marcus; van Loon, Willem; Fleet, David M; Baggelaar, Paul; van der Meulen, Eit
2017-09-15
The aim of this study is to develop standard statistical methods and software for the analysis of beach litter data. The optimal ensemble of statistical methods comprises the Mann-Kendall trend test, the Theil-Sen slope estimation, the Wilcoxon step trend test and basic descriptive statistics. The application of Litter Analyst, a tailor-made software package for analysing the results of beach litter surveys, to OSPAR beach litter data from seven beaches bordering on the south-eastern North Sea revealed 23 significant trends in the abundances of beach litter types for the period 2009-2014. Litter Analyst also revealed a large variation in the abundance of litter types between beaches. To reduce the effects of spatial variation, trend analysis of beach litter data can most effectively be performed at the beach or national level. Spatial aggregation of beach litter data within a region is possible, but resulted in a considerable reduction in the number of significant trends. Copyright © 2017 Elsevier Ltd. All rights reserved.
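The named tests are easy to reproduce outside Litter Analyst. A minimal R sketch on invented annual counts of one litter type: the Mann-Kendall test is equivalent to Kendall's tau between the series and time, the Theil-Sen slope is the median of all pairwise slopes, and a Wilcoxon test between the two halves of the series gives a simple step-trend check:

```r
year  <- 2009:2014
count <- c(310, 280, 295, 240, 225, 200)  # invented annual counts of one litter type

# Mann-Kendall trend test: equivalent to Kendall's tau between series and time
cor.test(count, year, method = "kendall")

# Theil-Sen slope: median of all pairwise slopes (items per year)
idx <- combn(seq_along(year), 2)
median((count[idx[2, ]] - count[idx[1, ]]) / (year[idx[2, ]] - year[idx[1, ]]))

# A simple step trend check: Wilcoxon test between the two halves of the series
wilcox.test(count[1:3], count[4:6])
```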
Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer
2017-09-05
Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
ERIC Educational Resources Information Center
Salamy, A.
1981-01-01
Determines the frequency distribution of Brainstem Auditory Evoked Potential variables (BAEP) for premature babies at different stages of development--normal newborns, infants, young children, and adults. The author concludes that the assumption of normality underlying most "standard" statistical analyses can be met for many BAEP…
School Libraries and Science Achievement: A View from Michigan's Middle Schools
ERIC Educational Resources Information Center
Mardis, Marcia
2007-01-01
If strong school library media centers (SLMCs) positively impact middle school student reading achievement, as measured on standardized tests, are they also beneficial for middle school science achievement? To answer this question, the researcher built upon the statistical analyses used in previous school library impact studies with qualitative…
Rognoni, Carla; Tarricone, Rosanna
2017-01-10
Intermittent catheterisation is the method of choice for the management of bladder dysfunctions. Different urinary catheters are available, but there is conflicting evidence on which type of catheter is best. The present study provides an objective evaluation of the clinical effectiveness of different subsets of urinary catheters. A systematic literature review was performed for published RCTs regarding hydrophilic coated and PVC (standard) catheters for intermittent catheterisation. Separate meta-analyses were conducted to combine data on frequencies of urinary tract infections (UTIs) and haematuria. Two separate analyses were performed, including or excluding reused standard catheters. Seven studies were eligible for inclusion in the review. The meta-analyses exploring UTI frequencies showed a lower risk ratio associated with hydrophilic catheters in comparison to standard ones (RR = 0.84; 95% CI, 0.75-0.94; p = 0.003). Results for the "reuse" scenario were consistent with those of the "single-use" scenario in terms of frequency of UTIs. The meta-analyses exploring haematuria were not able to demonstrate any statistically significant difference between hydrophilic and standard catheters. The findings confirm previously reported benefits of hydrophilic catheters, but a broader evaluation that also takes into account patient preferences, compliance with therapy, quality of life and costs would be needed to assess the economic sustainability of these advanced devices.
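A fixed-effect pooling of risk ratios of the kind reported above can be sketched with the metafor package; the 2x2 counts below are invented, since the abstract does not give trial-level data:

```r
library(metafor)

# Invented 2x2 counts from three trials: ai/bi = UTI events/non-events with
# hydrophilic catheters, ci/di = events/non-events with standard PVC catheters
dat <- data.frame(ai = c(30, 25, 40), bi = c(70, 80, 65),
                  ci = c(38, 30, 45), di = c(62, 75, 55))

# Log risk ratios and their sampling variances, then fixed-effect pooling
es  <- escalc(measure = "RR", ai = ai, bi = bi, ci = ci, di = di, data = dat)
res <- rma(yi, vi, data = es, method = "FE")
predict(res, transf = exp)  # pooled RR with 95% CI on the ratio scale
```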
Statistics for Radiology Research.
Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua
2017-02-01
Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes, especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.
NASA Technical Reports Server (NTRS)
Ziff, Howard L; Rathert, George A; Gadeberg, Burnett L
1953-01-01
Standard air-to-air-gunnery tracking runs were conducted with F-51H, F8F-1, F-86A, and F-86E airplanes equipped with fixed gunsights. The tracking performances were documented over the normal operating range of altitude, Mach number, and normal acceleration factor for each airplane. The sources of error were studied by statistical analyses of the aim wander.
Gruber, Bernd; Unmack, Peter J; Berry, Oliver F; Georges, Arthur
2018-05-01
Although vast technological advances have been made and genetic software packages are growing in number, it is not a trivial task to analyse SNP data. We announce a new r package, dartr, enabling the analysis of single nucleotide polymorphism data for population genomic and phylogenomic applications. dartr provides user-friendly functions for data quality control and marker selection, and permits rigorous evaluations of conformation to Hardy-Weinberg equilibrium, gametic-phase disequilibrium and neutrality. The package reports standard descriptive statistics, permits exploration of patterns in the data through principal components analysis, computes standard F-statistics, conducts basic phylogenetic analyses, population assignment and isolation by distance, and exports data to a variety of commonly used downstream applications (e.g., newhybrids, faststructure and phylogeny applications) outside of the r environment. The package serves two main purposes: first, it offers a user-friendly approach that lowers the hurdle to analysing such data; therefore, the package comes with a detailed tutorial targeted at the r beginner to allow data analysis without requiring deep knowledge of r. Second, we use a single, well-established format, genlight from the adegenet package, as input for all our functions to avoid data reformatting. By strictly using the genlight format, we hope to promote this format as the de facto standard of future software developments and hence reduce the format jungle of genetic data sets. The dartr package is available via the r CRAN network and GitHub. © 2017 John Wiley & Sons Ltd.
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11(22%) were randomised controlled trials, 18(35%) were cohort studies, 11(22%) were case control studies and 11(22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49(96%) articles. Data dispersion was calculated by standard deviation in 30(59%). Standard error of the mean was quoted in 19(37%). The statistical software product was named in 33(65%). Of the 49 articles that used inferential statistics, the tests were named in 47(96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi-squared test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43(88%) and the exact significance levels were reported in 28(57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.
Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio
2013-03-01
To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.
Incorporating Experience Curves in Appliance Standards Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery
2011-10-31
The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
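The experience curve referred to above is usually written as price = a * (cumulative production)^(-b), so b can be estimated by ordinary least squares on logs. A minimal R sketch with invented data; the learning rate is the fractional price drop per doubling of cumulative production:

```r
# Invented cumulative shipments (millions of units) and real price index
cum_prod <- c(10, 25, 60, 150, 400, 1000)
price    <- c(100, 88, 76, 66, 57, 50)

# Experience curve: price = a * cum_prod^(-b), i.e. linear in logs
fit <- lm(log(price) ~ log(cum_prod))
b <- -coef(fit)[[2]]

# Learning rate: fractional price drop per doubling of cumulative production
1 - 2^(-b)
```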
Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties
NASA Astrophysics Data System (ADS)
Fichet, Sylvain; Moreau, Grégory
2016-04-01
The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
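In the simplest Gaussian case covered by this formalism, marginalising a Gaussian theoretical bias out of a Gaussian experimental likelihood just adds the variances. A short R sketch with illustrative numbers (all values invented), checked against brute-force numerical marginalisation:

```r
mu_hat    <- 1.10  # illustrative measured signal strength
sigma_exp <- 0.20  # experimental uncertainty
sigma_th  <- 0.10  # theoretical uncertainty on the predicted rate

# Closed form: marginalising a Gaussian bias out of a Gaussian likelihood
# yields a Gaussian with the variances added
marginal_lik <- function(mu) dnorm(mu_hat, mu, sqrt(sigma_exp^2 + sigma_th^2))

# Brute-force check: integrate the bias delta out numerically
numeric_lik <- function(mu) {
  integrate(function(delta) dnorm(mu_hat, mu + delta, sigma_exp) *
                            dnorm(delta, 0, sigma_th),
            lower = -1, upper = 1)$value
}
c(closed_form = marginal_lik(1), numerical = numeric_lik(1))
```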
[Triple-type theory of statistics and its application in the scientific research of biomedicine].
Hu, Liang-ping; Liu, Hui-gang
2005-07-20
To point out the crux of why so many people fail to grasp statistics, and to put forward a "triple-type theory of statistics" to solve the problem in a creative way. Based on long experience in teaching and research in statistics, the "triple-type theory" was proposed and clarified. Examples are provided to demonstrate that the 3 types, i.e., the expressive type, the prototype and the standardized type, are the essentials for applying statistics rationally both in theory and practice; moreover, instances demonstrate that the three types are correlated with each other. The theory can help people see the essence when interpreting and analyzing problems of experimental design and statistical analysis in medical research work. Investigations reveal that for some questions the three types are mutually identical; for some questions the prototype is also the standardized type; and for still others, the three types are distinct from each other. It has been shown that in some multifactor experimental research no standardized type corresponding to the prototype exists at all, because the researchers have committed the mistake of "incomplete control" in setting up the experimental groups; this is a problem that should be solved by the concept and method of "division". Once the "triple type" for each question is clarified, a proper experimental design and statistical method can be arrived at easily. The "triple-type theory of statistics" can help people avoid statistical mistakes, or at least decrease the misuse rate dramatically, and improve the quality, level and speed of biomedical research in the process of applying statistics. It can also help improve the quality of statistical textbooks and the teaching of statistics, and it shows a way to advance biomedical statistics.
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
Statistics Report on TEQSA Registered Higher Education Providers
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2014
2014-01-01
This report is the first release of selected data held and analysed by the Tertiary Education Quality and Standards Agency (TEQSA) for its regulatory purposes, providing a complete view of the sector that has not previously been disseminated. As the national regulator of all Australian higher education providers, TEQSA is in a unique position in…
How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels
NASA Astrophysics Data System (ADS)
Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.
2016-12-01
The knowledge of extreme coastal water levels is useful for coastal flooding studies or the design of coastal defences. While deriving such extremes with standard analyses using tide gauge measurements, one often needs to deal with limited effective duration of observation which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information of past events reported in archives can reduce statistical uncertainties and relativize such outlying observations. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier and we could have reasonably predicted the annual exceedance probability of that level beforehand (predictive probability for 2010 based on data until the end of 2009 of the same order of magnitude as the standard estimative probability using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method to integrate heterogeneous data in such analyses.
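The systematic-record part of such an analysis can be sketched compactly; the Bayesian augmentation with historical exceedances from archives is beyond this sketch. Below, a Gumbel distribution is fitted by maximum likelihood to invented annual maxima and a 100-year return level is read off:

```r
set.seed(7)
# Invented 30-year record of annual maximum water levels (m): Gumbel(2.0, 0.25)
annmax <- 2.0 - 0.25 * log(-log(runif(30)))

# Gumbel negative log-likelihood; the scale is log-parameterised for stability
nll <- function(p) {
  mu <- p[1]; beta <- exp(p[2])
  z <- (annmax - mu) / beta
  sum(log(beta) + z + exp(-z))
}
fit <- optim(c(mean(annmax), log(sd(annmax))), nll)
mu <- fit$par[1]; beta <- exp(fit$par[2])

# T-year return level: the level with annual exceedance probability 1/T
return_level <- function(Tyears) mu - beta * log(-log(1 - 1 / Tyears))
return_level(100)
```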
Towards interoperable and reproducible QSAR analyses: Exchange of datasets.
Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es
2010-06-30
QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises the addition of chemical structures as well as the selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constraining collaborations and the re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies the setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates the addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusion regarding descriptors by defining them crisply. This makes it easy to join, extend, and combine datasets and hence work collectively, but it also allows for analyzing the effect descriptors have on the statistical model's performance. The presented Bioclipse plugins equip scientists with graphical tools that make QSAR-ML easily accessible for the community.
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Under the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. Single imputation methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. Use of adaptive designs with interim analyses is increasing following the release of the FDA guidance on adaptive design. © 2017 John Wiley & Sons Ltd.
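The standard survival methods named above are sketched below with the survival package on simulated trial data (the Gray test and Fine-Gray model would need the cmprsk package and are omitted here):

```r
library(survival)

set.seed(123)
# Simulated two-arm trial: exponential event times, censoring at 5 years
n <- 200
arm <- rep(0:1, each = n / 2)
t_event <- rexp(n, rate = 0.2 * exp(-0.4 * arm))  # treatment lowers the hazard
time <- pmin(t_event, 5)
status <- as.integer(t_event <= 5)                # 1 = event, 0 = censored

# Kaplan-Meier estimator by arm
km <- survfit(Surv(time, status) ~ arm)
summary(km, times = c(1, 3, 5))

# Cox proportional hazards model; exp(coef) is the hazard ratio
coxph(Surv(time, status) ~ arm)
```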
Body Weight Reducing Effect of Oral Boric Acid Intake
Aysan, Erhan; Sahin, Fikrettin; Telci, Dilek; Yalvac, Mehmet Emir; Emre, Sinem Hocaoglu; Karaca, Cetin; Muslumanoglu, Mahmut
2011-01-01
Background: Boric acid is widely used in biology, but its effect on body weight has not been researched. Methods: Twenty mice were divided into two equal groups. Control group mice drank standard tap water, while study group mice drank tap water with added boric acid (0.28 mg/250 ml) over five days. Total body weight changes, major organ histopathology, blood biochemistry, and urine and feces analyses were compared. Results: Study group mice lost a mean of 28.1% of body weight, whereas control group mice lost no weight and instead gained a mean of 0.09% (p<0.001). Total drinking water and urine outputs were not statistically different. Cholesterol, LDL, AST, ALT, LDH, amylase and urobilinogen levels were statistically significantly higher in the study group. Other variables were not statistically different. No histopathologic differences were detected on evaluation of all resected major organs. Conclusion: Low-dose oral boric acid intake causes substantial body weight reduction. Blood and urine analyses point to high glucose and lipid catabolism and moderate protein catabolism, but the mechanism is unclear. PMID:22135611
The Impact of APA and AERA Guidelines on Effect Size Reporting
ERIC Educational Resources Information Center
Peng, Chao-Ying Joanne; Chen, Li-Ting; Chiang, Hsu-Min; Chiang, Yi-Chen
2013-01-01
Given the long history of effect size (ES) indices (Olejnik and Algina, "Contemporary Educational Psychology," 25, 241-286 2000) and various attempts by APA and AERA to encourage the reporting and interpretation of ES to supplement findings from inferential statistical analyses, it is essential to document the impact of APA and AERA standards on…
Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C
2013-12-01
A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
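The general idea of permutation of pointwise linear regression can be sketched as follows; the data are simulated, and the combining function (a Fisher-type sum of log p-values) is an illustrative choice, not necessarily the one used by the authors:

```r
set.seed(1)
n_loc <- 52; n_visit <- 8; visit <- 1:n_visit
# Simulated field series: mild true deterioration at the first 5 locations
field <- matrix(rnorm(n_loc * n_visit, sd = 1.5), n_loc, n_visit)
field[1:5, ] <- field[1:5, ] - 0.4 * matrix(visit, 5, n_visit, byrow = TRUE)

# Combine one-sided evidence of deterioration across locations
combine <- function(y) {
  p <- apply(y, 1, function(s) {
    tval <- summary(lm(s ~ visit))$coefficients[2, "t value"]
    pt(tval, df = n_visit - 2)        # one-sided p for a negative slope
  })
  -2 * sum(log(p))                    # Fisher-type overall statistic
}

obs <- combine(field)

# Permutation null: shuffle the visit order, locations kept intact
perm <- replicate(999, combine(field[, sample(n_visit)]))
mean(c(perm, obs) >= obs)             # overall permutation P value
```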
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
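In modern software, the two maximum likelihood methods the program combined correspond to a glm fit (unconditional) and a conditional logistic regression stratified on the matched set, e.g. survival::clogit in R. A sketch on simulated 1:1 matched data (all values invented):

```r
library(survival)

set.seed(9)
# Simulated 1:1 matched case-control data (one stratum per matched pair)
n_pairs <- 100
d <- data.frame(
  set      = rep(1:n_pairs, each = 2),
  case     = rep(c(1, 0), n_pairs),
  exposure = rbinom(2 * n_pairs, 1, rep(runif(n_pairs, 0.2, 0.6), each = 2))
)

# Unconditional maximum likelihood (matching ignored here for brevity)
glm(case ~ exposure, family = binomial, data = d)

# Conditional maximum likelihood, conditioning on the matched sets
clogit(case ~ exposure + strata(set), data = d)
```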
Do regional methods really help reduce uncertainties in flood frequency analyses?
NASA Astrophysics Data System (ADS)
Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric
2013-04-01
Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered as statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, namely the large increase in the size of the studied data sets, may be counterbalanced by the possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods to two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis valuating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate to which extent the results obtained on these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that the valuation of information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which could simultaneously enhance the accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on MovieLens, Netflix, and Delicious datasets could be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm and also the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm could simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way for highly efficient information filtering.
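A heat-conduction score can be sketched on a toy user-object matrix. The exponent gamma below is one simple, illustrative way to damp the advantage that small-degree objects enjoy under pure heat conduction (gamma < 1 relatively cools them in the ranking); the exact biasing function used in the paper may differ:

```r
set.seed(3)
# Toy user-object adjacency matrix (rows = users, columns = objects)
A <- matrix(rbinom(8 * 6, 1, 0.4), nrow = 8)
k_user <- rowSums(A); k_obj <- colSums(A)

heat_scores <- function(A, target, gamma = 1) {
  f0 <- A[target, ]                            # initial object temperatures
  u  <- as.vector(A %*% f0) / pmax(k_user, 1)  # user temp = mean over its objects
  f  <- as.vector(t(A) %*% u) / pmax(k_obj, 1)^gamma
  f[A[target, ] == 1] <- NA                    # never re-recommend collected items
  f
}

heat_scores(A, target = 1)               # standard heat conduction (gamma = 1)
heat_scores(A, target = 1, gamma = 0.8)  # biased: small-degree objects lose rank
```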
Winzer, Klaus-Jürgen; Buchholz, Anika; Schumacher, Martin; Sauerbrei, Willi
2016-01-01
Background Prognostic factors and prognostic models play a key role in medical research and patient management. The Nottingham Prognostic Index (NPI) is a well-established prognostic classification scheme for patients with breast cancer. In a very simple way, it combines the information from tumor size, lymph node stage and tumor grade. For the resulting index, cutpoints are proposed to classify it into three to six groups with different prognoses. As not all prognostic information from the three components and other standard factors is used, we consider how the prognostic ability can be improved using suitable analysis approaches. Methods and Findings Reanalyzing overall survival data of 1560 patients from a clinical database using multivariable fractional polynomials and further modern statistical methods, we illustrate suitable multivariable modelling and methods to derive and assess the prognostic ability of an index. Using a REMARK type profile we summarize relevant steps of the analysis. Adding the information from hormonal receptor status and using the full information from the three NPI components, specifically concerning the number of positive lymph nodes, an extended NPI with improved prognostic ability is derived. Conclusions The prognostic ability of even one of the best-established prognostic indices in medicine can be improved by using suitable statistical methodology to extract the full information from standard clinical data. This extended version of the NPI can serve as a benchmark to assess the added value of new information, ranging from a new single clinical marker to a derived index from omics data. An established benchmark would also help to harmonize the statistical analyses of such studies and protect against the propagation of many false promises concerning the prognostic value of new measurements. The statistical methods used are generally available and can be used for similar analyses in other diseases. PMID:26938061
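For reference, the NPI itself is a simple linear score. The formula and three-group cutpoints below are the conventional published ones, assumed here rather than taken from this abstract:

```r
# Conventional published NPI formula and three-group cutpoints (assumed here,
# not taken from this abstract): 0.2 * tumour size (cm) + node stage (1-3)
# + histological grade (1-3); groups split at 3.4 and 5.4
npi <- function(size_cm, node_stage, grade) 0.2 * size_cm + node_stage + grade
npi_group <- function(x) cut(x, breaks = c(-Inf, 3.4, 5.4, Inf),
                             labels = c("good", "moderate", "poor"))

x <- npi(size_cm = 2.2, node_stage = 2, grade = 3)
c(score = x, group = as.character(npi_group(x)))
```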
Kelechi, Teresa J; Mueller, Martina; Zapka, Jane G; King, Dana E
2011-11-01
The aim of this randomized clinical trial was to investigate a cryotherapy (cooling) gel wrap applied to lower leg skin affected by chronic venous disorders to determine whether therapeutic cooling improves skin microcirculation. Chronic venous disorders are under-recognized vascular health problems that result in severe skin damage and ulcerations of the lower legs. Impaired skin microcirculation contributes to venous leg ulcer development, thus new prevention therapies should address the microcirculation to prevent venous leg ulcers. Sixty participants (n = 30 per group) were randomized to receive one of two daily 30-minute interventions for four weeks. The treatment group applied the cryotherapy gel wrap around the affected lower leg skin, wore compression, and elevated the legs on a special pillow each evening at bedtime. The standard care group wore compression and elevated the legs only. Laboratory pre- and post-measures included microcirculation measures of skin temperature with a thermistor, blood flow with a laser Doppler flowmeter, and venous refill time with a photoplethysmograph. Data were collected between 2008 and 2009 and analysed using descriptive statistics, paired t-tests or Wilcoxon signed ranks tests, logistic regression analyses, and mixed model analyses. Fifty-seven participants (treatment = 28; standard care = 29) completed the study. The mean age was 62 years, 70% were female, and 50% were African American. In the final adjusted model, there was a statistically significant decrease in blood flow between the two groups (-6.2 [-11.8; -0.6], P = 0.03). No statistically significant differences were noted in temperature or venous refill time. Study findings suggest that cryotherapy improves blood flow by slowing movement within the microcirculation and thus might potentially provide a therapeutic benefit to prevent leg ulcers. © 2011 Blackwell Publishing Ltd.
Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.
2015-10-30
The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.
Qu, Shu-Gen; Gao, Jin; Tang, Bo; Yu, Bo; Shen, Yue-Ping; Tu, Yu
2018-05-01
Low-dose ionizing radiation (LDIR) may increase the mortality of solid cancers in nuclear industry workers, but only a few individual cohort studies exist, and the available reports have low statistical power. The aim of the present study was to focus on solid cancer mortality risk from LDIR in the nuclear industry using standardized mortality ratios (SMRs) and 95% confidence intervals. A systematic literature search through the PubMed and Embase databases identified 27 studies relevant to this meta-analysis. There was statistical significance for total, solid and lung cancers, with meta-SMR values of 0.88, 0.80, and 0.89, respectively. There was evidence of stochastic effects of ionizing radiation, but more definitive conclusions require additional analyses using standardized protocols to determine whether LDIR increases the risk of solid cancer-related mortality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Elizabeth J.; Dewart, Jean Marie; Deola, Regina
This report provides site-specific return level analyses for rain, snow, and straight-line wind extreme events. These analyses are in support of the 10-year review plan for the assessment of meteorological natural phenomena hazards at Los Alamos National Laboratory (LANL). These analyses follow guidance from the Department of Energy, DOE Standard, Natural Phenomena Hazards Analysis and Design Criteria for DOE Facilities (DOE-STD-1020-2012), the Nuclear Regulatory Commission Standard Review Plan (NUREG-0800, 2007) and ANSI/ANS-2.3-2011, Estimating Tornado, Hurricane, and Extreme Straight-Line Wind Characteristics at Nuclear Facility Sites. LANL precipitation and snow level data have been collected since 1910, although not all years are complete. In this report the results from the more recent data (1990–2014) are compared to those of past analyses and a 2004 National Oceanographic and Atmospheric Administration report. Given the many differences in the data sets used in these different analyses, the lack of statistically significant differences in return level estimates increases confidence in the data and in the modeling and analysis approach.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors, from each trial are the input for the method. This marginal-likelihood-based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
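The Monte Carlo confidence interval for the combined mediated effect can be sketched in a few lines of R; the numbers are invented, and for simplicity the draws of a and b are taken as independent, whereas the method described above models their between-trial covariance:

```r
set.seed(11)
# Hypothetical combined estimates (with standard errors) across trials
a <- 0.35; se_a <- 0.10   # path a: independent variable -> mediator
b <- 0.25; se_b <- 0.08   # path b: mediator -> outcome

# Monte Carlo confidence interval for the mediated effect a*b
draws <- rnorm(1e5, a, se_a) * rnorm(1e5, b, se_b)
c(estimate = a * b, quantile(draws, c(0.025, 0.975)))
```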
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
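The core steps, a logistic model, a C-index, and a bootstrap check for overfitting, can be sketched as follows; the covariates (operative duration, ASA score) and the data are invented, if typical of SSI risk models:

```r
set.seed(21)
# Simulated surveillance data: SSI risk rises with duration and ASA score
n <- 500
d <- data.frame(duration = rexp(n, 1 / 120),          # operative minutes
                asa = sample(1:4, n, replace = TRUE)) # ASA physical status
d$ssi <- rbinom(n, 1, plogis(-3 + 0.004 * d$duration + 0.4 * d$asa))

# C-index (area under the ROC curve) computed from ranks of predicted risk
cindex <- function(y, risk) {
  r <- rank(risk); n1 <- sum(y == 1); n0 <- sum(y == 0)
  (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

fit <- glm(ssi ~ duration + asa, family = binomial, data = d)
apparent <- cindex(d$ssi, predict(fit))

# Harrell-style bootstrap estimate of optimism due to overfitting
optimism <- replicate(200, {
  i <- sample(n, replace = TRUE)
  f <- glm(ssi ~ duration + asa, family = binomial, data = d[i, ])
  cindex(d$ssi[i], predict(f)) - cindex(d$ssi, predict(f, newdata = d))
})
apparent - mean(optimism)  # optimism-corrected C-index
```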
Bosman, L; Herselman, M G; Kruger, H S; Labadarios, D
2011-11-01
The National Center for Health Statistics (NCHS) references were used to analyse anthropometric data from the 1999 National Food Consumption Survey (NFCS) of South Africa. Since then, however, the Centers for Disease Control and Prevention (CDC) 2000 reference and the World Health Organization (WHO) 2006 standards have been released. It was anticipated that this reference and these standards might lead to differences in the previous estimates of stunting, wasting, underweight and obesity in the study population. The aim was to compare the anthropometric status of children using the 1977 NCHS and 2000 CDC growth references and the 2006 WHO standards. All children 12-60 months of age with a complete set of anthropometric data were included in the analyses. Data for 1,512 children were analysed with SAS 9.1 for Windows. A Z-score was calculated for each child for weight-for-age (W/A), weight-for-length/height (W/H), length/height-for-age (H/A) and body mass index (BMI)-for-age, using each of the three references or standards for comparison. The prevalence of stunting, obesity and overweight was significantly higher and the prevalence of underweight and wasting was lower when using the WHO standards compared to the NCHS and CDC references. The higher than previously established prevalences of stunting (20.1%) and combined overweight/obesity (30%) pose a challenge to South African policy makers to implement nutrition programmes to decrease the prevalence of both stunting and overweight. The 2006 WHO growth standards should be the standard used for assessment of growth of infants and children younger than 5 years in developing countries.
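Both the CDC reference and the WHO standards publish age- and sex-specific LMS parameters, from which a Z-score follows by the LMS transformation z = ((x/M)^L - 1)/(L*S). A sketch with invented parameter values (real LMS tables must be looked up for the child's age and sex):

```r
# LMS z-score: z = ((x / M)^L - 1) / (L * S) for L != 0, else log(x / M) / S
lms_z <- function(x, L, M, S) {
  if (abs(L) < 1e-12) log(x / M) / S else ((x / M)^L - 1) / (L * S)
}

# Illustrative (invented) LMS parameters for weight-for-age at one age/sex
z <- lms_z(x = 11.4, L = -0.35, M = 12.9, S = 0.11)
z
c(underweight = z < -2, overweight = z > 2)  # common +/-2 SD cutoffs
```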
Haile, Tariku Gebre
2017-01-01
Background. In many studies, compliance with standard precautions among healthcare workers was reported to be inadequate. Objective. The aim of this study was to assess compliance with standard precautions and associated factors among healthcare workers in northwest Ethiopia. Methods. An institution-based cross-sectional study was conducted from March 01 to April 30, 2014. A simple random sampling technique was used to select participants. Data were entered into Epi Info 3.5.1 and exported to SPSS version 20.0 for statistical analysis. Multivariate logistic regression analyses were performed, and adjusted odds ratios with 95% confidence intervals were calculated to identify associated factors. Results. The proportion of healthcare workers who always comply with standard precautions was found to be 12%. Being a female healthcare worker (AOR [95% CI] 2.18 [1.12–4.23]), higher infection risk perception (AOR [95% CI] 3.46 [1.67–7.18]), training on standard precautions (AOR [95% CI] 2.90 [1.20–7.02]), accessibility of personal protective equipment (AOR [95% CI] 2.87 [1.41–5.86]), and management support (AOR [95% CI] 2.23 [1.11–4.53]) were found to be statistically significant. Conclusion and Recommendation. Compliance with standard precautions among the healthcare workers is very low. Interventions that include training of healthcare workers on standard precautions and consistent management support are recommended. PMID:28191020
Competing risks models and time-dependent covariates
Barnett, Adrian; Graves, Nick
2008-01-01
New statistical models for analysing survival data in an intensive care unit context have recently been developed. Two models that offer significant advantages over standard survival analyses are competing risks models and multistate models. Wolkewitz and colleagues used a competing risks model to examine survival times for nosocomial pneumonia and mortality. Their model was able to incorporate time-dependent covariates and so examine how risk factors that changed with time affected the chances of infection or death. We briefly explain how an alternative modelling technique (using logistic regression) can more fully exploit time-dependent covariates for this type of data. PMID:18423067
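A hedged sketch of the kind of logistic approach described, assuming a discrete-time (person-period) layout in which each patient contributes one row per ICU day; the device covariate and all coefficients are invented for illustration.

```python
# Hedged sketch (not Barnett & Graves' code): discrete-time person-period
# logistic regression where a covariate may change from day to day.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(300):
    device = 0
    for day in range(1, 31):
        device = max(device, rng.binomial(1, 0.05))  # may switch on, stays on
        p_event = 1 / (1 + np.exp(-(-4.0 + 0.9 * device + 0.02 * day)))
        event = rng.binomial(1, p_event)
        rows.append(dict(pid=pid, day=day, device=device, event=event))
        if event:
            break                                    # patient leaves risk set
df = pd.DataFrame(rows)

fit = smf.logit("event ~ device + day", data=df).fit(disp=False)
print(np.exp(fit.params))   # odds ratios for the daily hazard of the event
```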
A decade of individual participant data meta-analyses: A review of current practice.
Simmonds, Mark; Stewart, Gavin; Stewart, Lesley
2015-11-01
Individual participant data (IPD) systematic reviews and meta-analyses are often considered to be the gold standard for meta-analysis. In the ten years since the first review into the methodology and reporting practice of IPD reviews was published, much has changed in the field. This paper investigates current reporting and statistical practice in IPD systematic reviews. A systematic review was performed to identify systematic reviews that collected and analysed IPD. Data were extracted from each included publication on a variety of issues related to the reporting of the IPD review process and the statistical methods used. There has been considerable growth in the use of "one-stage" methods to perform IPD meta-analyses. The majority of reviews consider at least one covariate other than the primary intervention, either using subgroup analysis or including covariates in one-stage regression models. Random-effects analyses, however, are not often used. Reporting of review methods was often limited, with few reviews presenting a risk-of-bias assessment. Details on issues specific to the use of IPD were seldom reported, including how IPD were obtained; how data were managed and checked for consistency and errors; and for how many studies and participants IPD were sought and obtained. While the last ten years have seen substantial changes in how IPD meta-analyses are performed, there remains considerable scope for improving the quality of reporting, both of the process of IPD systematic reviews and of the statistical methods employed in them. It is to be hoped that the publication of the PRISMA-IPD guidelines specific to IPD reviews will improve reporting in this area. Copyright © 2015 Elsevier Inc. All rights reserved.
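To make the one-stage/two-stage distinction concrete, here is a minimal sketch with invented trial data: stage one fits each trial separately and stage two pools with inverse-variance weights, while the one-stage model analyses all IPD in a single stratified regression.

```python
# Hedged sketch contrasting "two-stage" and "one-stage" IPD meta-analysis.
# Data, effect sizes and trial count are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
trials = []
for t in range(6):
    n = 200
    x = rng.binomial(1, 0.5, n)                              # treatment
    y = 0.5 * x + rng.normal(0, 1, n) + rng.normal(0, 0.2)   # trial-level shift
    trials.append(pd.DataFrame(dict(trial=t, x=x, y=y)))
ipd = pd.concat(trials)

# Stage 1: per-trial estimates; Stage 2: fixed-effect inverse-variance pooling.
est, var = [], []
for t, d in ipd.groupby("trial"):
    f = smf.ols("y ~ x", data=d).fit()
    est.append(f.params["x"])
    var.append(f.bse["x"] ** 2)
w = 1 / np.array(var)
print("two-stage:", np.sum(w * np.array(est)) / np.sum(w))

# One-stage: a single model with trial as a stratification term.
print("one-stage:", smf.ols("y ~ x + C(trial)", data=ipd).fit().params["x"])
```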
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.
2016-01-01
Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
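A hedged illustration of the underlying problem, with invented data rather than SESTAT records: a naive standard error that ignores the design versus a Taylor-linearized, design-based standard error for a weighted mean under stratified cluster sampling.

```python
# Hedged sketch: naive vs design-based SE for a weighted mean
# (with-replacement approximation for stratified cluster sampling).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = []
for h in range(4):                            # strata
    for c in range(10):                       # clusters (PSUs) per stratum
        y = rng.normal(h * 0.5 + rng.normal(0, 0.8), 1.0, size=25)
        w = rng.uniform(50, 150, size=25)     # survey weights
        rows.append(pd.DataFrame(dict(stratum=h, psu=c, y=y, w=w)))
df = pd.concat(rows, ignore_index=True)

ybar = np.sum(df.w * df.y) / np.sum(df.w)

# Naive SE, treating the file as a simple random sample:
naive_se = df.y.std(ddof=1) / np.sqrt(len(df))

# Design-based SE: between-PSU variation of weighted residual totals per stratum.
W = df.w.sum()
df["z"] = df.w * (df.y - ybar) / W
var = 0.0
for h, d in df.groupby("stratum"):
    tot = d.groupby("psu")["z"].sum()
    nh = len(tot)
    var += nh / (nh - 1) * np.sum((tot - tot.mean()) ** 2)
print(f"naive SE {naive_se:.4f}  vs  design-based SE {np.sqrt(var):.4f}")
```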
ERIC Educational Resources Information Center
Juan, Wu Xiao; Abidin, Mohamad Jafre Zainol; Eng, Lin Siew
2013-01-01
This survey studies the relationship between English vocabulary threshold and the word-guessing strategy used in reading comprehension learning among 80 pre-university Chinese students in Malaysia. The t-test is the main statistical test for this research, and the collected data are analysed using SPSS. From the standard deviation test…
Qu, Shu-Gen; Gao, Jin; Tang, Bo; Yu, Bo; Shen, Yue-Ping; Tu, Yu
2018-01-01
Low-dose ionizing radiation (LDIR) may increase the mortality of solid cancers in nuclear industry workers, but only a few individual cohort studies exist, and the available reports have low statistical power. The aim of the present study was to examine solid cancer mortality risk from LDIR in the nuclear industry using standardized mortality ratios (SMRs) and 95% confidence intervals. A systematic literature search through the PubMed and Embase databases identified 27 studies relevant to this meta-analysis. There was statistical significance for total, solid and lung cancers, with meta-SMR values of 0.88, 0.80, and 0.89, respectively. There was evidence of stochastic effects of IR, but more definitive conclusions require additional analyses using standardized protocols to determine whether LDIR increases the risk of solid cancer-related mortality. PMID:29725540
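A sketch of the pooling step such a meta-analysis typically uses, with invented study values rather than the paper's data: DerSimonian-Laird random-effects pooling of SMRs on the log scale.

```python
# Hedged sketch: random-effects (DerSimonian-Laird) pooling of log-SMRs.
# SMRs and standard errors are invented for illustration.
import numpy as np

smr = np.array([0.85, 0.92, 0.78, 1.05, 0.88])
se_log = np.array([0.06, 0.09, 0.12, 0.10, 0.07])   # SEs of log(SMR)

y, v = np.log(smr), se_log ** 2
w = 1 / v
ybar_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar_fe) ** 2)                  # Cochran's Q
tau2 = max(0.0, (Q - (len(y) - 1)) /
           (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se = 1 / np.sqrt(np.sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"meta-SMR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), tau2 = {tau2:.4f}")
```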
Kent, David M; Dahabreh, Issa J; Ruthazer, Robin; Furlan, Anthony J; Weimar, Christian; Serena, Joaquín; Meier, Bernhard; Mattle, Heinrich P; Di Angelantonio, Emanuele; Paciaroni, Maurizio; Schuchlenz, Herwig; Homma, Shunichi; Lutz, Jennifer S; Thaler, David E
2015-09-14
The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
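As a hedged illustration of the propensity-score step only (covariates and coefficients invented, not those of the pooled databases), the sketch below computes inverse probability of treatment weights that would then feed a weighted, database-specific Cox model.

```python
# Hedged sketch: IPTW weights for OAC vs APT from baseline covariates.
# Variable names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2385
df = pd.DataFrame(dict(age=rng.normal(52, 12, n),
                       htn=rng.binomial(1, 0.3, n),
                       shunt=rng.binomial(1, 0.4, n)))
logit = -2 + 0.02 * df.age + 0.5 * df.htn
df["oac"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

covs = ["age", "htn", "shunt"]
ps = LogisticRegression(max_iter=1000).fit(df[covs], df.oac)\
        .predict_proba(df[covs])[:, 1]

# Standard IPTW: treated weighted by 1/ps, controls by 1/(1-ps). These weights
# would then enter a weighted Cox model (e.g. a fitter that accepts weights).
df["iptw"] = np.where(df.oac == 1, 1 / ps, 1 / (1 - ps))
print(df.iptw.describe())
```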
Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr
2017-01-01
The aim of the study was to check the quality of a computer-assisted sperm analysis (CASA) system against the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value < 0.05 was considered statistically significant. Statistically significant differences were found between all of the investigated sperm parameters, except for non-progressive motility, measured with CASA and manually. In the group of patients where all analyses with each method were performed twice on the same sample, we found no significant differences between the two assessments, either in the samples analyzed manually or in those analyzed with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement before wider application in clinical practice.
Aad, G.
2015-12-02
The strength and tensor structure of the Higgs boson's interactions are investigated using an effective Lagrangian, which introduces additional CP-even and CP-odd interactions that lead to changes in the kinematic properties of the Higgs boson and associated jet spectra with respect to the Standard Model. The parameters of the effective Lagrangian are probed using a fit to five differential cross sections previously measured by the ATLAS experiment in the H→γγ decay channel with an integrated luminosity of 20.3 fb⁻¹ at \(\sqrt{s} = 8\) TeV. In order to perform a simultaneous fit to the five distributions, the statistical correlations between them are determined by re-analysing the H→γγ candidate events in the proton–proton collision data. No significant deviations from the Standard Model predictions are observed, and limits on the effective Lagrangian parameters are derived. The statistical correlations are made publicly available to allow for future analysis of theories with non-Standard-Model interactions.
Statistical definition of relapse: case of family drug court.
Alemi, Farrokh; Haack, Mary; Nemes, Susanna
2004-06-01
At any point in time, a patient's return to drug use can be seen either as a temporary event or as a return to persistent use. There is no formal standard for distinguishing persistent drug use from an occasional relapse. This lack of standardization persists although the consequences of either interpretation can be life altering. In a drug court or regulatory situation, for example, misinterpreting a relapse as a return to persistent drug use could lead to incarceration, loss of child custody, or loss of employment. A clinician who mistakes a client's relapse for persistent drug use may fail to adjust treatment intensity to the client's needs. An empirical and standardized method for distinguishing relapse from persistent drug use is needed. This paper provides a tool for clinicians and judges to distinguish relapse from persistent use based on statistical analyses of the patterns of a client's drug use. To accomplish this, a control chart is created for the time in between relapses. The paper shows how a statistical limit can be calculated by examining either the client's history or other clients in the same program. If a client's time in between relapses exceeds the statistical limit, then the client has returned to persistent use. Otherwise, the drug use is temporary. To illustrate the method, it is applied to data from three family drug courts. The approach allows the estimation of control limits based on the client's as well as the court's historical patterns. The approach also allows comparison of courts based on recovery rates.
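One plausible implementation of such a chart, with invented gap data: an individuals (XmR) control chart on the times between positive tests, flagging a new gap that falls outside the client's usual statistical limits. How an out-of-limits gap is interpreted follows the program's convention; the chart itself only signals a departure from the established pattern.

```python
# Hedged sketch (not the authors' implementation): XmR control chart for the
# time between a client's relapses. Gap data are invented.
import numpy as np

days_between_relapses = np.array([12, 9, 15, 11, 14, 10, 13, 8, 16, 12])

centre = days_between_relapses.mean()
mr = np.abs(np.diff(days_between_relapses))   # moving ranges
sigma_hat = mr.mean() / 1.128                 # d2 constant for subgroups of 2
ucl = centre + 3 * sigma_hat
lcl = max(0.0, centre - 3 * sigma_hat)
print(f"centre {centre:.1f} days, control limits [{lcl:.1f}, {ucl:.1f}]")

new_gap = 2                                   # days since the previous relapse
outside = new_gap < lcl or new_gap > ucl
print("outside chart limits" if outside else "within usual pattern")
```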
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled-flow stream locations. The 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States used in the regression analyses had flow uncontrolled by Federal reservoirs, with contributing-drainage areas ranging from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction ranging from 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled-flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations with uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
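A minimal sketch of the regression machinery described (toy data; the coefficients are not those of the report): a log-log regression of a flow statistic on basin characteristics, with the model standard error taken from the residuals.

```python
# Hedged sketch: log-log regression of a flow-duration statistic on basin
# characteristics. Data and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 149
area = rng.uniform(2, 12000, n)        # contributing drainage area, mi^2
precip = rng.uniform(15, 45, n)        # mean annual precipitation, in.
X = np.column_stack([np.log10(area), np.log10(precip)])
log_q50 = (-2.0 + 1.0 * np.log10(area) + 2.0 * np.log10(precip)
           + rng.normal(0, 0.15, n))   # median (50%-duration) flow, log10 cfs

fit = LinearRegression().fit(X, log_q50)
resid = log_q50 - fit.predict(X)
print("coefficients:", fit.coef_.round(3), "intercept:", round(fit.intercept_, 3))
print("model standard error (log units):", resid.std(ddof=3).round(2))
```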
20 CFR 634.4 - Statistical standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs.
20 CFR 634.4 - Statistical standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs.
Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R
2016-12-01
MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it \(I^2_{GX}\). The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of \(I^2_{GX}\)), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We demonstrate our proposed approach for a two-sample summary data MR analysis to estimate the causal effect of low-density lipoprotein on heart disease risk. A value of \(I^2_{GX}\) close to 1 indicates that dilution does not materially affect the standard MR-Egger analyses for these data. Care must be taken to assess the NOME assumption via the \(I^2_{GX}\) statistic before implementing standard MR-Egger regression in the two-sample summary data context. If \(I^2_{GX}\) is sufficiently low (less than 90%), inferences from the method should be interpreted with caution and adjustment methods considered. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
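A hedged sketch of both pieces from summary statistics, with simulated data: the weighted MR-Egger regression, and an \(I^2_{GX}\) computed by applying the standard I²-type formula to the SNP-exposure estimates (treat this form as an assumption and check it against the original paper before use).

```python
# Hedged sketch: MR-Egger regression plus an I2_GX-style weak-instrument
# diagnostic. Summary statistics are simulated, not from a real study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
k = 25
bx = np.abs(rng.normal(0.08, 0.02, k))     # SNP-exposure estimates
sx = np.full(k, 0.01)                      # and their standard errors
by = 0.4 * bx + rng.normal(0, 0.01, k)     # SNP-outcome estimates
sy = np.full(k, 0.01)

# MR-Egger: weighted regression of by on bx with an intercept (pleiotropy term)
wls = sm.WLS(by, sm.add_constant(bx), weights=1 / sy ** 2).fit()
print("intercept (pleiotropy test):", wls.params[0].round(4))
print("MR-Egger causal slope:", wls.params[1].round(3))

# I2-type statistic applied to the SNP-exposure estimates (assumed form)
w = 1 / sx ** 2
bx_bar = np.sum(w * bx) / np.sum(w)
Q_gx = np.sum(w * (bx - bx_bar) ** 2)
i2_gx = max(0.0, (Q_gx - (k - 1)) / Q_gx)
print(f"I2_GX = {i2_gx:.2f}  (values below ~0.9 would signal dilution)")
```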
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Slaski, G.; Ohde, B.
2016-09-01
The article presents the results of a statistical dispersion analysis of the energy and power demand for tractive purposes of a battery electric vehicle. The authors compare the data distributions for different values of average speed in two approaches, namely a short and a long period of observation. The short period of observation (generally around several hundred meters) follows from a previously proposed macroscopic energy consumption model based on an average speed per road section. This approach yielded high values of the standard deviation and of the coefficient of variation (the ratio between the standard deviation and the mean), around 0.7-1.2. The long period of observation (several kilometers long) is similar in length to the standardized speed cycles used in testing a vehicle's energy consumption and available range. The data were analysed to determine the impact of observation length on the variation in energy and power demand. The analysis was based on a simulation of electric power and energy consumption performed with speed-profile data recorded in the Poznan agglomeration.
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.
Chen, Yanguang
2016-01-01
In geostatistics, the Durbin-Watson test is frequently employed to detect residual serial correlation in least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 regions of China. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
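A minimal sketch of the Moran-style construction described, with invented coordinates: OLS residuals are standardized and combined with a row-normalized inverse-distance weight matrix. The exact index definitions in the paper may differ.

```python
# Hedged sketch: Moran-like autocorrelation index for regression residuals
# over spatial units. Coordinates and data are invented.
import numpy as np

rng = np.random.default_rng(7)
n = 29
coords = rng.uniform(0, 10, (n, 2))
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
W = 1 / (d + np.eye(n))                    # inverse-distance weights
np.fill_diagonal(W, 0)
W /= W.sum(axis=1, keepdims=True)          # row-normalized weight matrix

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

z = (e - e.mean()) / e.std(ddof=1)         # standardized residual vector
autocorr = (z @ W @ z) / (z @ z)           # Moran-like index
print(f"residual spatial autocorrelation: {autocorr:.3f}")
```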
[The role of meta-analysis in assessing the treatment of advanced non-small cell lung cancer].
Pérol, M; Pérol, D
2004-02-01
Meta-analysis is a statistical method allowing an evaluation of the direction and quantitative importance of a treatment effect observed in randomized trials that have tested the treatment without providing a definitive conclusion. In the present review, we discuss the methodology and the contribution of meta-analyses to the treatment of advanced-stage or metastatic non-small-cell lung cancer. In this area of oncology, meta-analyses have provided decisive information demonstrating the impact of chemotherapy on patient survival. They have also helped define a two-drug regimen based on cisplatin as the gold standard treatment for patients with a satisfactory general status. Recently, the meta-analysis method was used to measure the influence of gemcitabine in combination with platinum salts and demonstrated a small but significant survival benefit, confirming that gemcitabine in combination with cisplatin remains a gold standard treatment.
The Australasian Resuscitation in Sepsis Evaluation (ARISE) trial statistical analysis plan.
Delaney, Anthony P; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve
2013-09-01
The Australasian Resuscitation in Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the emergency department with severe sepsis. In keeping with current practice, and considering aspects of trial design and reporting specific to non-pharmacological interventions, our plan outlines the principles and methods for analysing and reporting the trial results. The document is prepared before completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and before completion of the two related international studies. Our statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. We reviewed the data collected by the research team as specified in the study protocol and detailed in the study case report form. We describe information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation, other related therapies and other relevant data with appropriate comparisons between groups. We define the primary, secondary and tertiary outcomes for the study, with description of the planned statistical analyses. We have developed a statistical analysis plan with a trial profile, mock-up tables and figures. We describe a plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies and adverse events. We describe the primary, secondary and tertiary outcomes with identification of subgroups to be analysed. We have developed a statistical analysis plan for the ARISE study, available in the public domain, before the completion of recruitment into the study. This will minimise analytical bias and conforms to current best practice in conducting clinical trials.
ERIC Educational Resources Information Center
Whittington, David H.
2012-01-01
This study included a literature review of juried research studies of student achievement factors that affect African American achievements tracked in the No Child Left Behind Legislative Act. Statistical correlation analyses were performed to determine if the absence or presence of one or two-parents in the household affected student achievement…
ERIC Educational Resources Information Center
Federal Trade Commission, Washington, DC. Bureau of Consumer Protection.
The effect of commercial coaching on Scholastic Aptitude Test (SAT) scores was analyzed, using 1974-1977 test results of 2,500 non-coached students and 1,568 enrollees in two coaching schools. (The Stanley H. Kaplan Educational Center, Inc., and the Test Preparation Center, Inc.). Multiple regression analysis was used to control for student…
Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.
Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J
2017-10-15
Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or the intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
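A hedged sketch of the comparison, with a small simulated SWT: the standard random-intercept model versus a model that adds a random cluster-by-period component (here via statsmodels variance components; all effect sizes are invented).

```python
# Hedged sketch: simulate a stepped wedge trial with cluster-varying period
# effects, then fit the "standard" model and a more flexible one.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
rows = []
for g, switch in enumerate([1, 2, 2]):            # group 0 switches first
    for c in range(8):
        u = rng.normal(0, 0.5)                    # cluster intercept
        for period in (1, 2):
            pe = rng.normal(0, 0.4)               # period effect varies by cluster
            treat = int(period >= switch)
            y = 1.0 + u + 0.3 * period + pe + 0.5 * treat + rng.normal(0, 1, 30)
            rows.append(pd.DataFrame(dict(y=y, treat=treat, period=period,
                                          cluster=f"g{g}c{c}")))
df = pd.concat(rows, ignore_index=True)

standard = smf.mixedlm("y ~ treat + C(period)", df, groups="cluster").fit()
flexible = smf.mixedlm("y ~ treat + C(period)", df, groups="cluster",
                       vc_formula={"period": "0 + C(period)"}).fit()
print("standard model treat effect:", round(standard.params["treat"], 3))
print("flexible model treat effect:", round(flexible.params["treat"], 3))
```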
Evidence-based orthodontics. Current statistical trends in published articles in one journal.
Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J
2010-09-01
To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared with data from three previous years: 1975, 1985, and 2003. The original articles in the AJODO for 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, the AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed relatively constant from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%) and an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).
Scattone, Dorothy; Raggio, Donald J; May, Warren
2011-10-01
The Vineland Adaptive Behavior Scales, Second Edition (Vineland-II), and Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III), were administered to 65 children between the ages of 12 and 42 months referred for developmental delays. Standard scores and age equivalents were compared across instruments. Analyses showed no statistically significant difference between Vineland-II ABC standard scores and cognitive levels obtained from the Bayley-III. However, Vineland-II Communication and Motor domain standard scores were significantly higher than the corresponding scores on the Bayley-III. In addition, age-equivalent scores were significantly higher on the Vineland-II for the fine motor subdomain. Implications for early intervention are discussed.
Across-cohort QC analyses of GWAS summary statistics from complex traits
Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M
2017-01-01
Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics Fst statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy. PMID:27552965
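One plausible reading of metric (III), sketched with simulated summary statistics: at null SNPs the per-cohort z-statistics (effect size over standard error) should be uncorrelated between cohorts that share no samples, so a clearly positive correlation flags overlap. The paper's exact statistic should be taken from the original.

```python
# Hedged sketch: detecting sample overlap between two cohorts from the
# correlation of their z-statistics at independent SNPs (simulated data).
import numpy as np

rng = np.random.default_rng(9)
m = 5000                                # independent null SNPs
shared = rng.normal(size=m)             # component from overlapping samples
z1 = 0.6 * shared + 0.8 * rng.normal(size=m)
z2 = 0.6 * shared + 0.8 * rng.normal(size=m)

r = np.corrcoef(z1, z2)[0, 1]
se = 1 / np.sqrt(m - 3)                 # Fisher-z approximation for the null SE
print(f"z-score correlation {r:.3f} (null SE ~ {se:.3f})")
# A correlation many SEs above zero suggests shared samples (or heterogeneity).
```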
Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses
NASA Astrophysics Data System (ADS)
Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong
2017-04-01
Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive way to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited-area-model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach statistically combines the in-house developed high resolution analysis and ensemble prediction systems. The station-based validation of 6-hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared with bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
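A minimal sketch of the SAMOS idea, with synthetic fields standing in for INCA analyses and ALADIN-LAEF forecasts: transform both sides into standardized anomalies, fit a single domain-wide regression, and back-transform.

```python
# Hedged sketch: standardized-anomaly postprocessing with one domain-wide
# regression. Climatologies, forecasts and observations are all synthetic.
import numpy as np

rng = np.random.default_rng(10)
n_sites, n_days = 50, 400
clim_mean = rng.uniform(0.5, 4.0, n_sites)[:, None]   # per-site climatology
clim_sd = rng.uniform(0.5, 1.5, n_sites)[:, None]

obs = clim_mean + clim_sd * rng.normal(size=(n_sites, n_days))
fcst = 0.8 * obs + 0.5 + rng.normal(0, 0.7, (n_sites, n_days))  # biased model

obs_anom = (obs - clim_mean) / clim_sd                # standardized anomalies
fcst_anom = (fcst - clim_mean) / clim_sd

# One regression for the whole domain, then back-transform per site.
slope, intercept = np.polyfit(fcst_anom.ravel(), obs_anom.ravel(), 1)
calibrated = (slope * fcst_anom + intercept) * clim_sd + clim_mean

print("raw RMSE:", np.sqrt(np.mean((fcst - obs) ** 2)).round(3),
      " calibrated RMSE:", np.sqrt(np.mean((calibrated - obs) ** 2)).round(3))
```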
Hamilton, Alexander J; Pruthi, Rishi; Maxwell, Heather; Casula, Anna; Braddon, Fiona; Inward, Carol; Lewis, Malcolm; O'Brien, Catherine; Stojanovic, Jelena; Tse, Yincent; Sinha, Manish D
2015-01-01
The Paediatric Registry analyses renal replacement therapy (RRT) data in children. All 13 UK paediatric nephrology centres submit electronic data. To provide centre-specific data and to determine adherence to relevant audit standards. Data analysis to calculate summary statistics and achievement of an audit standard. The median height z-score for children on dialysis was -2.0 and for children with a functioning transplant -1.3. Children transplanted before age 11 years improved their height z-score subsequently, whereas those >11 maintained their height z-score, with all transplanted patients having a similar height z-score after 3 years of starting RRT. The median weight z-score for children on dialysis was -1.2, and for children with a functioning transplant -0.2. Of those with data, 75% of the prevalent paediatric RRT population had ≥1 risk factors for cardiovascular disease, with 1 in 10 having all three risk factors evaluated. For transplant patients, 76% achieved the systolic blood pressure (SBP) standard and 91% achieved the haemoglobin standard. For haemodialysis patients, 53% achieved the SBP standard, 66% the haemoglobin standard, 84% the calcium standard, 43% the phosphate standard and 43% the parathyroid hormone (PTH) standard. For peritoneal dialysis patients, 61% achieved the SBP standard, 83% the haemoglobin standard, 71% the calcium standard, 56% the phosphate standard and 36% the PTH standard. Quarterly data collection will improve quality and reporting. Continued focus on improving height and avoiding obesity is needed. Awareness and management of cardiovascular risk is an important long term strategy.
Shadish, William R; Hedges, Larry V; Pustejovsky, James E
2014-04-01
This article presents a d-statistic for single-case designs (SCDs) that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments, and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared with other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data; compute fixed- and random-effect average effect sizes; prepare a forest plot and a cumulative meta-analysis; estimate various influence statistics to identify studies contributing to heterogeneity and effect size; and perform various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Willis, Brian H; Riley, Richard D
2017-09-20
An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random-effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Green, Esther; Yuen, Dora; Chasen, Martin; Amernic, Heidi; Shabestari, Omid; Brundage, Michael; Krzyzanowska, Monika K; Klinger, Christopher; Ismail, Zahra; Pereira, José
2017-01-01
To examine oncology nurses' attitudes toward and reported use of the Edmonton Symptom Assessment System (ESAS) and to determine whether the length of work experience and presence of oncology certification are associated with their attitudes and reported usage. The design was an exploratory, mixed-methods study employing a questionnaire approach, set in 14 regional cancer centers (RCCs) in Ontario, Canada. Participants were oncology nurses who took part in a larger province-wide study that surveyed 960 interdisciplinary providers in oncology care settings at all of Ontario's 14 RCCs. Oncology nurses' attitudes and use of ESAS were measured using a 21-item investigator-developed questionnaire. Descriptive statistics and Kendall's tau-b or tau-c tests were used for data analyses; qualitative responses were analyzed using content analysis. The main research variables were attitudes toward and self-reported use of standardized symptom screening and ESAS. More than half of the participants agreed that ESAS improves symptom screening, most said they would encourage their patients to complete ESAS, and most felt that managing symptoms is within their scope of practice and clinical responsibilities. Qualitative comments provided additional information elucidating the quantitative responses. Statistical analyses revealed that oncology nurses with 10 years or less of work experience were more likely to agree that the use of standardized, valid instruments to screen for and assess symptoms should be considered best practice, that ESAS improves symptom screening, and that ESAS enables them to better manage patients' symptoms. No statistically significant difference was found between oncology-certified RNs and noncertified RNs in attitudes toward or reported use of ESAS. Implementing a population-based symptom screening approach is a major undertaking. The current study found that oncology nurses recognize the value of standardized screening, as demonstrated by their attitudes toward ESAS. Oncology nurses are integral to providing high-quality person-centered care. Using standardized approaches that enable patients to self-report symptoms, and understanding barriers and enablers to optimal use of patient-reported outcome tools, can improve the quality of patient care.
[Methods, challenges and opportunities for big data analyses of microbiome].
Sheng, Hua-Fang; Zhou, Hong-Wei
2015-07-01
Microbiome is a novel research field relevant to a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of a microbiome: metataxonomics, by sequencing the 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analysis further includes gene annotation and functional analyses. As sequencing techniques develop, the cost of sequencing will decrease and big data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for the future exploitation of these data. Meanwhile, the information content of these data and the functional verification by culture-dependent and culture-independent experiments remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promise rich research opportunities.
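As a small concrete example of the diversity-analysis step in such a pipeline (toy OTU counts, not real sequencing data): the Shannon index computed per sample from relative abundances.

```python
# Hedged sketch: Shannon alpha diversity from an OTU count table (toy data).
import numpy as np

otu = np.array([[120, 30, 0, 50],          # samples x OTUs
                [10, 90, 60, 40],
                [70, 70, 30, 30]], dtype=float)

p = otu / otu.sum(axis=1, keepdims=True)   # relative abundances per sample
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(p > 0, p * np.log(p), 0.0)   # treat 0*log(0) as 0
shannon = -plogp.sum(axis=1)
print("Shannon diversity per sample:", shannon.round(3))
```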
Chevance, Aurélie; Schuster, Tibor; Steele, Russell; Ternès, Nils; Platt, Robert W
2015-10-01
Robustness of an existing meta-analysis can justify decisions on whether to conduct an additional study addressing the same research question. We illustrate the graphical assessment of the potential impact of an additional study on an existing meta-analysis using published data on statin use and the risk of acute kidney injury. A previously proposed graphical augmentation approach is used to assess the sensitivity of the current test and heterogeneity statistics extracted from the existing meta-analysis data. In addition, we extended the graphical augmentation approach to assess potential changes in the pooled effect estimate after updating a current meta-analysis, and applied the three graphical contour definitions to data from meta-analyses on statin use and acute kidney injury risk. In the example data considered, the pooled effect estimates and heterogeneity indices proved considerably robust to the addition of a future study. At the same time, for some previously inconclusive meta-analyses, a study update might yield a statistically significant increase in kidney injury risk associated with higher statin exposure. The illustrated contour approach should become a standard tool for assessing the robustness of meta-analyses. It can guide decisions on whether to conduct additional studies addressing a relevant research question. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and, at the same time, documents a new application of the XRF core-scanner technology. Reliable interpretation of XRF results requires evaluating the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was established by measuring each sample ten times; theoretically, the precision of XRF measurements obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the smallest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
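A hedged sketch of the Poisson check described (toy count rates, not the instrument's): for pure counting statistics the relative standard deviation of replicate scans should approach 1/sqrt(N); replicate sets with larger deviations point to other error sources such as detector limits or peak fitting.

```python
# Hedged sketch: compare observed relative SD of ten replicate scans with the
# Poisson counting limit 1/sqrt(N). Count rates per element are invented.
import numpy as np

rng = np.random.default_rng(11)
true_counts = {"Ca": 2.5e5, "Fe": 4.0e5, "Y": 900}   # counts per 30 s scan (toy)
for element, lam in true_counts.items():
    replicates = rng.poisson(lam, size=10)            # ten-fold measurement
    rsd_obs = replicates.std(ddof=1) / replicates.mean()
    rsd_poisson = 1 / np.sqrt(lam)
    print(f"{element}: observed RSD {rsd_obs:.4f}, Poisson limit {rsd_poisson:.4f}")
```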
Rainfall Results of the Florida Area Cumulus Experiment, 1970-76.
NASA Astrophysics Data System (ADS)
Woodley, William L.; Jordan, Jill; Barnston, Anthony; Simpson, Joanne; Biondini, Ron; Flueck, John
1982-02-01
The Florida Area Cumulus Experiment of 1970-76 (FACE-1) is a single-area, randomized, exploratory experiment to determine whether seeding cumuli for dynamic effects (dynamic seeding) can be used to augment convective rainfall over a substantial target area (1.3 × 10⁴ km²) in south Florida. Rainfall is estimated using S-band radar observations after adjustment by raingages. The two primary response variables are rain volumes in the total target (TT) and in the floating target (FT), the most intensely treated portion of the target. The experimental unit is the day, and the main observational period is the 6 h after initiation of treatment (silver iodide flares on seed days and either no flares or placebos on control days). Analyses without predictors suggest apparent increases in both the location (means and medians) and the dispersion (standard deviation and interquartile range) characteristics of rainfall due to seeding in the FT and TT variables, with substantial statistical support for the FT results and lesser statistical support for the TT results. Analyses of covariance using meteorologically meaningful predictor variables suggest a somewhat larger effect of seeding with stronger statistical support. These results are interpreted in terms of the FACE conceptual model.
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss caused by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
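A minimal sketch of how per-study score statistics are combined in a standard meta-analysis (sum the scores and their variances, then form a Z statistic); the improved statistics proposed in the paper add corrections that are not reproduced here, and the numbers below are hypothetical.

import numpy as np

# Hypothetical per-study score statistics U_k and their variances V_k for one variant.
U = np.array([2.1, -0.4, 3.3])   # score contributions from three studies
V = np.array([1.2, 0.8, 2.0])    # corresponding variances

# Standard meta-score combination: sum scores and variances, then form a Z statistic.
z = U.sum() / np.sqrt(V.sum())
print(f"combined Z = {z:.3f}")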
Time Series Expression Analyses Using RNA-seq: A Statistical Approach
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
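A minimal sketch of the AR(1)-style time-lagged regression mentioned above, for a single gene; the expression values are hypothetical, and real analyses would add count-based noise models and handle many genes at once.

import numpy as np

# Hypothetical normalized expression of one gene across 8 ordered time points.
y = np.array([2.0, 2.4, 2.9, 3.1, 3.6, 3.5, 3.9, 4.2])

# AR(1)-style time-lagged regression: expression at time t on expression at t-1.
y_lag, y_now = y[:-1], y[1:]
slope, intercept = np.polyfit(y_lag, y_now, 1)
residuals = y_now - (slope * y_lag + intercept)

print(f"AR(1) coefficient: {slope:.3f}")
print(f"residual variance: {residuals.var(ddof=2):.4f}")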
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
NASA Technical Reports Server (NTRS)
Rienecker, Michele M.; Miller, Robert N.
1991-01-01
A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
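A toy sketch of the optimal interpolation (OI) analysis step underlying the assimilation described above, for a three-point state; the covariances below are assumed placeholders, not the study's error statistics.

import numpy as np

# Toy optimal-interpolation update for a 3-point state (not the QG model itself).
xb = np.array([0.2, 0.5, 0.1])                 # background stream function values
B = np.array([[1.0, 0.5, 0.2],                 # assumed background error covariance;
              [0.5, 1.0, 0.5],                 # the decorrelation scale is encoded
              [0.2, 0.5, 1.0]])                # in the off-diagonal decay
H = np.array([[1.0, 0.0, 0.0],                 # two observations, of points 1 and 3
              [0.0, 0.0, 1.0]])
R = 0.1 * np.eye(2)                            # observation error covariance
y = np.array([0.35, 0.05])                     # observed values

# OI / BLUE analysis: xa = xb + K (y - H xb), with K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa)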
Third International Standard for Posterior Pituitary
Bangham, D. R.; Mussett, Marjorie V.
1958-01-01
In October 1955, stocks of the Second International Standard for Posterior Pituitary were running low and the Department of Biological Standards of the National Institute for Medical Research, London, was asked to proceed with the arrangements for an international collaborative assay of material for the Third Standard. A single 142-g batch of posterior-pituitary-lobe powder was obtained and distributed in ampoules, in approximately 30-mg quantities. Samples were sent to 19 laboratories in 10 countries. In all, 185 assays were carried out, 122 for oxytocic activity, 53 for vasopressor activity and 10 for antidiuretic activity. On the basis of the results, which were analysed statistically at the National Institute for Medical Research, it was agreed that the potency of the Third Standard (re-named International Standard for Oxytocic, Vasopressor and Antidiuretic Substances in 1956, in view of the recent synthesis of oxytocin and vasopressin) should be expressed as 2.0 International Units per milligram. The International Unit therefore remains unchanged as 0.5 mg of the dry powder. PMID:13585079
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general-purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in a financial market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
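A minimal CPU reference implementation of the Metropolis update for the 2D Ising model mentioned above; the GPU version discussed in the article parallelizes essentially this site-update loop (lattice size and temperature below are illustrative).

import numpy as np

# Minimal CPU Metropolis sweep for the 2D Ising model; the article's point is that
# this update-heavy loop maps well onto GPU threads (one thread per lattice site).
rng = np.random.default_rng(0)
L, beta = 32, 0.44                     # lattice size and inverse temperature
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(100):                   # Monte Carlo sweeps
    for i in range(L):
        for j in range(L):
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

print("magnetization per spin:", spins.mean())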
Haack, Robert A.; Britton, Kerry O.; Brockerhoff, Eckehard G.; Cavey, Joseph F.; Garrett, Lynn J.; Kimberley, Mark; Lowenstein, Frank; Nuding, Amelia; Olson, Lars J.; Turner, James; Vasilaky, Kathryn N.
2014-01-01
Numerous bark- and wood-infesting insects have been introduced to new countries by international trade where some have caused severe environmental and economic damage. Wood packaging material (WPM), such as pallets, is one of the high risk pathways for the introduction of wood pests. International recognition of this risk resulted in adoption of International Standards for Phytosanitary Measures No. 15 (ISPM15) in 2002, which provides treatment standards for WPM used in international trade. ISPM15 was originally developed by members of the International Plant Protection Convention to “practically eliminate” the risk of international transport of most bark and wood pests via WPM. The United States (US) implemented ISPM15 in three phases during 2005–2006. We compared pest interception rates of WPM inspected at US ports before and after US implementation of ISPM15 using the US Department of Agriculture AQIM (Agriculture Quarantine Inspection Monitoring) database. Analyses of records from 2003–2009 indicated that WPM infestation rates declined 36–52% following ISPM15 implementation, with results varying in statistical significance depending on the selected starting parameters. Power analyses of the AQIM data indicated there was at least a 95% chance of detecting a statistically significant reduction in infestation rates if they dropped by 90% post-ISPM15, but the probability fell as the impact of ISPM15 lessened. We discuss several factors that could have reduced the apparent impact of ISPM15 on lowering WPM infestation levels, and suggest ways that ISPM15 could be improved. The paucity of international interception data impeded our ability to conduct more thorough analyses of the impact of ISPM15, and demonstrates the need for well-planned sampling programs before and after implementation of major phytosanitary policies so that their effectiveness can be assessed. We also present summary data for bark- and wood-boring insects intercepted on WPM at US ports during 1984–2008. PMID:24827724
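A hedged sketch of the kind of power calculation described above, using the two-sample proportion machinery in statsmodels; the baseline infestation rate and sample size below are hypothetical, not the AQIM values.

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Hypothetical baseline WPM infestation rate and a 90% post-ISPM15 reduction.
p_before, reduction = 0.008, 0.90
p_after = p_before * (1 - reduction)

es = proportion_effectsize(p_before, p_after)        # Cohen's h
power = NormalIndPower().power(effect_size=es, nobs1=50000, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"power to detect the reduction: {power:.3f}")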
Sharing brain mapping statistical results with the neuroimaging data model
Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.
2016-01-01
Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Result graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621
Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS
NASA Technical Reports Server (NTRS)
Long, S. M.; Grosfils, E. B.
2005-01-01
Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted based on pixel latitude back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.
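A hedged sketch of the incidence-dependent DN-to-backscatter conversion described above; GAIN, OFFSET and the Muhleman-law coefficients below are placeholder assumptions and must be replaced with the calibration constants from the Magellan documentation.

import numpy as np

# Hedged sketch of converting Magellan image DN values back to backscatter
# coefficients. GAIN, OFFSET and the Muhleman-law form are placeholders here;
# the actual calibration constants must come from the Magellan documentation.
GAIN, OFFSET = 0.2, -20.0                      # assumed dB per DN and dB offset

def dn_to_sigma0_db(dn, incidence_deg):
    relative_db = GAIN * (dn - 1) + OFFSET     # DN -> dB relative to the model law
    theta = np.radians(incidence_deg)
    # Muhleman-style angular term re-applied so values are comparable across latitude.
    muhleman_db = 10 * np.log10(0.0118 * np.cos(theta)
                                / (np.sin(theta) + 0.111 * np.cos(theta)) ** 3)
    return relative_db + muhleman_db

print(dn_to_sigma0_db(np.array([50, 100, 150]), incidence_deg=35.0))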
A marked correlation function for constraining modified gravity models
NASA Astrophysics Data System (ADS)
White, Martin
2016-11-01
Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
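A brute-force sketch of a marked correlation function of the kind proposed above: the mean product of marks over pairs at separation r, normalized by the squared mean mark, so that deviations from unity signal mark (density) dependence; positions and marks below are synthetic.

import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100.0, size=(500, 3))     # toy 3D tracer positions
mark = rng.lognormal(size=500)                 # density-derived marks (hypothetical)

def marked_correlation(pos, mark, r_lo, r_hi):
    """M(r) ~ mean(m_i * m_j over pairs with r_lo <= r < r_hi) / mean(m)^2."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)        # unique pairs only
    sel = (d[iu] >= r_lo) & (d[iu] < r_hi)
    mm = (mark[:, None] * mark[None, :])[iu][sel]
    return mm.mean() / mark.mean() ** 2

for r in (5.0, 10.0, 20.0):
    print(r, marked_correlation(pos, mark, r, r + 2.0))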
Prodinger, Birgit; Ballert, Carolina S; Brach, Mirjam; Brinkhof, Martin W G; Cieza, Alarcos; Hug, Kerstin; Jordan, Xavier; Post, Marcel W M; Scheel-Sailer, Anke; Schubert, Martin; Tennant, Alan; Stucki, Gerold
2016-02-01
Functioning is an important outcome to measure in cohort studies. Clear and operational outcomes are needed to judge the quality of a cohort study. This paper outlines guiding principles for reporting functioning in cohort studies and addresses some outstanding issues. Principles of how to standardize reporting of data from a cohort study on functioning, by deriving scores that are most useful for further statistical analysis and reporting, are outlined. The Swiss Spinal Cord Injury Cohort Study Community Survey serves as a case in point to provide a practical application of these principles. Development of reporting scores must be conceptually coherent and metrically sound. The International Classification of Functioning, Disability and Health (ICF) can serve as the frame of reference for this, with its categories serving as reference units for reporting. To derive a score for further statistical analysis and reporting, items measuring a single latent trait must be invariant across groups. The Rasch measurement model is well suited to test these assumptions. Our approach is a valuable guide for researchers and clinicians, as it fosters comparability of data, strengthens the comprehensiveness of scope, and provides invariant, interval-scaled data for further statistical analyses of functioning.
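A minimal sketch of the dichotomous Rasch model underlying the approach described above; person abilities and item difficulties below are hypothetical.

import numpy as np

# Dichotomous Rasch model: P(X_pi = 1) depends only on the difference between
# person ability theta_p and item difficulty b_i (values below are hypothetical).
def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = np.array([-1.0, 0.0, 1.5])       # three persons
b = np.array([-0.5, 0.8])                # two ICF-based items

# Probability matrix: persons x items; invariance across groups can then be
# checked by comparing item-difficulty estimates between subgroups.
print(rasch_prob(theta[:, None], b[None, :]))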
Sonuga-Barke, Edmund J S; Brandeis, Daniel; Cortese, Samuele; Daley, David; Ferrin, Maite; Holtmann, Martin; Stevenson, Jim; Danckaerts, Marina; van der Oord, Saskia; Döpfner, Manfred; Dittmann, Ralf W; Simonoff, Emily; Zuddas, Alessandro; Banaschewski, Tobias; Buitelaar, Jan; Coghill, David; Hollis, Chris; Konofal, Eric; Lecendreux, Michel; Wong, Ian C K; Sergeant, Joseph
2013-03-01
Nonpharmacological treatments are available for attention deficit hyperactivity disorder (ADHD), although their efficacy remains uncertain. The authors undertook meta-analyses of the efficacy of dietary (restricted elimination diets, artificial food color exclusions, and free fatty acid supplementation) and psychological (cognitive training, neurofeedback, and behavioral interventions) ADHD treatments. Using a common systematic search and a rigorous coding and data extraction strategy across domains, the authors searched electronic databases to identify published randomized controlled trials that involved individuals who were diagnosed with ADHD (or who met a validated cutoff on a recognized rating scale) and that included an ADHD outcome. Fifty-four of the 2,904 nonduplicate screened records were included in the analyses. Two different analyses were performed. When the outcome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (standardized mean differences=0.21-0.48) and psychological (standardized mean differences=0.40-0.64) treatments produced statistically significant effects. However, when the best probably blinded assessment was employed, effects remained significant for free fatty acid supplementation (standardized mean difference=0.16) and artificial food color exclusion (standardized mean difference=0.42) but were substantially attenuated to nonsignificant levels for other treatments. Free fatty acid supplementation produced small but significant reductions in ADHD symptoms even with probably blinded assessments, although the clinical significance of these effects remains to be determined. Artificial food color exclusion produced larger effects but often in individuals selected for food sensitivities. Better evidence for efficacy from blinded assessments is required for behavioral interventions, neurofeedback, cognitive training, and restricted elimination diets before they can be supported as treatments for core ADHD symptoms.
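For reference, the standardized mean differences reported above are computed by scaling the between-group mean difference by a pooled standard deviation; a minimal sketch with hypothetical arm-level summaries follows.

import numpy as np

# Standardized mean difference (Cohen's d with pooled SD); values hypothetical.
m_t, sd_t, n_t = 12.4, 5.1, 40      # treatment arm ADHD symptom score
m_c, sd_c, n_c = 14.9, 5.6, 42      # control arm
sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
print(f"SMD = {(m_c - m_t) / sd_pooled:.2f}")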
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
Southard, Rodney E.
2013-01-01
The weather and precipitation patterns in Missouri vary considerably from year to year. In 2008, the statewide average rainfall was 57.34 inches and in 2012, the statewide average rainfall was 30.64 inches. This variability in precipitation and resulting streamflow in Missouri underlies the necessity for water managers and users to have reliable streamflow statistics and a means to compute select statistics at ungaged locations for a better understanding of water availability. Knowledge of surface-water availability is dependent on the streamflow data that have been collected and analyzed by the U.S. Geological Survey for more than 100 years at approximately 350 streamgages throughout Missouri. The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, computed streamflow statistics at streamgages through the 2010 water year, defined periods of drought and defined methods to estimate streamflow statistics at ungaged locations, and developed regional regression equations to compute selected streamflow statistics at ungaged locations. Streamflow statistics and flow durations were computed for 532 streamgages in Missouri and in neighboring States of Missouri. For streamgages with more than 10 years of record, Kendall’s tau was computed to evaluate for trends in streamflow data. If trends were detected, the variable length method was used to define the period of no trend. Water years were removed from the dataset from the beginning of the record for a streamgage until no trend was detected. Low-flow frequency statistics were then computed for the entire period of record and for the period of no trend if 10 or more years of record were available for each analysis. Three methods are presented for computing selected streamflow statistics at ungaged locations. The first method uses power curve equations developed for 28 selected streams in Missouri and neighboring States that have multiple streamgages on the same streams. Statistical estimates on one of these streams can be calculated at an ungaged location that has a drainage area that is between 40 percent of the drainage area of the farthest upstream streamgage and within 150 percent of the drainage area of the farthest downstream streamgage along the stream of interest. The second method may be used on any stream with a streamgage that has operated for 10 years or longer and for which anthropogenic effects have not changed the low-flow characteristics at the ungaged location since collection of the streamflow data. A ratio of drainage area of the stream at the ungaged location to the drainage area of the stream at the streamgage was computed to estimate the statistic at the ungaged location. The range of applicability is between 40- and 150-percent of the drainage area of the streamgage, and the ungaged location must be located on the same stream as the streamgage. The third method uses regional regression equations to estimate selected low-flow frequency statistics for unregulated streams in Missouri. This report presents regression equations to estimate frequency statistics for the 10-year recurrence interval and for the N-day durations of 1, 2, 3, 7, 10, 30, and 60 days. Basin and climatic characteristics were computed using geographic information system software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses based on existing digital geospatial data and previous studies. 
Spatial analyses for geographical bias in the predictive accuracy of the regional regression equations defined three low-flow regions within the State, representing the three major physiographic provinces in Missouri. Region 1 includes the Central Lowlands, Region 2 includes the Ozark Plateaus, and Region 3 includes the Mississippi Alluvial Plain. A total of 207 streamgages were used in the regression analyses for the regional equations. Of the 207 U.S. Geological Survey streamgages, 77 were located in Region 1, 120 were located in Region 2, and 10 were located in Region 3. Streamgages located outside of Missouri were selected to extend the range of data used for the independent variables in the regression analyses. Streamgages included in the regression analyses had 10 or more years of record and were considered to be affected minimally by anthropogenic activities or trends. Regional regression analyses identified three characteristics as statistically significant for the development of regional equations. For Region 1, drainage area, longest flow path, and streamflow-variability index were statistically significant. The range in the standard error of estimate for Region 1 is 79.6 to 94.2 percent. For Region 2, drainage area and streamflow-variability index were statistically significant, and the range in the standard error of estimate is 48.2 to 72.1 percent. For Region 3, drainage area and streamflow-variability index also were statistically significant with a range in the standard error of estimate of 48.1 to 96.2 percent. Limitations on the use of estimating low-flow frequency statistics at ungaged locations depend on the method used. The first method outlined for use in Missouri, power curve equations, was developed to estimate the selected statistics for ungaged locations on 28 selected streams with multiple streamgages located on the same stream. A second method uses a drainage-area ratio to compute statistics at an ungaged location using data from a single streamgage on the same stream with 10 or more years of record. Ungaged locations on these streams may use the ratio of the drainage area at an ungaged location to the drainage area at a streamgage location to scale the selected statistic value from the streamgage location to the ungaged location. This method can be used if the drainage area of the ungaged location is within 40 to 150 percent of the streamgage drainage area. The third method is the use of the regional regression equations. The limits for the use of these equations are based on the ranges of the characteristics used as independent variables and on the requirement that streams be affected minimally by anthropogenic activities.
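A minimal sketch of the second method (drainage-area-ratio transfer) described above, including the 40-150 percent applicability limit; the statistic value and areas below are hypothetical, and simple linear scaling in drainage area is assumed.

def drainage_area_ratio_estimate(stat_gaged, area_gaged, area_ungaged):
    """Scale a streamflow statistic from a gaged to an ungaged site on the
    same stream, honoring the 40-150 percent drainage-area limit in the report."""
    ratio = area_ungaged / area_gaged
    if not 0.40 <= ratio <= 1.50:
        raise ValueError("ungaged drainage area outside 40-150% of gage area")
    return stat_gaged * ratio

# Hypothetical 7-day, 10-year low flow (7Q10) of 12.0 ft3/s at a gage draining
# 250 mi2, transferred to an ungaged site on the same stream draining 180 mi2.
print(drainage_area_ratio_estimate(12.0, 250.0, 180.0))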
BTS statistical standards manual
DOT National Transportation Integrated Search
2005-10-01
The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...
STRengthening analytical thinking for observational studies: the STRATOS initiative.
Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James
2014-12-30
The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even 'standard' analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
Neman, R
1975-03-01
The Zigler and Seitz (1975) critique was carefully examined with respect to the conclusions of the Neman et al. (1975) study. Particular attention was given to the following questions: (a) did experimenter bias or commitment account for the results, (b) were unreliable and invalid psychometric instruments used, (c) were the statistical analyses insufficient or incorrect, (d) did the results reflect no more than the operation of chance, and (e) were the results biased by artifactually inflated profile scores. Experimenter bias and commitment were shown to be insufficient to account for the results; a further review of Buros (1972) showed that there was no need for apprehension about the testing instruments; the statistical analyses were shown to exceed prevailing standards for research reporting; the results were shown to reflect valid findings at the .05 probability level; and the Neman et al. (1975) results for the profile measure were equally significant using either "raw" neurological scores or "scaled" neurological age scores. Zigler, Seitz, and I agreed on the need for (a) using multivariate analyses, where applicable, in studies having more than one dependent variable; (b) defining the population for which sensorimotor training procedures may be appropriately prescribed; and (c) validating the profile measure as a tool to assess neurological disorganization.
A practical and systematic review of Weibull statistics for reporting strengths of dental materials
Quinn, George D.; Quinn, Janet B.
2011-01-01
Objectives To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
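A minimal sketch of a two-parameter Weibull strength analysis of the kind reviewed above, estimating the Weibull modulus and characteristic strength from a linearized fit; the strength values below are hypothetical.

import numpy as np

# Hypothetical flexure strengths (MPa) of one dental ceramic, smallest to largest.
sigma = np.sort(np.array([612., 655., 688., 701., 723., 748., 770., 795., 821., 874.]))
n = len(sigma)

# Median-rank style probability estimator and the linearized 2-parameter Weibull
# CDF: ln(ln(1/(1-P))) = m * ln(sigma) - m * ln(sigma_0).
P = (np.arange(1, n + 1) - 0.5) / n
y = np.log(np.log(1.0 / (1.0 - P)))
m, c = np.polyfit(np.log(sigma), y, 1)

print(f"Weibull modulus m ~ {m:.2f}")
print(f"characteristic strength sigma_0 ~ {np.exp(-c / m):.1f} MPa")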
Meta-analysis of randomized clinical trials in the era of individual patient data sharing.
Kawahara, Takuya; Fukuda, Musashi; Oba, Koji; Sakamoto, Junichi; Buyse, Marc
2018-06-01
Individual patient data (IPD) meta-analysis is considered to be a gold standard when the results of several randomized trials are combined. Recent initiatives on sharing IPD from clinical trials offer unprecedented opportunities for using such data in IPD meta-analyses. First, we discuss the evidence generated and the benefits obtained by a long-established prospective IPD meta-analysis in early breast cancer. Next, we discuss a data-sharing system that has been adopted by several pharmaceutical sponsors. We review a number of retrospective IPD meta-analyses that have already been proposed using this data-sharing system. Finally, we discuss the role of data sharing in IPD meta-analysis in the future. Treatment effects can be more reliably estimated in both types of IPD meta-analyses than with summary statistics extracted from published papers. Specifically, with rich covariate information available on each patient, prognostic and predictive factors can be identified or confirmed. Also, when several endpoints are available, surrogate endpoints can be assessed statistically. Although there are difficulties in conducting, analyzing, and interpreting retrospective IPD meta-analysis utilizing the currently available data-sharing systems, data sharing will play an important role in IPD meta-analysis in the future.
Bodenburg, Sebastian; Dopslaff, Nina
2008-01-01
The Dysexecutive Questionnaire (DEX; Behavioral Assessment of the Dysexecutive Syndrome, 1996) is a standardized instrument to measure possible behavioral changes resulting from the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has also been used increasingly to address quantitative problems. Until now, there have been no more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports data on the quality of the items, the reliability and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the statistical values of the tests, a clinical severity classification of the overall scores of the 4 identified factors and of the questionnaire as a whole is carried out on the basis of quartile norms.
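A minimal sketch of the item and reliability statistics reported above (Cronbach's alpha, item difficulty, corrected item-total correlation); the rating data below are randomly generated, so the values will not match the study's.

import numpy as np

rng = np.random.default_rng(5)
items = rng.integers(0, 5, size=(191, 20)).astype(float)   # hypothetical DEX-style ratings

k = items.shape[1]
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.3f}")

# Item difficulty and corrected item-total discrimination for one item:
total_minus = items.sum(axis=1) - items[:, 2]
print("item 3 mean:", items[:, 2].mean(),
      "corrected item-total r:", np.corrcoef(items[:, 2], total_minus)[0, 1])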
Riley, Richard D.
2017-01-01
An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
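A hedged leave-one-out sketch in the spirit of the cross-validation idea described above; it standardizes each study's deviation from the summary computed without that study, but does not reproduce the paper's Vn statistic or its derived distribution.

import numpy as np

# Illustrative fixed-effect leave-one-out check (data hypothetical).
theta = np.array([0.30, 0.12, 0.25, 0.40, 0.18])   # study effect estimates
var = np.array([0.02, 0.03, 0.015, 0.05, 0.025])   # within-study variances

z_loo = []
for i in range(len(theta)):
    keep = np.arange(len(theta)) != i
    w = 1.0 / var[keep]
    pooled = np.sum(w * theta[keep]) / w.sum()      # summary without study i
    se = np.sqrt(var[i] + 1.0 / w.sum())            # SE of the difference
    z_loo.append((theta[i] - pooled) / se)

print(np.round(z_loo, 3))   # large |z| values question external validity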
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression
Chen, Yanguang
2016-01-01
In geostatistics, the Durbin-Watson test is frequently employed to detect residual serial correlation in least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China's regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
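A minimal sketch contrasting the order-dependent Durbin-Watson statistic with a Moran-style, order-free autocorrelation coefficient built from standardized residuals and a row-normalized spatial weight matrix; the residuals and weight matrix below are toy examples, not the statistics defined in the paper.

import numpy as np

e = np.array([0.5, -0.2, 0.1, -0.4, 0.3, 0.0, -0.1])   # regression residuals

# Durbin-Watson: valid only for an ordered series; DW ~ 2 means no serial correlation.
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Moran-style spatial analogue: standardized residuals with a row-normalized
# spatial weight matrix W (here a toy binary-contiguity matrix, made row-stochastic).
W = np.array([[0, 1, 0, 0, 1, 0, 0],
              [1, 0, 1, 0, 0, 1, 0],
              [0, 1, 0, 1, 0, 0, 1],
              [0, 0, 1, 0, 1, 0, 0],
              [1, 0, 0, 1, 0, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)
z = (e - e.mean()) / e.std()
moran_like = z @ W @ z / (z @ z)   # order-free autocorrelation coefficient

print(f"Durbin-Watson: {dw:.3f}, spatial autocorrelation: {moran_like:.3f}")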
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends, referred to as statistical meaningfulness, which is stricter than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r^2 and p values are calculated from regressions of the interval mean values on time. If r^2 ≥ 0.65 at p ≤ 0.05 in any of these regressions, the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
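The interval-mean procedure above is straightforward to operationalize; a minimal Python sketch with synthetic data follows (the interval count is an assumption, as the abstract does not fix it).

import numpy as np
from scipy.stats import linregress

def statistically_meaningful(t, y, n_intervals=5):
    """Interval-mean trend test: r^2 >= 0.65 at p <= 0.05 on the interval means."""
    groups = np.array_split(np.argsort(t), n_intervals)
    t_mean = np.array([t[idx].mean() for idx in groups])
    y_mean = np.array([y[idx].mean() for idx in groups])
    res = linregress(t_mean, y_mean)
    return res.rvalue ** 2 >= 0.65 and res.pvalue <= 0.05

rng = np.random.default_rng(2)
t = np.arange(100.0)
y = 0.05 * t + rng.normal(scale=2.0, size=100)   # noisy upward trend
print(statistically_meaningful(t, y))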
Empirical evidence about inconsistency among studies in a pair‐wise meta‐analysis
Turner, Rebecca M.; Higgins, Julian P. T.
2015-01-01
This paper investigates how inconsistency (as measured by the I2 statistic) among studies in a meta‐analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta‐analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta‐analyses were obtained, which can inform priors for between‐study variance. Inconsistency estimates were highest on average for binary outcome meta‐analyses of risk differences and continuous outcome meta‐analyses. For a planned binary outcome meta‐analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta‐analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta‐analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta‐analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26679486
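For reference, the I2 statistic discussed above is derived from Cochran's Q; a minimal sketch with hypothetical study data follows.

import numpy as np

theta = np.array([0.42, 0.18, 0.55, 0.30, 0.10])   # log odds ratios (hypothetical)
var = np.array([0.04, 0.02, 0.09, 0.03, 0.05])     # within-study variances

w = 1.0 / var
pooled = np.sum(w * theta) / w.sum()
Q = np.sum(w * (theta - pooled) ** 2)              # Cochran's Q
df = len(theta) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0                # percent inconsistency

print(f"Q = {Q:.2f} on {df} df, I2 = {I2:.1f}%")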
A new principle for the standardization of long paragraphs for reading speed analysis.
Radner, Wolfgang; Radner, Stephan; Diendorfer, Gabriela
2016-01-01
To investigate the reliability, validity, and statistical comparability of long paragraphs that were developed to be equivalent in construction and difficulty. Seven long paragraphs were developed that were equal in syntax, morphology, and number and position of words (111), with the same number of syllables (179) and number of characters (660). For validity analyses, the paragraphs were compared with the mean reading speed of a set of seven sentence optotypes of the RADNER Reading Charts (mean of 7 × 14 = 98 words read). Reliability analyses were performed by calculating the Cronbach's alpha value and the corrected total item correlation. Sixty participants (aged 20-77 years) read the paragraphs and the sentences (distance 40 cm; font: Times New Roman 12 pt). Test items were presented randomly; reading length was measured with a stopwatch. Reliability analysis yielded a Cronbach's alpha value of 0.988. When the long paragraphs were compared in pairwise fashion, significant differences were found in 13 of the 21 pairs (p < 0.05). In two sequences of three paragraphs each and in eight pairs of paragraphs, the paragraphs did not differ significantly, and these paragraph combinations are therefore suitable for comparative research studies. The mean reading speed was 173.34 ± 24.01 words per minute (wpm) for the long paragraphs and 198.26 ± 28.60 wpm for the sentence optotypes. The maximum difference in reading speed was 5.55 % for the long paragraphs and 2.95 % for the short sentence optotypes. The correlation between long paragraphs and sentence optotypes was high (r = 0.9243). Despite good reliability and equivalence in construction and degree of difficulty, a statistically significant difference in reading speed can occur between long paragraphs. Since statistical significance should be dependent only on the persons tested, either standardizing long paragraphs for statistical equality of reading speed measurements or increasing the number of presented paragraphs is recommended for comparative investigations.
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance explains the insignificant narrow-sense and significant broad-sense heritability by using a combination of careful statistical epistatic analyses and functional genetic experiments.
A catalogue of [Fe/H] determinations
NASA Astrophysics Data System (ADS)
Cayrel de Strobel, G.; Bentolila, C.; Hauck, B.; Curchod, A.
1980-09-01
A catalog of iron/hydrogen abundance ratios for 628 stars is compiled based on 1109 published values. The catalog consists of (1) a table of absolute iron abundance determinations in the solar photosphere as compiled by Blackwell (1974); (2) the iron/hydrogen abundances of 628 stars in the form of logarithmic differences between iron abundances in the given star and a standard star, obtained from analyses of high-dispersion spectra as well as useful stellar spectroscopic and photometric parameters; and (3) indications of the mean dispersion and wavelength interval used in the analyses. In addition, statistics on the distributions of the number of determinations per star and the apparent magnitudes and spectral types of the stars are presented.
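For reference, the tabulated quantity is the standard logarithmic abundance difference, here taken relative to a standard star rather than the Sun: [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_standard.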
NASA Astrophysics Data System (ADS)
Powell, P. E.
Educators have recently come to consider inquiry-based instruction a more effective method of instruction than didactic instruction. Experience-based learning theory suggests that student performance is linked to teaching method. However, research is limited on inquiry teaching and its effectiveness in preparing students to perform well on standardized tests. The purpose of the study was to investigate whether one of these two teaching methodologies was more effective in increasing student performance on standardized science tests. The quasi-experimental quantitative study comprised two stages. Stage 1 used a survey to identify the teaching methods of a convenience sample of 57 teacher participants and determined the level of inquiry used in instruction to place participants into instructional groups (the independent variable). Stage 2 used analysis of covariance (ANCOVA) to compare posttest scores on a standardized exam by teaching method. Additional analyses were conducted to examine the differences in science achievement by ethnicity, gender, and socioeconomic status by teaching methodology. Results demonstrated a statistically significant gain in test scores when taught using inquiry-based instruction. Subpopulation analyses indicated all groups showed improved mean standardized test scores except African American students. The findings benefit teachers and students by presenting data supporting a method of content delivery that increases teacher efficacy and produces students with a greater cognition of science content that meets the school's mission and goals.
Moestue, Helen
2009-08-01
To examine the potential of anthropometry as a tool to measure gender discrimination, with particular attention to the WHO growth standards. Surveillance data collected from 1990 to 1999 were analysed. Height-for-age Z-scores were calculated using three norms: the WHO standards, the 1978 National Center for Health Statistics (NCHS) reference and the 1990 British growth reference (UK90). Bangladesh. Boys and girls aged 6-59 months (n = 504 358). The three sets of growth curves provided conflicting pictures of the relative growth of girls and boys by age and over time. Conclusions on sex differences in growth depended also on the method used to analyse the curves, be it according to the shape or the relative position of the sex-specific curves. The shapes of the WHO-generated curves uniquely implied that Bangladeshi girls faltered faster or caught up slower than boys throughout their pre-school years, a finding consistent with the literature. In contrast, analysis of the relative position of the curves suggested that girls had higher WHO Z-scores than boys below 24 months of age. Further research is needed to help establish whether and how the WHO international standards can measure gender discrimination in practice, which continues to be a serious problem in many parts of the world.
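Growth-reference Z-scores of the kind compared above are typically computed with the LMS method; a minimal sketch follows, with placeholder LMS values (real values come from the WHO, NCHS or UK90 tables).

import numpy as np

def lms_zscore(x, L, M, S):
    """Height-for-age Z-score via the LMS method used by growth references."""
    return (np.power(x / M, L) - 1.0) / (L * S) if L != 0 else np.log(x / M) / S

# Hypothetical LMS values for one age/sex stratum, applied to a height of 82 cm.
print(lms_zscore(82.0, L=1.0, M=85.7, S=0.035))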
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive in response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
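A minimal sketch of the incremental cost-effectiveness ratio (ICER) calculation that underlies results like those above; all figures are hypothetical, and the published analysis used a full Markov model rather than this one-line comparison.

# Hedged one-year cost-effectiveness comparison (all figures hypothetical).
cost_prucalopride, qaly_prucalopride = 1250.0, 0.780
cost_laxatives, qaly_laxatives = 800.0, 0.730

icer = (cost_prucalopride - cost_laxatives) / (qaly_prucalopride - qaly_laxatives)
print(f"ICER: EUR {icer:,.0f} per QALY")   # compare with a EUR 20,000/QALY threshold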
Zheng, Jie; Harris, Marcelline R; Masci, Anna Maria; Lin, Yu; Hero, Alfred; Smith, Barry; He, Yongqun
2016-09-14
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high-throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs . The Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.
Kuss, O
2015-03-30
Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers generally delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without using continuity corrections have been proposed. In this paper, we collect them and compare them by simulation. This simulation study tries to mirror real-life situations as completely as possible by deriving true underlying parameters from empirical data on actually performed meta-analyses. It is shown that for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without using continuity corrections. Interestingly, all of them are truly random effects models, and so even the current standard method for very sparse data recommended by the Cochrane collaboration, the Yusuf-Peto odds ratio, can be improved on. For actual analysis, we recommend using beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example where the original analysis ignores 35 double-zero studies, and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting. Copyright © 2014 John Wiley & Sons, Ltd.
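A hedged sketch of the beta-binomial likelihood machinery recommended above, fitted here to a single treatment arm by maximum likelihood; a full analysis would model both arms jointly via regression, and the event counts below are hypothetical.

import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

# Events k out of n in one arm across studies, zero-event studies included.
k = np.array([0, 2, 0, 1, 3])
n = np.array([50, 48, 52, 61, 55])

def neg_loglik(params):
    a, b = np.exp(params)                      # keep Beta parameters positive
    logcomb = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return -np.sum(logcomb + betaln(k + a, n - k + b) - betaln(a, b))

fit = minimize(neg_loglik, x0=[0.0, 2.0], method="Nelder-Mead")
a, b = np.exp(fit.x)
print(f"marginal event probability: {a / (a + b):.4f}")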
Pförtner, T-K
2016-06-01
A common indicator for the measurement of relative poverty is the disposable income of a household. Current research introduces the living standard approach as an alternative concept for describing and measuring relative poverty. This study compares both approaches with regard to the subjective health status of the German population, and provides theoretical implications for the use of the income and living standard approaches in health research. Analyses are based on the German Socio-Economic Panel (GSOEP) from the year 2011, which includes 12 290 private households and 21 106 survey members. Self-rated health was based on a subjective assessment of general health status. Income poverty is based on the equalised disposable income and is defined by a threshold of 60% of the median-based average income. A person is denoted as deprived (inadequate living standard) if 3 or more of 11 living standard items are lacking for financial reasons. To calculate the discriminative power of both poverty indicators, descriptive analyses and stepwise logistic regression models were applied separately for men and women, adjusted for age, residence, nationality, educational level, occupational status and marital status. The results of the stepwise regression revealed a stronger poverty-health relationship for the living standard indicator. After adjusting for all control variables and the respective poverty indicator, income poverty was not statistically significantly associated with poor subjective health status among men (OR Men: 1.33; 95% CI: 1.00-1.77) or women (OR Women: 0.98; 95% CI: 0.78-1.22). In contrast, the association between deprivation and subjective health status was statistically significant for men (OR Men: 2.00; 95% CI: 1.57-2.52) and women (OR Women: 2.11; 95% CI: 1.76-2.64). The results of the present study indicate that the income and living standard approaches measure different dimensions of poverty. Compared with the income approach, the living standard approach captures shortages of material wealth more directly and is relatively robust to gender differences. This study expands the current debate about complementary research on the association between poverty and health. © Georg Thieme Verlag KG Stuttgart · New York.
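A minimal sketch of the kind of adjusted logistic regression reported above, extracting an odds ratio and 95% CI for a deprivation indicator; the data are simulated under a hypothetical model, not drawn from the GSOEP.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
deprived = rng.integers(0, 2, 500)                  # living-standard poverty flag
age = rng.uniform(18, 80, 500)
logit_p = -2.0 + 0.7 * deprived + 0.01 * age        # hypothetical true model
poor_health = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([deprived, age]))
fit = sm.Logit(poor_health, X).fit(disp=0)
print(np.exp(fit.params[1]), np.exp(fit.conf_int()[1]))   # OR and 95% CI for deprivation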
Kratochwill, Thomas R; Levin, Joel R
2014-04-01
In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
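As an aside on the statistic itself, the toy R sketch below contrasts cluster extent with cluster mass on a one-dimensional statistic profile; it illustrates only the definition of the mass statistic, not the paper's RFT-based p-value derivation.

```r
# Clusters are runs of values above a cluster-forming threshold u:
# 'extent' counts suprathreshold points, 'mass' sums their excess over u.
set.seed(1)
z <- c(rnorm(40), rnorm(10, mean = 2.5), rnorm(40))  # bump = focal signal
u <- 1.6

runs   <- rle(z > u)
ends   <- cumsum(runs$lengths)
starts <- ends - runs$lengths + 1
idx    <- which(runs$values)                         # suprathreshold runs

extent <- runs$lengths[idx]
mass   <- mapply(function(s, e) sum(z[s:e] - u), starts[idx], ends[idx])
cbind(extent, mass)
```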
Meta-analysis: Problems with Russian Publications.
Verbitskaya, E V
2015-01-01
Meta-analysis is a powerful tool to identify evidence-based medical technologies (interventions) for use in everyday practice. Meta-analysis uses statistical approaches to combine results from multiple studies in an effort to increase power (over individual studies), improve estimates of the size of the effect and/or to resolve uncertainty when reports disagree. Meta-analysis is a quantitative, formal study design used to systematically assess previous research studies to derive conclusions from that research. Meta-analysis may provide a more precise estimate of the effect of a treatment or risk factor for a disease, or other outcomes, than any individual study contributing to the pooled analysis. We have quite a substantial number of Russian medical publications, but not so many meta-analyses published in Russian. Russian publications are not often cited in English-language papers. A total of 90% of clinical studies included in published meta-analyses incorporate only English-language papers. International studies or papers with Russian co-authors are published in English. The main question is: what is the problem with inclusion of Russian medical publications in meta-analysis? The main reasons are the following: 1) It is difficult to find Russian papers and difficult to work with them and with Russian journals: a. Only a few Russian biomedical journals are translated into English and included in databases (PubMed, Scopus and others), despite the fact that all of them have English-language abstracts. b. The majority of meta-analysis authors use citation management software such as Mendeley, Reference Manager, ProCite, EndNote, and others. These citation management systems allow scientists to organize their own literature databases from internet searches and have add-ons for Office programs, which makes the process of literature citation very convenient. The internet sites of the majority of international journals have built-in tools for saving citations to reference manager software. The majority of articles in Russian journals cannot be captured by citation management systems: they do not have special coding of article descriptors. c. Some journals still post PDF files of the whole journal issue without dividing it into articles and do not provide any descriptors, making manual, time-consuming input of information the only possibility. Moreover, context search of article content is unavailable to search engines. 2) The quality of research. This problem has been discussed for more than twenty years, yet we still have too many publications of poor quality in study design and statistical analysis. With the exception of pharmacological clinical trials designed and supervised by the international pharmaceutical industry, many interventional studies conducted in Russia have methodological flaws conferring a high risk of bias: a. absence of adequate control; b. no standard endpoints, duration of therapy and follow-up; c. absence of randomization and blinding; d. low power: sample sizes are calculated (if calculated at all) in such a way that the main goal is to have as small a sample size as possible. Very often statisticians have to solve the problem of how to justify the small number of subjects the sponsor could afford, instead of calculating the sample size needed to reach adequate power; e. no standards of statistical analysis; f. Russian journals do not have standards for the description and presentation of study results, in particular results of statistical analysis (a reader often cannot even tell what is presented: standard deviation (SD) or standard error of the mean (SEM)). We have long-standing experience in analysing the methodological and statistical quality of Russian biomedical publications and have found statistical and methodological errors and a high risk of bias in up to 80% of publications. In our practice, we tried to perform two meta-analyses for two local pharmaceutical products for prevention of stroke recurrence. For the first product, we did not find even two Russian-language studies suitable for the analysis (incomparable populations, different designs, endpoints, doses, etc.). For the second product, only four studies had comparable populations and standard internationally approved scales for effectiveness analysis. However, the combinations of scales, the length of treatment and follow-up differed widely, so that we could combine the results of only two or three studies for each endpoint. Russian researchers have to follow internationally recognised standards in study design, selection of endpoints, timelines and therapy regimens, data analysis and presentation of results. Russian journals need to develop consolidated rules for authors of clinical trials and epidemiological research, bringing result reporting close to international standards. Here the international EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research, http://www.equator-network.org/) is one to be taken into account. In addition, Russian journals have to improve their online information for better interaction with search engines and citation managers.
Microplate-based filter paper assay to measure total cellulase activity.
Xiao, Zhizhuang; Storms, Reginald; Tsang, Adrian
2004-12-30
The standard filter paper assay (FPA) published by the International Union of Pure and Applied Chemistry (IUPAC) is widely used to determine total cellulase activity. However, the IUPAC method is not suitable for the parallel analyses of large sample numbers. We describe here a microplate-based method for assaying large sample numbers. To achieve this, we reduced the enzymatic reaction volume to 60 µl from the 1.5 ml used in the IUPAC method. The modified 60-µl format FPA can be carried out in 96-well assay plates. Statistical analyses showed that the cellulase activities of commercial cellulases from Trichoderma reesei and Aspergillus species determined with our 60-µl format FPA were not significantly different from the activities measured with the standard FPA. Our results also indicate that the 60-µl format FPA is quantitative and highly reproducible. Moreover, the addition of excess beta-glucosidase increased the sensitivity of the assay by up to 60%. 2004 Wiley Periodicals, Inc.
Gribova, N P; Iudel'son, Ia B; Golubev, V L; Abramenkova, I V
2003-01-01
To carry out a differential diagnosis of two facial dyskinesia (FD) models--facial hemispasm (FH) and facial paraspasm (FP)--a combined program of electroneuromyographic (ENMG) examination was created, using statistical analyses that included object identification based on a hybrid neural network with an adaptive fuzzy logic method, as well as standard statistical tests (Wilcoxon, Student). In FH, a lesion of the peripheral facial neuromotor apparatus predominated, with augmentation of interneuron function at the segmental and suprasegmental stem levels. In FP, primary afferent strengthening in the mimic muscles was accompanied by increased motor neuron activity and reciprocal augmentation of the interneurons inhibiting the motor portion of the trigeminal (V) nerve. The mathematical algorithm for recognition of ENMG results worked out in the study provides precise differentiation of the two FD models and opens possibilities for the differential diagnosis of other facial motor disorders.
Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-05-01
The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analysis by the majority logic criteria. In this paper, we use the majority logic criteria to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Lui J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion which has a broader spectrum of analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
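For orientation, the R sketch below computes a few of the listed components on a random stand-in image; the matrix is hypothetical, and the GLCM here uses horizontal neighbours only, so this illustrates the measures rather than reproducing the paper's analysis.

```r
# Entropy, mean of absolute deviation (MAD), and two GLCM-based measures
# for a hypothetical 8-bit image 'img' (e.g., a cipher image).
set.seed(8)
img <- matrix(sample(0:255, 64 * 64, replace = TRUE), 64, 64)

p <- tabulate(as.vector(img) + 1, nbins = 256) / length(img)
entropy <- -sum(p[p > 0] * log2(p[p > 0]))     # bits per pixel

mad_stat <- mean(abs(img - mean(img)))         # mean absolute deviation

i <- as.vector(img[, -ncol(img)]); j <- as.vector(img[, -1])
glcm <- table(i, j) / length(i)                # horizontal co-occurrences
d <- abs(outer(as.numeric(rownames(glcm)), as.numeric(colnames(glcm)), "-"))
contrast <- sum(glcm * d^2)
energy   <- sum(glcm^2)
c(entropy = entropy, MAD = mad_stat, contrast = contrast, energy = energy)
```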
A phylogenetic transform enhances analysis of compositional microbiota data.
Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A
2017-02-15
Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
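The balance computed at each node of the tree is an ordinary isometric log-ratio; the base-R sketch below shows that formula for a single binary partition of parts, under the assumption that this matches the transform's per-node construction, with a made-up four-taxon composition.

```r
# ILR balance for one partition: sqrt(rs/(r+s)) * log(gm(num)/gm(den)),
# where r and s are the sizes of the numerator and denominator sets.
gmean <- function(v) exp(mean(log(v)))

ilr_balance <- function(x, num, den) {
  r <- length(num); s <- length(den)
  sqrt(r * s / (r + s)) * log(gmean(x[num]) / gmean(x[den]))
}

x <- c(taxonA = 0.5, taxonB = 0.2, taxonC = 0.2, taxonD = 0.1)
ilr_balance(x, num = c("taxonA", "taxonB"), den = c("taxonC", "taxonD"))
```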
Tipping points in the arctic: eyeballing or statistical significance?
Carstensen, Jacob; Weydmann, Agata
2012-02-01
Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, as an early warning signal, were observed already in 1990. We recommend employing the proposed methods more systematically for analyzing tipping points to document effects of climate change in the Arctic.
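A minimal R sketch of the variance-based early-warning idea, on simulated data rather than the actual sea-ice record, is given below.

```r
# Compare a series with its trailing-window standard deviation: rising
# fluctuations can precede the shift in the mean. 'ice' is simulated.
set.seed(42)
t   <- 1:60
ice <- c(rnorm(30, 10, 0.3), rnorm(10, 10, 0.9), rnorm(20, 7, 0.9))

roll_sd <- sapply(seq_along(ice), function(i) {
  w <- ice[max(1, i - 9):i]            # trailing 10-point window
  if (length(w) >= 5) sd(w) else NA
})
plot(t, ice, type = "l")
lines(t, roll_sd * 5, col = "red")     # scaled for display
```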
OTD Observations of Continental US Ground and Cloud Flashes
NASA Technical Reports Server (NTRS)
Koshak, William
2007-01-01
Lightning optical flash parameters (e.g., radiance, area, duration, number of optical groups, and number of optical events) derived from almost five years of Optical Transient Detector (OTD) data are analyzed. Hundreds of thousands of OTD flashes occurring over the continental US are categorized according to flash type (ground or cloud flash) using US National Lightning Detection Network™ (NLDN) data. The statistics of the optical characteristics of the ground and cloud flashes are inter-compared on an overall basis, and as a function of ground flash polarity. A standard two-distribution hypothesis test is used to inter-compare the population means of a given lightning parameter for the two flash types. Given the differences in the statistics of the optical characteristics, it is suggested that statistical analyses (e.g., Bayesian inference) of the space-based optical measurements might make it possible to successfully discriminate ground and cloud flashes a reasonable percentage of the time.
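One standard reading of such a two-distribution test of population means is a Welch two-sample t-test; the R sketch below applies it to simulated flash radiances, which are placeholders rather than OTD measurements.

```r
# Welch two-sample comparison of mean (log) radiance for the two flash
# types; data are simulated stand-ins.
set.seed(7)
ground_radiance <- rlnorm(500, meanlog = 2.0, sdlog = 0.6)
cloud_radiance  <- rlnorm(800, meanlog = 2.3, sdlog = 0.5)

t.test(log(ground_radiance), log(cloud_radiance))
```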
Does educational status impact adult mortality in Denmark? A twin approach.
Madsen, Mia; Andersen, Anne-Marie Nybo; Christensen, Kaare; Andersen, Per Kragh; Osler, Merete
2010-07-15
To disentangle an independent effect of educational status on mortality risk from direct and indirect selection mechanisms, the authors used a discordant twin pair design, which allowed them to isolate the effect of education by means of adjustment for genetic and environmental confounding per design. The study is based on data from the Danish Twin Registry and Statistics Denmark. Using Cox regression, they estimated hazard ratios for mortality according to the highest attained education among 5,260 monozygotic and 11,088 dizygotic same-sex twin pairs born during 1921-1950 and followed during 1980-2008. Both standard cohort and intrapair analyses were conducted separately for zygosity, gender, and birth cohort. Educational differences in mortality were demonstrated in the standard cohort analyses but attenuated in the intrapair analyses in all subgroups but men born during 1921-1935, and no effect modification by zygosity was observed. Hence, the results are most compatible with an effect of early family environment in explaining the educational inequality in mortality. However, large educational differences were still reflected in mortality risk differences within twin pairs, thus supporting some degree of independent effect of education. In addition, the effect of education may be more pronounced in older cohorts of Danish men.
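A hedged R sketch of the two model types, standard cohort versus intrapair, on simulated twin data follows; stratifying on pair makes each twin pair its own control, which is the design feature described above.

```r
# Cox models for twin data: unstratified (standard cohort) versus
# stratified on pair (intrapair). Data are simulated placeholders.
library(survival)
set.seed(2)
n_pairs <- 500
twins <- data.frame(
  pair      = rep(1:n_pairs, each = 2),
  education = rbinom(2 * n_pairs, 1, 0.5),   # 1 = higher education
  time      = rexp(2 * n_pairs, rate = 0.02),
  dead      = rbinom(2 * n_pairs, 1, 0.7)
)

coxph(Surv(time, dead) ~ education, data = twins)                 # cohort
coxph(Surv(time, dead) ~ education + strata(pair), data = twins)  # intrapair
```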
Stulberg, Jonah J; Pavey, Emily S; Cohen, Mark E; Ko, Clifford Y; Hoyt, David B; Bilimoria, Karl Y
2017-02-01
Changes to resident duty hour policies in the Flexibility in Duty Hour Requirements for Surgical Trainees (FIRST) trial could impact hospitalized patients' length of stay (LOS) by altering care coordination. Length of stay can also serve as a reflection of all complications, particularly those not captured in the FIRST trial (eg pneumothorax from central line). Programs were randomized to either maintaining current ACGME duty hour policies (Standard arm) or more flexible policies waiving rules on maximum shift lengths and time off between shifts (Flexible arm). Our objective was to determine whether flexibility in resident duty hours affected LOS in patients undergoing high-risk surgical operations. Patients were identified who underwent hepatectomy, pancreatectomy, laparoscopic colectomy, open colectomy, or ventral hernia repair (2014-2015 academic year) at 154 hospitals participating in the FIRST trial. Two procedure-stratified evaluations of LOS were undertaken: multivariable negative binomial regression analysis on LOS and a multivariable logistic regression analysis on the likelihood of a prolonged LOS (>75th percentile). Before any adjustments, there was no statistically significant difference in overall mean LOS between study arms (Flexible Policy: mean [SD] LOS 6.03 [5.78] days vs Standard Policy: mean LOS 6.21 [5.82] days; p = 0.74). In adjusted analyses, there was no statistically significant difference in LOS between study arms overall (incidence rate ratio for Flexible vs Standard: 0.982; 95% CI, 0.939-1.026; p = 0.41) or for any individual procedures. In addition, there was no statistically significant difference in the proportion of patients with prolonged LOS between study arms overall (Flexible vs Standard: odds ratio = 1.028; 95% CI, 0.871-1.212) or for any individual procedures. Duty hour flexibility had no statistically significant effect on LOS in patients undergoing complex intra-abdominal operations. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
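The two named analyses translate directly into standard R model calls; the sketch below uses simulated data and a single arm term, whereas the trial's models were procedure-stratified and adjusted.

```r
# Negative binomial regression on LOS, plus logistic regression on
# prolonged LOS (> 75th percentile). Data are simulated placeholders.
library(MASS)
set.seed(5)
n <- 1000
los_data <- data.frame(
  arm = factor(sample(c("Flexible", "Standard"), n, replace = TRUE)),
  los = rnbinom(n, size = 1.2, mu = 6)
)

fit_nb <- glm.nb(los ~ arm, data = los_data)
exp(coef(fit_nb)["armStandard"])       # incidence rate ratio vs Flexible

cut75 <- quantile(los_data$los, 0.75)
fit_lr <- glm(I(los > cut75) ~ arm, family = binomial, data = los_data)
exp(coef(fit_lr)["armStandard"])       # odds ratio for prolonged LOS
```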
GEOquery: a bridge between the Gene Expression Omnibus (GEO) and BioConductor.
Davis, Sean; Meltzer, Paul S
2007-07-15
Microarray technology has become a standard molecular biology tool. Experimental data have been generated on a huge number of organisms, tissue types, treatment conditions and disease states. The Gene Expression Omnibus (Barrett et al., 2005), developed by the National Center for Biotechnology Information (NCBI) at the National Institutes of Health, is a repository of nearly 140,000 gene expression experiments. The BioConductor project (Gentleman et al., 2004) is an open-source and open-development software project built in the R statistical programming environment (R Development Core Team, 2005) for the analysis and comprehension of genomic data. The tools contained in the BioConductor project represent many state-of-the-art methods for the analysis of microarray and genomics data. We have developed a software tool that allows access to the wealth of information within GEO directly from BioConductor, eliminating many of the formatting and parsing problems that have made such analyses labor-intensive in the past. The software, called GEOquery, effectively establishes a bridge between GEO and BioConductor. Easy access to GEO data from BioConductor will likely lead to new analyses of GEO data using novel and rigorous statistical and bioinformatic tools. Facilitating analyses and meta-analyses of microarray data will increase the efficiency with which biologically important conclusions can be drawn from published genomic data. GEOquery is available as part of the BioConductor project.
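A minimal usage sketch follows; getGEO is the package's documented entry point, but the accession below is a placeholder rather than one analysed in the paper.

```r
# Pull a GEO series straight into BioConductor data structures.
library(GEOquery)
library(Biobase)

gse  <- getGEO("GSE1000", GSEMatrix = TRUE)  # hypothetical accession
eset <- gse[[1]]                             # an ExpressionSet
dim(exprs(eset))                             # probes x samples matrix
head(pData(eset))                            # sample annotation
```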
Path loss variation of on-body UWB channel in the frequency bands of IEEE 802.15.6 standard.
Goswami, Dayananda; Sarma, Kanak C; Mahanta, Anil
2016-06-01
The wireless body area network (WBAN) has been gaining tremendous attention among researchers and academics for its envisioned applications in healthcare services. Ultra wideband (UWB) radio technology is considered an excellent air interface for communication among body area network devices. Characterisation and modelling of channel parameters are an essential prerequisite for the development of a reliable communication system. The path loss of the on-body UWB channel for each frequency band defined in the IEEE 802.15.6 standard is experimentally determined. The parameters of the path loss model are statistically determined by analysing measurement data. Both line-of-sight and non-line-of-sight channel conditions are considered in the measurements. Variations of parameter values with the size of the human body are analysed, along with the variation of parameter values with the surrounding environment. It is observed that the parameters of the path loss model vary with the frequency band as well as with body size and surrounding environment. The derived parameter values are specific to the particular frequency bands of the IEEE 802.15.6 standard and will be useful for the development of efficient UWB WBAN systems.
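The usual statistical treatment of such measurements is a log-distance fit; the sketch below assumes that standard model form (an assumption, since the abstract does not spell it out) and uses invented distance/path-loss pairs.

```r
# Fit PL(d) = PL0 + 10 * n * log10(d / d0) + S by least squares;
# the slope estimates the path loss exponent n, the residual spread
# the shadowing standard deviation. Data are made up.
d0 <- 0.1                                   # reference distance (m)
d  <- c(0.15, 0.2, 0.3, 0.45, 0.6, 0.8, 1.0)
pl <- c(48.2, 51.0, 55.9, 60.1, 62.8, 66.3, 68.9)   # dB

fit <- lm(pl ~ I(10 * log10(d / d0)))
coef(fit)            # intercept = PL0 at d0; slope = exponent n
sd(residuals(fit))   # rough shadowing sigma (dB)
```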
Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.
Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve
2013-10-01
The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacologic interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document is prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team as specified in the study protocol, and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events, is described. The primary, secondary and tertiary outcomes are described along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the study. This will minimise analytic bias and conforms to current best practice in conducting clinical trials. © 2013 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
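A hedged numerical sketch of that Bayesian screening logic follows; with a gamma prior on the Poisson background rate, the posterior predictive for a future count is negative binomial, and an unusually high count can be flagged by its tail probability. The counts and prior below are illustrative, not from the report.

```r
# Gamma-Poisson screening for contaminated detectors.
hist_counts <- c(0, 1, 0, 2, 0, 0, 1, 0, 3, 1)  # new-detector backgrounds

a0 <- 0.5; b0 <- 0                  # vague gamma prior (shape, rate)
a  <- a0 + sum(hist_counts)         # posterior shape
b  <- b0 + length(hist_counts)      # posterior rate

# P(count >= k) under the posterior predictive, NB(size = a, prob = b/(b+1))
k <- 5
pnbinom(k - 1, size = a, prob = b / (b + 1), lower.tail = FALSE)
```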
Lechtzin, Noah; Busse, Anne M; Smith, Michael T; Grossman, Stuart; Nesbit, Suzanne; Diette, Gregory B
2010-09-01
Bone marrow aspiration and biopsy (BMAB) is painful when performed with only local anesthetic. Our objective was to determine whether viewing nature scenes and listening to nature sounds can reduce pain during BMAB. This was a randomized, controlled clinical trial. Adult patients undergoing outpatient BMAB with only local anesthetic were assigned to use either a nature scene with accompanying nature sounds, city scene with city sounds, or standard care. The primary outcome was a visual analog scale (0-10) of pain. Prespecified secondary analyses included categorizing pain as mild and moderate to severe and using multiple logistic regression to adjust for potential confounding variables. One hundred and twenty (120) subjects were enrolled: 44 in the Nature arm, 39 in the City arm, and 37 in the Standard Care arm. The mean pain scores, which were the primary outcome, were not significantly different between the three arms. A higher proportion in the Standard Care arm had moderate-to-severe pain (pain rating ≥4) than in the Nature arm (78.4% versus 60.5%), though this was not statistically significant (p = 0.097). This difference was statistically significant after adjusting for differences in the operators who performed the procedures (odds ratio = 3.71, p = 0.02). We confirmed earlier findings showing that BMAB is poorly tolerated. While mean pain scores were not significantly different between the study arms, secondary analyses suggest that viewing a nature scene while listening to nature sounds is a safe, inexpensive method that may reduce pain during BMAB. This approach should be considered to alleviate pain during invasive procedures.
Furlan, Leonardo; Sterr, Annette
2018-01-01
Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC, and thereby caused mostly by random measurement error as opposed to learning. We suggest therefore that motor learning studies complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
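The two statistics reduce to two lines of arithmetic, sketched below with a made-up baseline sample and reliability index.

```r
# SEM and MDC from baseline scores and a test-retest reliability (ICC).
baseline <- c(12.1, 10.8, 13.5, 11.9, 12.7, 10.2, 13.1, 11.4)
icc <- 0.88

sem   <- sd(baseline) * sqrt(1 - icc)       # standard error of measurement
mdc95 <- sem * qnorm(0.975) * sqrt(2)       # minimal detectable change (95%)
c(SEM = sem, MDC95 = mdc95)
```

An observed change smaller than mdc95 would, on this logic, be attributed mostly to measurement error rather than to learning.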
Statistics for NAEG: past efforts, new results, and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.
A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasingam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
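A minimal Bland-Altman sketch on invented paired measurements follows; Deming regression for the method-comparison setting is available in contributed packages (e.g., mcr), noted here as a pointer rather than a prescription.

```r
# Bland-Altman agreement between two quantitative assays; 'x' and 'y'
# are invented paired measurements (e.g., variant allele fractions).
x <- c(5.1, 12.4, 20.2, 33.8, 41.0, 55.6, 60.3, 72.9)
y <- c(5.9, 11.8, 21.5, 35.1, 40.2, 57.0, 62.1, 74.4)

avg  <- (x + y) / 2
dif  <- y - x
bias <- mean(dif)                          # constant (systematic) error
loa  <- bias + c(-1, 1) * 1.96 * sd(dif)   # 95% limits of agreement

plot(avg, dif, xlab = "Mean of methods", ylab = "Difference (y - x)")
abline(h = c(bias, loa), lty = c(1, 2, 2))
```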
Economic and outcomes consequences of TachoSil®: a systematic review.
Colombo, Giorgio L; Bettoni, Daria; Di Matteo, Sergio; Grumi, Camilla; Molon, Cinzia; Spinelli, Daniela; Mauro, Gaetano; Tarozzo, Alessia; Bruno, Giacomo M
2014-01-01
TachoSil(®) is a medicated sponge coated with human fibrinogen and human thrombin. It is indicated as a support treatment in adult surgery to improve hemostasis, promote tissue sealing, and support sutures when standard surgical techniques are insufficient. This review systematically analyses the international scientific literature relating to the use of TachoSil in hemostasis and as a surgical sealant, from the point of view of its economic impact. We carried out a systematic review of the PubMed literature up to November 2013. Based on the selection criteria, papers were grouped according to the following outcomes: reduction of time to hemostasis; decrease in length of hospital stay; and decrease in postoperative complications. Twenty-four scientific papers were screened, 13 (54%) of which were randomized controlled trials and included a total of 2,116 patients, 1,055 of whom were treated with TachoSil. In the clinical studies carried out in patients undergoing hepatic, cardiac, or renal surgery, the time to hemostasis obtained with TachoSil was lower (1-4 minutes) than the time measured with other techniques and hemostatic drugs, with statistically significant differences. Moreover, in 13 of 15 studies, TachoSil showed a statistically significant reduction in postoperative complications in comparison with the standard surgical procedure. The range of the observed decrease in the length of hospital stay for TachoSil patients was 2.01-3.58 days versus standard techniques, with a statistically significant difference in favor of TachoSil in eight of 15 studies. This analysis shows that TachoSil has a role as a supportive treatment in surgery to improve hemostasis and promote tissue sealing when standard techniques are insufficient, with a consequent decrease in postoperative complications and hospital costs.
Gorgolewski, Krzysztof J; Varoquaux, Gael; Rivera, Gabriel; Schwartz, Yannick; Sochat, Vanessa V; Ghosh, Satrajit S; Maumet, Camille; Nichols, Thomas E; Poline, Jean-Baptiste; Yarkoni, Tal; Margulies, Daniel S; Poldrack, Russell A
2016-01-01
NeuroVault.org is dedicated to storing outputs of analyses in the form of statistical maps, parcellations and atlases, a unique strategy that contrasts with most neuroimaging repositories that store raw acquisition data or stereotaxic coordinates. Such maps are indispensable for performing meta-analyses, validating novel methodology, and deciding on precise outlines for regions of interest (ROIs). NeuroVault is open to maps derived from both healthy and clinical populations, as well as from various imaging modalities (sMRI, fMRI, EEG, MEG, PET, etc.). The repository uses modern web technologies such as interactive web-based visualization, cognitive decoding, and comparison with other maps to provide researchers with efficient, intuitive tools to improve the understanding of their results. Each dataset and map is assigned a permanent Universal Resource Locator (URL), and all of the data is accessible through a REST Application Programming Interface (API). Additionally, the repository supports the NIDM-Results standard and has the ability to parse outputs from popular FSL and SPM software packages to automatically extract relevant metadata. This ease of use, modern web-integration, and pioneering functionality holds promise to improve the workflow for making inferences about and sharing whole-brain statistical maps. Copyright © 2015 Elsevier Inc. All rights reserved.
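Programmatic access might look like the R sketch below; note that the exact endpoint path and response fields are assumptions for illustration, not a documented contract.

```r
# Query a REST endpoint for image metadata and parse the JSON response.
library(httr)
library(jsonlite)

resp <- GET("https://neurovault.org/api/images/", query = list(format = "json"))
stopifnot(status_code(resp) == 200)
imgs <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
str(imgs, max.level = 1)   # inspect the returned metadata
```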
Pace, M.N.; Rosentreter, J.J.; Bartholomay, R.C.
2001-01-01
Idaho State University and the US Geological Survey, in cooperation with the US Department of Energy, conducted a study to determine and evaluate strontium distribution coefficients (Kds) of subsurface materials at the Idaho National Engineering and Environmental Laboratory (INEEL). The Kds were determined to aid in assessing the variability of strontium Kds and their effects on chemical transport of strontium-90 in the Snake River Plain aquifer system. Data from batch experiments done to determine strontium Kds of five sediment-infill samples and six standard reference material samples were analyzed by using multiple linear regression analysis and the stepwise variable-selection method in the statistical program, Statistical Product and Service Solutions, to derive an equation of variables that can be used to predict strontium Kds of sediment-infill samples. The sediment-infill samples were from basalt vesicles and fractures from a selected core at the INEEL; strontium Kds ranged from ~201 to 356 ml g-1. The standard material samples consisted of clay minerals and calcite. The statistical analyses of the batch-experiment results showed that the amount of strontium in the initial solution, the amount of manganese oxide in the sample material, and the amount of potassium in the initial solution are the most important variables in predicting strontium Kds of sediment-infill samples.
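The SPSS procedure has a close analogue in base R; the sketch below runs a stepwise multiple linear regression on simulated stand-in values, with predictor names invented for illustration.

```r
# Stepwise multiple linear regression for predicting strontium Kd.
set.seed(4)
infill <- data.frame(
  kd         = runif(11, 200, 360),    # ml/g, simulated
  sr_initial = runif(11, 0.1, 1.0),
  mn_oxide   = runif(11, 0.01, 0.5),
  k_initial  = runif(11, 1, 10),
  calcite    = runif(11, 0, 20)
)

full <- lm(kd ~ sr_initial + mn_oxide + k_initial + calcite, data = infill)
best <- step(full, direction = "both", trace = FALSE)
summary(best)
```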
Efforts to improve international migration statistics: a historical perspective.
Kraly, E P; Gnanasekaran, K S
1987-01-01
During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a 3-pronged effort from the international statistical community. 1st, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. 2nd, the countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. 3rd, the call for statistical research in this area requires more efforts by the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities.
75 FR 37245 - 2010 Standards for Delineating Metropolitan and Micropolitan Statistical Areas
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010. Notice from the Office of Information and Regulatory Affairs, Office of Management and Budget, announcing the 2010 Standards for Delineating Metropolitan and Micropolitan Statistical Areas. The 2010 standards replace and supersede the 2000 Standards for Defining Metropolitan and Micropolitan Statistical Areas.
Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge
2017-08-01
Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.
Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun J.; Fischer, David G.
2012-01-01
We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has a promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise, single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.
Empirical evidence about inconsistency among studies in a pair-wise meta-analysis.
Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T
2016-12-01
This paper investigates how inconsistency (as measured by the I2 statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
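For reference, the Q and I2 quantities underlying these inconsistency estimates reduce to a few lines of arithmetic, sketched here on a toy meta-analysis.

```r
# Q and I2 for effect estimates 'yi' with within-study variances 'vi'.
yi <- c(0.12, 0.35, -0.05, 0.48, 0.22)   # e.g., log odds ratios (made up)
vi <- c(0.04, 0.09, 0.05, 0.12, 0.06)

w    <- 1 / vi
ybar <- sum(w * yi) / sum(w)             # fixed-effect pooled estimate
Q    <- sum(w * (yi - ybar)^2)
df   <- length(yi) - 1
I2   <- max(0, (Q - df) / Q) * 100       # percent inconsistency
c(Q = Q, I2 = I2)
```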
Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan
2006-07-15
ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
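The KS comparison itself is a one-liner in standard software; the sketch below applies the two-sample test to simulated fluorescence intensities standing in for gated T-cell and B-cell events.

```r
# Two-sample Kolmogorov-Smirnov test of B-cell ZAP-70 fluorescence
# against the internal T-cell reference; data are simulated stand-ins.
set.seed(3)
t_cells <- rlnorm(2000, meanlog = 4.0, sdlog = 0.4)  # ZAP-70-positive reference
b_cells <- rlnorm(2000, meanlog = 3.3, sdlog = 0.5)  # CLL B cells

ks <- ks.test(b_cells, t_cells)
ks$statistic   # D: maximal distance between the empirical CDFs
ks$p.value
```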
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye tracker data and then compared our results to common results of comparable analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance were performed. A comparison to human pilots' visual performance revealed important improvement potentials.
Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling
Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.
2010-01-01
Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses are provided of data from partnered parents of a child with a chronic condition, addressing the child's adaptation to the condition and the family's general functioning and management of the condition. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice, reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
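A hedged sketch of the dyadic model on simulated data follows; the random intercept shared by the two dyad members is what induces, and thereby estimates, the IFC.

```r
# Random-intercept model for dyads; the intraclass correlation is the
# between-dyad share of total variance. Data are simulated placeholders.
library(lme4)
set.seed(6)
n_dyads <- 150
fam_eff <- rnorm(n_dyads, 0, 1)                   # shared dyad effect
dyads <- data.frame(
  dyad    = factor(rep(1:n_dyads, each = 2)),
  outcome = rep(fam_eff, each = 2) + rnorm(2 * n_dyads, 0, 1.5)
)

fit <- lmer(outcome ~ 1 + (1 | dyad), data = dyads)
vc  <- as.data.frame(VarCorr(fit))
vc$vcov[1] / sum(vc$vcov)                         # estimated IFC
```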
Stress and depression in mothers of failure-to-thrive children.
Singer, L T; Song, L Y; Hill, B P; Jaffe, A C
1990-12-01
Compared 30 mothers whose children were hospitalized for failure-to-thrive (FTT) to a normative group on standardized measures of perceived stress and depression. Child and maternal medical and demographic data were also taken. Standardized development and feeding assessments were done. Descriptive statistics, correlational analyses, and t tests were used to describe and examine group differences. FTT children were perceived overall as more stressful, less adaptable, more inconsolable, and more unhappy than were healthy children. Child characteristics associated with higher maternal stress levels were higher birth weight, absence of organic disease or behavioral feeding problems, and higher IQ. Maternal self-report of depression, attachment to her child, sense of competence in parenting, social isolation, and relationship to spouse were not different from the normative sample.
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
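Regional equations of this kind are conventionally fitted in log-log form; the sketch below shows the shape of such a fit with ordinary least squares on invented numbers, whereas the report's equations use generalized least squares and additional basin characteristics.

```r
# Log-log regression of a flood quantile on drainage area (made-up data).
area <- c(12, 45, 88, 150, 320, 610, 990)           # mi^2
q100 <- c(410, 980, 1550, 2300, 3900, 6100, 8800)   # ft^3/s

fit <- lm(log10(q100) ~ log10(area))
coef(fit)                                           # intercept and exponent
10^predict(fit, newdata = data.frame(area = 200))   # estimate, 200 mi^2 basin
```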
Apparent cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.
2017-11-01
Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.
Taljaard, Monica; McKenzie, Joanne E; Ramsay, Craig R; Grimshaw, Jeremy M
2014-06-19
An interrupted time series design is a powerful quasi-experimental approach for evaluating effects of interventions introduced at a specific point in time. To utilize the strength of this design, a modification to standard regression analysis, such as segmented regression, is required. In segmented regression analysis, the change in intercept and/or slope from pre- to post-intervention is estimated and used to test causal hypotheses about the intervention. We illustrate segmented regression using data from a previously published study that evaluated the effectiveness of a collaborative intervention to improve quality in pre-hospital ambulance care for acute myocardial infarction (AMI) and stroke. In the original analysis, a standard regression model was used with time as a continuous variable. We contrast the results from this standard regression analysis with those from segmented regression analysis. We discuss the limitations of the former and advantages of the latter, as well as the challenges of using segmented regression in analysing complex quality improvement interventions. Based on the estimated change in intercept and slope from pre- to post-intervention using segmented regression, we found insufficient evidence of a statistically significant effect on quality of care for stroke, although potential clinically important effects for AMI cannot be ruled out. Segmented regression analysis is the recommended approach for analysing data from an interrupted time series study. Several modifications to the basic segmented regression analysis approach are available to deal with challenges arising in the evaluation of complex quality improvement interventions.
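The standard segmented parameterization is compact enough to show in full; the sketch below uses simulated monthly data with an intervention at month 24.

```r
# Segmented regression for an interrupted time series: 'post' codes the
# level change, 'time_after' the slope change after the intervention.
set.seed(11)
time       <- 1:48
post       <- as.numeric(time > 24)
time_after <- pmax(0, time - 24)
y <- 50 + 0.2 * time + 5 * post + 0.6 * time_after + rnorm(48, 0, 2)

fit <- lm(y ~ time + post + time_after)
summary(fit)$coefficients[c("post", "time_after"), ]  # level & slope change
```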
Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky
Martin, Gary R.; Arihood, Leslie D.
2010-01-01
This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.
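The two-stage structure described above maps onto two standard model calls; the sketch below uses simulated stations and placeholder predictor names.

```r
# Stage 1: probability that the 7Q10 statistic is zero (logistic model).
# Stage 2: magnitude of nonzero 7Q10 on the log scale (linear model).
set.seed(9)
n <- 121
stations <- data.frame(
  drainage_area  = runif(n, 5, 500),        # mi^2
  flow_var_index = runif(n, 0.2, 1.2)
)
stations$q7_10_is_zero <- rbinom(n, 1, plogis(-2 + 2 * stations$flow_var_index))
stations$q7_10 <- ifelse(stations$q7_10_is_zero, 0,
                         10^(0.8 * log10(stations$drainage_area) -
                               stations$flow_var_index))

zero_fit <- glm(q7_10_is_zero ~ log10(drainage_area) + flow_var_index,
                family = binomial, data = stations)
mag_fit  <- lm(log10(q7_10) ~ log10(drainage_area) + flow_var_index,
               data = subset(stations, q7_10 > 0))
```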
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.
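In current software the scaled statistic and robust standard errors studied here are available off the shelf; the sketch below shows one such request in the lavaan package on its bundled example data, as an illustration rather than a reproduction of the simulation.

```r
# Satorra-Bentler-type scaled test statistic and robust standard errors.
library(lavaan)

model <- ' f1 =~ x1 + x2 + x3
           f2 =~ x4 + x5 + x6 '
fit <- sem(model, data = HolzingerSwineford1939, estimator = "MLM")
summary(fit, fit.measures = TRUE)   # scaled chi-square, robust SEs
```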
The correlation between relatives on the supposition of genomic imprinting.
Spencer, Hamish G
2002-01-01
Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254
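A minimal numeric sketch of a one-locus decomposition in which reciprocal heterozygotes differ, as the imprinting model above allows. This is a generic weighted-least-squares decomposition with invented genotypic values, for illustration only; it is not Spencer's exact parameterization.

```python
# One-locus model with genomic imprinting: decompose genetic variance into
# a least-squares additive part (regression on maternal and paternal allele
# dosages) and a residual non-additive part. Values are hypothetical.
import numpy as np

p = 0.3                       # frequency of allele A1
q = 1 - p
# ordered genotypes (maternal, paternal): (A1,A1), (A1,A2), (A2,A1), (A2,A2)
freq = np.array([p * p, p * q, q * p, q * q])
value = np.array([1.0, 0.6, 0.1, -1.0])    # imprinting: 0.6 != 0.1

mu = freq @ value
X = np.array([[1, 1, 1],      # columns: intercept, maternal A1 dosage, paternal A1 dosage
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 0]], dtype=float)
W = np.diag(freq)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ value)   # frequency-weighted LS
additive = X @ beta - mu                                # breeding-value deviations
dominance = value - X @ beta                            # residual deviations

V_A = freq @ additive**2
V_D = freq @ dominance**2
print(f"mean={mu:.3f}  V_A={V_A:.3f}  V_D={V_D:.3f}")
print("maternal vs paternal average effect:", beta[1], beta[2])  # differ under imprinting
```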
NASA Astrophysics Data System (ADS)
Lufri, L.; Fitri, R.; Yogica, R.
2018-04-01
The purpose of this study is to produce a learning model based on problem solving and meaningful learning, standardized through expert assessment (validation), for the course of Animal Development. This is development research producing a learning model that consists of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of the sub-products, obtained using questionnaires filled in by validators from various fields of expertise (field of study, learning strategy, Bahasa). Data were analysed using descriptive statistics. The results show that the problem solving and meaningful learning model has been produced, and the sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheet.
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.
Study on the criteria for assessing skull-face correspondence in craniofacial superimposition.
Ibáñez, Oscar; Valsecchi, Andrea; Cavalli, Fabio; Huete, María Isabel; Campomanes-Alvarez, Blanca Rosario; Campomanes-Alvarez, Carmen; Vicente, Ricardo; Navega, David; Ross, Ann; Wilkinson, Caroline; Jankauskas, Rimantas; Imaizumi, Kazuhiko; Hardiman, Rita; Jayaprakash, Paul Thomas; Ruiz, Elena; Molinero, Francisco; Lestón, Patricio; Veselovskaya, Elizaveta; Abramov, Alexey; Steyn, Maryna; Cardoso, Joao; Humpire, Daniel; Lusnig, Luca; Gibelli, Daniele; Mazzarelli, Debora; Gaudio, Daniel; Collini, Federica; Damas, Sergio
2016-11-01
Craniofacial superimposition has the potential to be used as an identification method when other traditional biological techniques are not applicable due to insufficient quality or absence of ante-mortem and post-mortem data. Despite having been used in many countries as a method of inclusion and exclusion for over a century, it lacks standards. Thus, the purpose of this research is to provide forensic practitioners with standard criteria for analysing skull-face relationships. Thirty-seven experts from 16 different institutions participated in this study, which consisted of evaluating 65 criteria for assessing skull-face anatomical consistency on a sample of 24 different skull-face superimpositions. An unbiased statistical analysis established the most objective and discriminative criteria. Results did not show strong associations; however, important insights to address the lack of standards were provided. In addition, a novel methodology for understanding and standardizing identification methods based on the observation of morphological patterns has been proposed. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.
Wallach, Joshua D; Sullivan, Patrick G; Trepanowski, John F; Sainani, Kristin L; Steyerberg, Ewout W; Ioannidis, John P A
2017-04-01
Many published randomized clinical trials (RCTs) make claims for subgroup differences. To evaluate how often subgroup claims reported in the abstracts of RCTs are actually supported by statistical evidence (P < .05 from an interaction test) and corroborated by subsequent RCTs and meta-analyses. This meta-epidemiological survey examines data sets of trials with at least 1 subgroup claim, including Subgroup Analysis of Trials Is Rarely Easy (SATIRE) articles and Discontinuation of Randomized Trials (DISCO) articles. We used Scopus (updated July 2016) to search for English-language articles citing each of the eligible index articles with at least 1 subgroup finding in the abstract. Articles with a subgroup claim in the abstract with or without evidence of statistical heterogeneity (P < .05 from an interaction test) in the text and articles attempting to corroborate the subgroup findings. Study characteristics of trials with at least 1 subgroup claim in the abstract were recorded. Two reviewers extracted the data necessary to calculate subgroup-level effect sizes, standard errors, and the P values for interaction. For individual RCTs and meta-analyses that attempted to corroborate the subgroup findings from the index articles, trial characteristics were extracted. Cochran Q test was used to reevaluate heterogeneity with the data from all available trials. The number of subgroup claims in the abstracts of RCTs, the number of subgroup claims in the abstracts of RCTs with statistical support (subgroup findings), and the number of subgroup findings corroborated by subsequent RCTs and meta-analyses. Sixty-four eligible RCTs made a total of 117 subgroup claims in their abstracts. Of these 117 claims, only 46 (39.3%) in 33 articles had evidence of statistically significant heterogeneity from a test for interaction. In addition, out of these 46 subgroup findings, only 16 (34.8%) ensured balance between randomization groups within the subgroups (eg, through stratified randomization), 13 (28.3%) entailed a prespecified subgroup analysis, and 1 (2.2%) was adjusted for multiple testing. Only 5 (10.9%) of the 46 subgroup findings had at least 1 subsequent pure corroboration attempt by a meta-analysis or an RCT. In all 5 cases, the corroboration attempts found no evidence of a statistically significant subgroup effect. In addition, all effect sizes from meta-analyses were attenuated toward the null. A minority of subgroup claims made in the abstracts of RCTs are supported by their own data (ie, a significant interaction effect). For those that have statistical support (P < .05 from an interaction test), most fail to meet other best practices for subgroup tests, including prespecification, stratified randomization, and adjustment for multiple testing. Attempts to corroborate statistically significant subgroup differences are rare; when done, the initially observed subgroup differences are not reproduced.
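The two statistical checks at the core of this survey can be sketched briefly: a z-based test for interaction between two subgroup effects, and Cochran's Q to reevaluate heterogeneity once corroborating trials are added. Effect sizes and standard errors below are illustrative, not data from the study.

```python
# Interaction test between two subgroup estimates, and Cochran's Q across
# pooled estimates; inputs are invented log odds ratios and standard errors.
import numpy as np
from scipy import stats

def interaction_p(b1, se1, b2, se2):
    """Two-sided p-value for the difference between two subgroup effects."""
    z = (b1 - b2) / np.sqrt(se1**2 + se2**2)
    return 2 * stats.norm.sf(abs(z))

def cochran_q(effects, ses):
    """Cochran's Q test of heterogeneity across k independent estimates."""
    effects = np.asarray(effects, float)
    w = 1 / np.asarray(ses, float)**2
    theta = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - theta)**2)
    return Q, stats.chi2.sf(Q, df=len(effects) - 1)

print(interaction_p(-0.45, 0.15, -0.05, 0.18))            # subgroup claim in one trial
print(cochran_q([-0.45, -0.10, -0.02], [0.15, 0.12, 0.20]))  # adding later trials
```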
Friedman, David B
2012-01-01
All quantitative proteomics experiments measure variation between samples. When performing large-scale experiments that involve multiple conditions or treatments, the experimental design should include the appropriate number of individual biological replicates from each condition to enable the distinction between a relevant biological signal from technical noise. Multivariate statistical analyses, such as principal component analysis (PCA), provide a global perspective on experimental variation, thereby enabling the assessment of whether the variation describes the expected biological signal or the unanticipated technical/biological noise inherent in the system. Examples will be shown from high-resolution multivariable DIGE experiments where PCA was instrumental in demonstrating biologically significant variation as well as sample outliers, fouled samples, and overriding technical variation that would not be readily observed using standard univariate tests.
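A minimal sketch of the screening role PCA plays in such designs, assuming a simulated samples-by-features abundance matrix with one deliberately "fouled" sample; real DIGE workflows differ in preprocessing.

```python
# PCA as a global check on experimental variation: the outlier sample should
# separate on a leading component, flagging it before univariate testing.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=(6, 500))   # 6 biological replicates, condition A
group_b = rng.normal(0.4, 1.0, size=(6, 500))   # condition B, shifted signal
group_b[5] += rng.normal(3.0, 1.0, 500)         # simulate a fouled/outlier sample

X = StandardScaler().fit_transform(np.vstack([group_a, group_b]))
scores = PCA(n_components=2).fit_transform(X)
for i, (pc1, pc2) in enumerate(scores):
    label = "A" if i < 6 else "B"
    print(f"sample {i:2d} ({label}): PC1={pc1:7.2f}  PC2={pc2:7.2f}")
```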
Samsson, Karin S; Larsson, Maria E H
2015-02-01
The literature indicates that physiotherapy triage assessment can be efficient for patients referred for orthopaedic consultation; however, long-term follow-up of patient-reported outcome measures has not been available. To report a long-term evaluation of patient-reported health-related quality of life, pain-related disability, and sick leave after a physiotherapy triage assessment of patients referred for orthopaedic consultation compared with standard practice. Patients referred for orthopaedic consultation (n = 208) were randomised to physiotherapy triage assessment or standard practice. The randomised cohort was analysed on an intention-to-treat (ITT) basis. The patient-reported outcome measures EuroQol VAS (self-reported health-state), EuroQol 5D-3L (EQ-5D) and Pain Disability Index (PDI) were assessed at baseline and after 3, 6 and 12 months. EQ VAS was analysed using a repeated measures ANOVA. PDI and EQ-5D were analysed using a marginal logistic regression model. Sick leave was analysed for the 12 months following consultation using a Mann-Whitney U-test. The patients rated a significantly better health-state at 3 months after physiotherapy triage assessment [mean difference -5.7 (95% CI -11.1; -0.2); p = 0.04]. There were no other statistically significant differences in perceived health-related quality of life or pain-related disability between the groups at any of the follow-ups, or in sick leave. This study reports that the long-term follow-up of the patient-reported outcome measures health-related quality of life, pain-related disability and sick leave after physiotherapy triage assessment did not differ from standard practice, indicating the possible benefits of implementation of this model of care. Copyright © 2014 Elsevier Ltd. All rights reserved.
Analysis of repeated measurement data in the clinical trials
Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa
2013-01-01
Statistics is an integral part of Clinical Trials. Elements of statistics span Clinical Trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of Clinical Trials. In biomedical research it has been seen that researchers frequently use the t-test and ANOVA to compare means between the groups of interest, irrespective of the nature of the data. In Clinical Trials we record data on the patients more than twice. In such a situation, using the standard ANOVA procedures is not appropriate, as they do not consider dependencies between observations within subjects in the analysis. To deal with such study data, Repeated Measures ANOVA should be used. In this article the application of One-way Repeated Measures ANOVA has been demonstrated by using the software SPSS (Statistical Package for Social Sciences) Version 15.0 on data collected at four time points (0 day, 15th day, 30th day, and 45th day) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
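The article demonstrates the analysis in SPSS; a Python analogue using statsmodels is sketched below on simulated data with four within-subject time points.

```python
# One-way repeated-measures ANOVA: hemoglobin-like values measured on the
# same subjects at four time points. Data are simulated for illustration.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
subjects = np.repeat(np.arange(30), 4)
day = np.tile([0, 15, 30, 45], 30)
baseline = rng.normal(9.5, 1.0, 30)                  # subject-specific levels
hb = baseline[subjects] + 0.02 * day + rng.normal(0, 0.3, 120)

df = pd.DataFrame({"subject": subjects, "day": day, "hb": hb})
res = AnovaRM(df, depvar="hb", subject="subject", within=["day"]).fit()
print(res)   # F test for the within-subject effect of time
```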
Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science
Veldkamp, Coosje L. S.; Nuijten, Michèle B.; Dominguez-Alvarez, Linda; van Assen, Marcel A. L. M.; Wicherts, Jelte M.
2014-01-01
Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors. PMID:25493918
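A sketch in the spirit of the automated consistency check described above: recompute the p-value implied by a reported test statistic and degrees of freedom, and flag inconsistencies, including those that would flip a significance decision. Tolerances and examples are illustrative; the procedure actually used in the study differs in detail.

```python
# Recompute a two-sided p-value from a reported t statistic and df,
# then compare it with the p-value stated in the article.
from scipy import stats

def check_t_report(t, df, reported_p, alpha=0.05, tol=0.0005):
    """Return (recomputed p, inconsistent?, flips significance decision?)."""
    recomputed = 2 * stats.t.sf(abs(t), df)
    inconsistent = abs(recomputed - reported_p) > tol
    decision_error = inconsistent and ((recomputed < alpha) != (reported_p < alpha))
    return recomputed, inconsistent, decision_error

# e.g. an article reporting t(28) = 2.20, p = .04 (small rounding mismatch)
print(check_t_report(t=2.20, df=28, reported_p=0.04))
# e.g. a gross inconsistency that flips the significance decision
print(check_t_report(t=1.10, df=28, reported_p=0.03))
```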
Sokka, Tuulikki; Kautiainen, Hannu; Toloza, Sergio; Mäkinen, Heidi; Verstappen, Suzan M M; Hetland, Merete Lund; Naranjo, Antonio; Baecklund, Eva; Herborn, Gertraud; Rau, Rolf; Cazzato, Massimiliano; Gossec, Laure; Skakic, Vlado; Gogus, Feride; Sierakowski, Stanislaw; Bresnihan, Barry; Taylor, Peter; McClinton, Catherine; Pincus, Theodore
2007-01-01
Objective To conduct a cross‐sectional review of non‐selected consecutive outpatients with rheumatoid arthritis (RA) as part of standard clinical care in 15 countries for an overview of the characteristics of patients with RA. Methods The review included current disease activity using data from clinical assessment and a patient self‐report questionnaire, which was translated into each language. Data on demographic, disease and treatment‐related variables were collected and analysed using descriptive statistics. Variation in disease activity on DAS28 (disease activity score on 28‐joint count) within and between countries was graphically analysed. A median regression model was applied to analyse differences in disease activity between countries. Results Between January 2005 and October 2006, the QUEST‐RA (Quantitative Patient Questionnaires in Standard Monitoring of Patients with Rheumatoid Arthritis) project included 4363 patients from 48 sites in 15 countries; 78% were female, >90% Caucasian, mean age was 57 years and mean disease duration was 11.5 years. More than 80% of patients had been treated with methotrexate in all but three countries. Overall, patients had an active disease with a median DAS28 of 4.0, with a significant variation between countries (p<0.001). Among 42 sites with >50 patients included, low disease activity of DAS28 ⩽3.2 was found in the majority of patients in seven sites in five countries; in eight sites in five other countries, >50% of patients had high disease activity of DAS28 >5.1. Conclusions This international multicentre cross‐sectional database provides an overview of clinical status and treatments of patients with RA in standard clinical care in 2005–6 including countries that are infrequently involved in clinical research projects. PMID:17412740
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with chest X-ray equipment with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different image post-processings (standard DTS processing and more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data, and the area under the VGC curve (AUC_VGC) was used as figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31). The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields a modest advantage in image quality over the gradation/multiobjective frequency/flexible noise control processing for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
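A hedged sketch of the figure-of-merit used above: for ordinal image-quality ratings of two conditions, the area under the VGC curve can be obtained from the Mann-Whitney U statistic as AUC_VGC = U/(n1 n2), with 0.5 indicating no difference. The ratings below are invented, and the study's analysis additionally accounts for the paired design.

```python
# AUC_VGC from ordinal quality ratings of two conditions (unpaired sketch).
import numpy as np
from scipy.stats import mannwhitneyu

ratings_6s = np.array([3, 4, 4, 5, 3, 4, 5, 4, 3, 4])    # protocol under test
ratings_12s = np.array([3, 3, 4, 4, 3, 3, 4, 4, 3, 3])   # reference protocol

u, p = mannwhitneyu(ratings_6s, ratings_12s, alternative="two-sided")
auc_vgc = u / (len(ratings_6s) * len(ratings_12s))        # 0.5 = no difference
print(f"AUC_VGC = {auc_vgc:.2f}, p = {p:.3f}")
```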
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
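A minimal sketch of the two complementary comparisons discussed above for paired quantitative outputs (for example, variant allele fractions from a new assay versus a reference): Bland-Altman bias and limits of agreement, plus Deming regression (error-variance ratio assumed equal to 1) to separate constant from proportional error. Data are simulated.

```python
# Bland-Altman and Deming regression on simulated paired assay outputs.
import numpy as np

rng = np.random.default_rng(7)
truth = rng.uniform(5, 50, 40)
ref = truth + rng.normal(0, 1.0, 40)
new = 1.05 * truth + 0.8 + rng.normal(0, 1.0, 40)   # proportional + constant error

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = new - ref
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.2f}, limits of agreement: {bias - loa:.2f} to {bias + loa:.2f}")

# Deming regression, error-variance ratio lambda = 1
sxx = np.var(ref, ddof=1)
syy = np.var(new, ddof=1)
sxy = np.cov(ref, new, ddof=1)[0, 1]
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
intercept = new.mean() - slope * ref.mean()
print(f"Deming slope={slope:.3f} (proportional error), intercept={intercept:.3f} (constant error)")
```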
Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min
2017-01-01
Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
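Two of the reliability statistics whose reporting the review scrutinizes can be sketched briefly: quadratic-weighted kappa for two readers' ordinal ratings, and ICC(3,1) (two-way mixed effects, consistency, single rater, per Shrout and Fleiss) computed from ANOVA mean squares. Ratings and measurements below are illustrative.

```python
# Weighted kappa for ordinal ratings, and ICC(3,1) from ANOVA mean squares.
import numpy as np
from sklearn.metrics import cohen_kappa_score

reader1 = [1, 2, 2, 3, 4, 4, 2, 3, 1, 4]
reader2 = [1, 2, 3, 3, 4, 3, 2, 3, 2, 4]
print("quadratic-weighted kappa:",
      round(cohen_kappa_score(reader1, reader2, weights="quadratic"), 3))

def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single rater.
    data: subjects x raters matrix of continuous measurements."""
    n, k = data.shape
    grand = data.mean()
    ms_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_error = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

measurements = np.array([[10.1, 10.4], [12.0, 11.7], [9.5, 9.9],
                         [14.2, 14.0], [11.1, 11.6], [13.3, 13.1]])
print("ICC(3,1):", round(icc_3_1(measurements), 3))
```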
Severgnini, Marco; Bicciato, Silvio; Mangano, Eleonora; Scarlatti, Francesca; Mezzelani, Alessandra; Mattioli, Michela; Ghidoni, Riccardo; Peano, Clelia; Bonnal, Raoul; Viti, Federica; Milanesi, Luciano; De Bellis, Gianluca; Battaglia, Cristina
2006-06-01
Meta-analysis of microarray data is increasingly important, considering both the availability of multiple platforms using disparate technologies and the accumulation in public repositories of data sets from different laboratories. We addressed the issue of comparing gene expression profiles from two microarray platforms by devising a standardized investigative strategy. We tested this procedure by studying MDA-MB-231 cells, which undergo apoptosis on treatment with resveratrol. Gene expression profiles were obtained using high-density, short-oligonucleotide, single-color microarray platforms: GeneChip (Affymetrix) and CodeLink (Amersham). Interplatform analyses were carried out on 8414 common transcripts represented on both platforms, as identified by LocusLink ID, representing 70.8% and 88.6% of annotated GeneChip and CodeLink features, respectively. We identified 105 differentially expressed genes (DEGs) on CodeLink and 42 DEGs on GeneChip. Among them, only 9 DEGs were commonly identified by both platforms. Multiple analyses (BLAST alignment of probes with target sequences, gene ontology, literature mining, and quantitative real-time PCR) permitted us to investigate the factors contributing to the generation of platform-dependent results in single-color microarray experiments. An effective approach to cross-platform comparison involves microarrays of similar technologies, samples prepared by identical methods, and a standardized battery of bioinformatic and statistical analyses.
Cumulative sum control charts for assessing performance in arterial surgery.
Beiles, C Barry; Morton, Anthony P
2004-03-01
The Melbourne Vascular Surgical Association (Melbourne, Australia) undertakes surveillance of mortality following aortic aneurysm surgery, patency at discharge following infrainguinal bypass, and stroke and death following carotid endarterectomy. A quality improvement protocol employing the Deming cycle requires that the system for performing surgery first be analysed and optimized. Then process and outcome data are collected, and these data require careful analysis. There must be a mechanism so that the causes of unsatisfactory outcomes can be determined, and a good feedback mechanism must exist so that good performance is acknowledged and unsatisfactory performance corrected. A simple method for analysing these data that detects changes in average outcome rates is available using cumulative sum statistical control charts. Data have been analysed both retrospectively from 1999 to 2001 and prospectively during 2002 using cumulative sum control methods. A pathway to deal with control chart signals has been developed. The standard of arterial surgery in Victoria, Australia, is high. In one case a safe and satisfactory outcome was achieved by following the pathway developed by the audit committee. Cumulative sum control charts are a simple and effective tool for the identification of variations in performance standards in arterial surgery. The establishment of a pathway to manage problem performance is a vital part of audit activity.
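One common construction of such a chart, sketched under assumed parameters: a Bernoulli log-likelihood CUSUM that accumulates evidence of a shift from an acceptable adverse-event rate p0 to an unacceptable rate p1 and signals at a control limit h. The audit's exact chart may differ in design and parameter choices.

```python
# Bernoulli log-likelihood CUSUM for monitoring an adverse-outcome rate.
import numpy as np

def cusum_signals(outcomes, p0=0.05, p1=0.10, h=3.5):
    """outcomes: 1 = adverse event, 0 = success, in chronological order."""
    w_fail = np.log(p1 / p0)               # weight added after an event
    w_ok = np.log((1 - p1) / (1 - p0))     # (negative) weight after a success
    s, path, signals = 0.0, [], []
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (w_fail if y else w_ok))
        path.append(s)
        if s >= h:
            signals.append(i)              # flag for case review, then reset
            s = 0.0
    return path, signals

rng = np.random.default_rng(3)
outcomes = (rng.uniform(size=200) < 0.05).astype(int)
outcomes[120:150] = (rng.uniform(size=30) < 0.15).astype(int)  # simulated deterioration
_, sig = cusum_signals(outcomes)
print("signals at cases:", sig)
```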
Does Educational Status Impact Adult Mortality in Denmark? A Twin Approach
Madsen, Mia; Andersen, Anne-Marie Nybo; Christensen, Kaare; Andersen, Per Kragh; Osler, Merete
2010-01-01
To disentangle an independent effect of educational status on mortality risk from direct and indirect selection mechanisms, the authors used a discordant twin pair design, which allowed them to isolate the effect of education by adjusting for genetic and environmental confounding by design. The study is based on data from the Danish Twin Registry and Statistics Denmark. Using Cox regression, they estimated hazard ratios for mortality according to the highest attained education among 5,260 monozygotic and 11,088 dizygotic same-sex twin pairs born during 1921–1950 and followed during 1980–2008. Both standard cohort and intrapair analyses were conducted separately by zygosity, gender, and birth cohort. Educational differences in mortality were demonstrated in the standard cohort analyses but attenuated in the intrapair analyses in all subgroups but men born during 1921–1935, and no effect modification by zygosity was observed. Hence, the results are most compatible with an effect of early family environment in explaining the educational inequality in mortality. However, large educational differences were still reflected in mortality risk differences within twin pairs, thus supporting some degree of independent effect of education. In addition, the effect of education may be more pronounced in older cohorts of Danish men. PMID:20530466
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric temperature GISS and HadCRUT3 time series were analysed for structural changes. We postulate continuity of the piecewise temperature function over time. The slopes are calculated for a sequence of segments delimited by time thresholds. We used a standard method, restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds. The thresholds are searched continuously within specified time intervals. The F-statistic is used to obtain the time points of the structural changes.
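The dummy-variable device can be sketched as follows: a continuous piecewise-linear fit with a candidate breakpoint tau is compared against a single straight line by an F test, and tau is scanned over a window. The series below is synthetic, not the GISS or HadCRUT3 data, and the closing comment notes the multiple-testing caveat of a searched threshold.

```python
# Scan for a structural change in trend with a continuity constraint at tau.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(1970, 2012, dtype=float)
y = 0.01 * (t - 1970) + 0.02 * np.clip(t - 1998, 0, None) + rng.normal(0, 0.05, t.size)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss0 = rss(np.column_stack([np.ones_like(t), t]), y)       # restricted: one slope
best = None
for tau in np.arange(1985.0, 2005.0, 0.25):                # scan thresholds
    hinge = np.clip(t - tau, 0, None)                      # slope change, continuous at tau
    rss1 = rss(np.column_stack([np.ones_like(t), t, hinge]), y)
    F = (rss0 - rss1) / (rss1 / (t.size - 3))              # 1 extra parameter
    if best is None or F > best[1]:
        best = (tau, F)

tau, F = best
print(f"best threshold ~{tau:.2f}, F = {F:.1f}, naive p = {stats.f.sf(F, 1, t.size - 3):.2g}")
# Because tau is searched, the naive F distribution is optimistic; sup-F
# critical values should be used in practice.
```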
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nour-Eldein, Hebatallah
2016-01-01
Background: With limited statistical knowledge of most physicians it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within 60 FM articles with identified inferential statistics, no prior sample size 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting confidence interval with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation were mainly for conclusions without support by the study data 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles. PMID:27453839
Fast and accurate imputation of summary statistics enhances evidence of functional enrichment
Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.
2014-01-01
Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability and implementation: Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:24990607
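A compact sketch of the conditional-Gaussian idea behind this class of methods: impute z-scores at untyped SNPs from typed-SNP z-scores and a reference LD (correlation) matrix, with a ridge term standing in for the finite-reference-panel adjustment. Matrices, the ridge value, and function names are toy assumptions, not the published implementation.

```python
# Gaussian imputation of association z-scores from summary statistics.
import numpy as np

def impute_z(z_typed, R_tt, R_ut, lam=0.1):
    """z_typed: z-scores at typed SNPs; R_tt: LD among typed SNPs;
    R_ut: LD between untyped and typed SNPs; lam: ridge regularization."""
    R_reg = R_tt + lam * np.eye(R_tt.shape[0])
    weights = np.linalg.solve(R_reg, z_typed)
    z_imp = R_ut @ weights
    # per-SNP imputation quality (analogous to an r2 metric)
    info = np.einsum('ij,jk,ik->i', R_ut, np.linalg.inv(R_reg), R_ut)
    return z_imp, info

R_tt = np.array([[1.0, 0.6], [0.6, 1.0]])
R_ut = np.array([[0.8, 0.5]])          # one untyped SNP vs two typed SNPs
z_typed = np.array([4.2, 3.1])
print(impute_z(z_typed, R_tt, R_ut))
```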
Statistical Association Criteria in Forensic Psychiatry–A criminological evaluation of casuistry
Gheorghiu, V; Buda, O; Popescu, I; Trandafir, MS
2011-01-01
Purpose. To identify potential common ground between primary psychoprophylaxis and crime prevention by analyzing the rate of commitments among patients subject to forensic examination. Material and method. This is a retrospective, document-based statistical study. The statistical lot consists of 770 initial examination reports performed and completed during the whole year 2007, primarily analyzed in order to summarize the data within the National Institute of Forensic Medicine, Bucharest, Romania (INML), with one of the group variables being 'particularities of the psychiatric patient history', containing the items 'forensic onset', 'commitments within the last year prior to the examination' and 'absence of commitments within the last year prior to the examination'. The method used was the Kendall bivariate correlation. For this study, the authors separately analyze only the two items regarding commitments, using other correlation alternatives and more elaborate statistical analyses, i.e. recording of the standard case study variables, Kendall bivariate correlation, cross tabulation, factor analysis and hierarchical cluster analysis. Results. The results are varied, from theoretically presumed clinical nosography (such as schizophrenia or manic depression) to non-presumed (conduct disorders) or unexpected behavioral acts, and are therefore difficult to interpret. Conclusions. The features of the sample were taken into consideration, as well as the results of the previous standard correlation of the whole statistical lot. The authors emphasize the role of medical security measures that are actually applied in therapeutic management in general and in risk and second-offence management in particular, as well as the role of forensic psychiatric examinations in the detection of certain aspects related to the monitoring of mental patients. PMID:21505571
A phylogenetic transform enhances analysis of compositional microbiota data
Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A
2017-01-01
Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
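A minimal sketch of the isometric log-ratio machinery underlying the transform: close counts to proportions, apply the centered log-ratio, and project onto an orthonormal balance basis. A full PhILR analysis builds that basis from a phylogenetic tree; here a generic Helmert-type basis is used purely for illustration.

```python
# Compositional pipeline: closure -> clr -> ilr coordinates.
import numpy as np

def closure(x):
    return x / x.sum(axis=-1, keepdims=True)

def clr(x):
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def ilr_basis(d):
    """Orthonormal basis of the (d-1)-dim clr subspace (rows sum to zero)."""
    basis = np.zeros((d - 1, d))
    for i in range(d - 1):
        basis[i, :i + 1] = 1.0 / (i + 1)
        basis[i, i + 1] = -1.0
        basis[i] *= np.sqrt((i + 1) / (i + 2))
    return basis

counts = np.array([[120., 30., 45., 5.],
                   [80., 60., 20., 40.]]) + 0.5   # pseudocount for zeros
V = ilr_basis(counts.shape[1])
ilr_coords = clr(closure(counts)) @ V.T
print(ilr_coords)   # Euclidean geometry now applies; standard tools are safe
```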
A statistical investigation into the relationship between meteorological parameters and suicide
NASA Astrophysics Data System (ADS)
Dixon, Keith W.; Shulman, Mark D.
1983-06-01
Many previous studies of relationships between weather and suicides have been inconclusive and contradictory. This study investigated the relationship between suicide frequency and meteorological conditions in people who are psychologically predisposed to commit suicide. Linear regressions of diurnal temperature change, departure of temperature from the climatic norm, mean daytime sky cover, and the number of hours of precipitation for each day were performed on daily suicide totals using standard computer methods. Statistical analyses of suicide data for days with and without frontal passages were also performed. Days with five or more suicides (clusterdays) were isolated, and their weather parameters compared with those of nonclusterdays. Results show that neither suicide totals nor clusterday occurrence can be predicted using these meteorological parameters, since statistically significant relationships were not found. Although the data hinted that frontal passages and large daily temperature changes may occur on days with above average suicide totals, it was concluded that the influence of the weather parameters used, on the suicide rate, is a minor one, if indeed one exists.
Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results
NASA Technical Reports Server (NTRS)
Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)
1994-01-01
In the last three years extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS Parallel Benchmarks, we have also included the peak performance of each machine and the LINPACK n and n_1/2 values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups as follows: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four with EP) benchmarks are sufficient to characterize the overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
Assessing cultural validity in standardized tests in STEM education
NASA Astrophysics Data System (ADS)
Gassant, Lunes
This quantitative ex post facto study examined how race and gender, as elements of culture, influence the development of common misconceptions among STEM students. Primary data came from a standardized test: the Digital Logic Concept Inventory (DLCI) developed by Drs. Geoffrey L. Herman, Michael C. Louis, and Craig Zilles from the University of Illinois at Urbana-Champaign. The sample consisted of a cohort of 82 STEM students recruited from three universities in Northern Louisiana. Microsoft Excel and the Statistical Package for the Social Sciences (SPSS) were used for data computation. Two key concepts, several sub-concepts, and 19 misconceptions were tested through 11 items in the DLCI. Statistical analyses based on both Classical Test Theory (Spearman, 1904) and Item Response Theory (Lord, 1952) yielded similar results: some misconceptions in the DLCI can reliably be predicted by the race or the gender of the test taker. The research is significant because it has shown that some misconceptions in a STEM discipline were held at different rates by students of different ethnic backgrounds, indicating the existence of some cultural bias in the standardized test. The study therefore encourages further research into cultural validity in standardized tests. With culturally valid tests, it will be possible to increase the effectiveness of targeted teaching and learning strategies for STEM students from diverse ethnic backgrounds. To some extent, this dissertation has contributed to a better understanding of the gap between high enrollment rates and low graduation rates among African American students and also among other minority students in STEM disciplines.
Schukken, Y H; Rauch, B J; Morelli, J
2013-04-01
The objective of this paper was to define standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to both Staphylococcus aureus and Streptococcus agalactiae. The standardized protocols describe the selection of cows and herds and define the critical points in performing experimental exposure, performing bacterial culture, evaluating the culture results, and finally performing statistical analyses and reporting of the results. The protocols define both negative control and positive control trials. For negative control trials, the protocol states that an efficacy of reducing new intramammary infections (IMI) of at least 40% is required for a teat disinfectant to be considered effective. For positive control trials, noninferiority to a control disinfectant with a published efficacy of reducing new IMI of at least 70% is required. Sample sizes for both negative and positive control trials are calculated. Positive control trials are expected to require a large trial size. Statistical analysis methods are defined and, in the proposed methods, the rate of IMI may be analyzed using generalized linear mixed models. The efficacy of the test product can be evaluated while controlling for important covariates and confounders in the trial. Finally, standards for reporting are defined and reporting considerations are discussed. The use of the defined protocol is shown through presentation of the results of a recent trial of a test product against a negative control. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
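A hedged sketch of the sample-size arithmetic such a protocol implies for a negative control trial: detecting a 40% reduction in the rate of new intramammary infections (IMI) relative to untreated controls. The control-group risk, alpha, and power below are assumptions for illustration, not the protocol's prescribed values.

```python
# Per-group sample size for comparing two proportions (two-sided test).
import math
from scipy.stats import norm

def n_per_group(p_control, efficacy, alpha=0.05, power=0.80):
    p1, p2 = p_control, p_control * (1 - efficacy)
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# assumed 10% control-quarter IMI risk, 40% efficacy threshold
print(n_per_group(p_control=0.10, efficacy=0.40), "quarters per group")
```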
Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R
2017-10-23
The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained and the quantification of its statistical reliability are lost. Despite its relevance, there is no standard form for the in vitro endpoint tumor cell line assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. Analysing all the specific problems associated with the diverse nature of the available TCLA is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as the convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug effects, are extensively discussed. A model and standard criteria for easily performing comparisons between different compounds are thereby established. The advantages include simple application, provision of parametric estimates that characterize the response as standard criteria, economization of experimental effort, and the enabling of rigorous comparisons among the effects of different compounds and experimental approaches. In all experimental data fitted, the calculated parameters were always statistically significant, the equations proved consistent, and the coefficient of determination was, in most cases, higher than 0.98.
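A brief sketch of fitting a cumulative-Weibull dose-response curve to SRB-type data by nonlinear least squares. The parameterization (K as maximal response, m as the dose giving half of K, a as shape) and the data are illustrative assumptions, not the paper's exact model or results.

```python
# Cumulative-Weibull dose-response fit with approximate standard errors.
import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, K, m, a):
    # K * (1 - exp(-ln2 * (dose/m)^a)); response equals K/2 at dose = m
    return K * (1 - np.exp(-np.log(2) * (dose / m) ** a))

dose = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])       # e.g. micromolar
resp = np.array([0.04, 0.09, 0.17, 0.32, 0.52, 0.71, 0.84, 0.90])

popt, pcov = curve_fit(weibull_response, dose, resp, p0=[1.0, 8.0, 1.0])
perr = np.sqrt(np.diag(pcov))                        # approximate standard errors
for name, est, se in zip(["K", "m", "a"], popt, perr):
    print(f"{name} = {est:.3f} +/- {se:.3f}")
```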
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations.
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.
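The left-censored fitting described above can be sketched with a Tobit-style likelihood: uncensored observations contribute a normal density and censored ones a normal CDF. Data, threshold, and covariates below are synthetic; the study's implementation differs in detail.

```python
# Left-censored (Tobit-style) regression by maximum likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
n = 150
x = rng.normal(0, 1, (n, 2))
y_latent = 1.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1] + rng.normal(0, 0.7, n)
c = 0.0                                   # censoring threshold (log-flow units)
y = np.maximum(y_latent, c)
censored = y_latent <= c

def negloglik(theta):
    b0, b1, b2, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x[:, 0] + b2 * x[:, 1]
    ll_unc = stats.norm.logpdf(y[~censored], mu[~censored], s)   # observed values
    ll_cen = stats.norm.logcdf((c - mu[censored]) / s)           # at-or-below threshold
    return -(ll_unc.sum() + ll_cen.sum())

fit = optimize.minimize(negloglik, x0=[0, 0, 0, 0], method="BFGS")
print("estimates (b0, b1, b2, sigma):", *fit.x[:3], np.exp(fit.x[3]))
```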
Nitrogen Dioxide Exposure and Airway Responsiveness in ...
Controlled human exposure studies evaluating the effect of inhaled NO2 on the inherent responsiveness of the airways to challenge by bronchoconstricting agents have had mixed results. In general, existing meta-analyses show statistically significant effects of NO2 on the airway responsiveness of individuals with asthma. However, no meta-analysis has provided a comprehensive assessment of clinical relevance of changes in airway responsiveness, the potential for methodological biases in the original papers, and the distribution of responses. This paper provides analyses showing that a statistically significant fraction, 70% of individuals with asthma exposed to NO2 at rest, experience increases in airway responsiveness following 30-minute exposures to NO2 in the range of 200 to 300 ppb and following 60-minute exposures to 100 ppb. The distribution of changes in airway responsiveness is log-normally distributed with a median change of 0.75 (provocative dose following NO2 divided by provocative dose following filtered air exposure) and geometric standard deviation of 1.88. About a quarter of the exposed individuals experience a clinically relevant reduction in their provocative dose due to NO2 relative to air exposure. The fraction experiencing an increase in responsiveness was statistically significant and robust to exclusion of individual studies. Results showed minimal change in airway responsiveness for individuals exposed to NO2 during exercise. A variety of fa
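The distributional claims above can be checked directly: with a log-normal ratio of provocative doses having median 0.75 and geometric standard deviation 1.88, the implied fractions follow from the normal CDF. A halving of the provocative dose is taken here as one plausible reading of "clinically relevant".

```python
# Fractions implied by a log-normal ratio of provocative doses (NO2 / air).
import numpy as np
from scipy.stats import norm

median, gsd = 0.75, 1.88
# fraction with increased responsiveness (ratio < 1); reproduces ~70%
print(f"ratio < 1:   {norm.cdf((np.log(1.0) - np.log(median)) / np.log(gsd)):.2f}")
# fraction with a halved provocative dose (ratio < 0.5); reproduces ~a quarter
print(f"ratio < 0.5: {norm.cdf((np.log(0.5) - np.log(median)) / np.log(gsd)):.2f}")
```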
4P: fast computing of population genetics statistics from large DNA polymorphism panels
Benazzo, Andrea; Panziera, Alex; Bertorelle, Giorgio
2015-01-01
Massive DNA sequencing has significantly increased the amount of data available for population genetics and molecular ecology studies. However, the parallel computation of simple statistics within and between populations from large panels of polymorphic sites is not yet available, making the exploratory analyses of a set or subset of data a very laborious task. Here, we present 4P (parallel processing of polymorphism panels), a stand-alone software program for the rapid computation of genetic variation statistics (including the joint frequency spectrum) from millions of DNA variants in multiple individuals and multiple populations. It handles a standard input file format commonly used to store DNA variation from empirical or simulation experiments. The computational performance of 4P was evaluated using large SNP (single nucleotide polymorphism) datasets from human genomes or obtained by simulations. 4P was faster or much faster than other comparable programs, and the impact of parallel computing using multicore computers or servers was evident. 4P is a useful tool for biologists who need a simple and rapid computer program to run exploratory population genetics analyses in large panels of genomic data. It is also particularly suitable to analyze multiple data sets produced in simulation studies. Unix, Windows, and MacOs versions are provided, as well as the source code for easier pipeline implementations. PMID:25628874
GAMBIT: the global and modular beyond-the-standard-model inference tool
NASA Astrophysics Data System (ADS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-11-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
Appraising the self-assessed support needs of Turkish women with breast cancer.
Erci, B; Karabulut, N
2007-03-01
The purposes of this study were to establish the range of needs of women with breast cancer and to examine how women's needs might form clusters that could provide the basis for developing a standardized scale of needs for use by local breast care nurses in the evaluation of care. The sample consisted of 143 women with breast cancer who were admitted to the outpatient and inpatient oncology clinics in a university hospital in Erzurum, Turkey. The data were collected by questionnaire, and included demographic characteristics and the self-assessed support needs of women with breast cancer. Statistical analyses showed that the standardized scale of needs has statistically acceptable levels of reliability and validity. The women's support needs mostly clustered in Family and Friends (79%) and After Care (78.3%). The most frequently required support category was Family and Friends; however, the women were in need of support in all categories. Of the seven categories, Femininity and Body Image, and Family and Friends showed statistically significant differences across age ranges. Women experienced a high level of needs associated with a diagnosis of breast cancer. The results of this study should increase awareness among cancer care professionals about a range of psychosocial needs and may help them target particular patient groups for particular support interventions.
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
ERIC Educational Resources Information Center
Ozturk, Elif
2012-01-01
The present paper reviews two motivations for conducting "what if" analyses using Excel and "R" to understand statistical significance tests in the context of sample size. "What if" analyses can be used to teach students what statistical significance tests really do, and in applied research either prospectively to estimate what sample size…
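A minimal "what if" analysis of this kind in R (hypothetical effect size, not code from the paper): hold the observed standardized effect fixed and vary n to see where a two-sample t test would cross p < .05.

```r
# "What if" sweep: same effect size, different per-group sample sizes.
d <- 0.4                                  # assumed Cohen's d
for (n in c(10, 25, 50, 100, 200)) {
  t_stat <- d / sqrt(2 / n)               # t for two equal groups of size n
  p <- 2 * pt(-abs(t_stat), df = 2 * n - 2)
  cat(sprintf("n per group = %3d: t = %.2f, p = %.4f\n", n, t_stat, p))
}
```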
Lehmann, Corinne; Zipponi, Ingrid; Baumann, Marc U; Radlinger, Lorenz; Mueller, Michael D; Kuhn, Annette
2016-08-01
Pelvic floor rehabilitation is the conservative therapy of choice for women with stress urinary incontinence (SUI). The success rate of surgical procedures in SUI patients with intrinsic sphincter deficiency (ISD) is low. The aim of this study was to analyse the effect of standardized physiotherapy on patients with SUI and normotonic urethra and ISD. In this study, 64 patients with ISD and 69 patients with normotonic urethra were enrolled. Maximum urethral closure pressure (MUCP) >20 cm H2O was considered normotonic urethral pressure. Before and after physiotherapy, MUCP was measured and cough testing was performed. Additionally, patient-reported outcome was assessed using the King's Health Questionnaire (KHQ). For statistical analyses, Excel 2010 (Microsoft Inc; Redmond, Washington) and SPSS 20 (SPSS Inc; Chicago, Illinois) for Windows were used. Power calculation was based on the primary endpoints incontinence impact and general health. For power calculation, GraphPad Statmate version 2.00 for Windows was used. Sixty-four patients with ISD and 69 patients with normotonic urethra were included in the study. In SUI patients with normotonic and hypotonic urethra, KHQ scores for the primary endpoints "general health" and "incontinence impact" significantly improved following standardized physiotherapy. In both groups MUCP increased after physiotherapy. In SUI patients with ISD, standardized physiotherapy resulted in a decreased incidence of a positive cough test. Standardized physiotherapy should be offered to patients with SUI and ISD. Long-term results are subject to future studies. Neurourol. Urodynam. 35:711-716, 2016. © 2015 Wiley Periodicals, Inc.
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90 % of all variations of observed dependent variables (adjusted R square), and the model is significant (P < 0.001). The growth rate of total health expenditure increased by 1.21 standard deviations with a 1 standard deviation increase in the health workforce growth rate. Furthermore, it decreased by 1.12 standard deviations with a 1 standard deviation increase in the (negative) population growth rate. Finally, it increased by 0.38 standard deviations with a 1 standard deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). Study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causality relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
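These standard-deviation-unit readings are standardized regression coefficients; the sketch below shows how they arise from z-scored variables (simulated data, not the Serbian series).

```r
# Standardized coefficients: regress z-scored Y on z-scored predictors,
# so each slope is "SDs of Y per SD of X".
set.seed(2)
n <- 200                                  # simulated observations
workforce  <- rnorm(n); population <- rnorm(n); discharges <- rnorm(n)
the_growth <- 1.2 * workforce - 1.1 * population + 0.4 * discharges + rnorm(n)
fit <- lm(scale(the_growth) ~ scale(workforce) + scale(population) +
            scale(discharges))
coef(summary(fit))                        # slopes in standard-deviation units
```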
Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony
2013-04-01
Economic evaluation analyses can be enhanced by employing regression methods, which allow for the identification of important sub-groups, adjustment for imperfect randomisation in clinical trials, and the analysis of non-randomised data. To explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed, and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. Results: The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the level used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need for incorporation of econometric methods into cost-effectiveness analyses using the NB approach.
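A sketch of net-benefit regression with a bootstrapped acceptability curve, using simulated data and hypothetical variable names (not the trial's dataset):

```r
# Net benefit per patient: NB = lambda * effect - cost.
set.seed(3)
n <- 200
trt      <- rbinom(n, 1, 0.5)             # 1 = SSRI + supportive care
prev_dep <- rbinom(n, 1, 0.4)             # previous similar depression
effect <- 0.05 * trt - 0.08 * prev_dep + rnorm(n, 0.6, 0.2)  # e.g. QALYs
cost   <- 150 * trt + rnorm(n, 400, 100)

lambda <- 20000                           # willingness to pay per QALY
nb <- lambda * effect - cost
summary(lm(nb ~ trt * prev_dep))          # subgroup-adjusted NB regression

# One point of the CEAC: P(incremental NB > 0) over bootstrap resamples
mean(replicate(1000, {
  i <- sample(n, replace = TRUE)
  coef(lm(nb[i] ~ trt[i]))[2] > 0
}))
```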
Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.
Schmitt, M; Grub, J; Heib, F
2015-06-01
Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed approaches, by sigmoid fitting and by independent and dependent statistical analyses, which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data which are independent of the user's skill and the subjectivity of the operator, which is also urgently needed to evaluate dynamic measurements of contact angles. We will show in this contribution that the slightly modified procedures are also applicable to find specific angles for experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and the receding of the drop obtained by high-precision drop shape analysis are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small covered distance and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawing of the liquid, are identifiable, which confirms the flatness and the chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
Examining the Reproducibility of 6 Published Studies in Public Health Services and Systems Research.
Harris, Jenine K; B Wondmeneh, Sarah; Zhao, Yiqiang; Leider, Jonathon P
2018-02-23
Research replication, or repeating a study de novo, is the scientific standard for building evidence and identifying spurious results. While replication is ideal, it is often expensive and time consuming. Reproducibility, or reanalysis of data to verify published findings, is one proposed minimum alternative standard. While a lack of research reproducibility has been identified as a serious and prevalent problem in biomedical research and a few other fields, little work has been done to examine the reproducibility of public health research. We examined reproducibility in 6 studies from the public health services and systems research subfield of public health research. Following the methods described in each of the 6 papers, we computed the descriptive and inferential statistics for each study. We compared our results with the original study results and examined the percentage differences in descriptive statistics and differences in effect size, significance, and precision of inferential statistics. All project work was completed in 2017. We found consistency between original and reproduced results for each paper in at least 1 of the 4 areas examined. However, we also found some inconsistency. We identified incorrect transcription of results and omitting detail about data management and analyses as the primary contributors to the inconsistencies. Increasing reproducibility, or reanalysis of data to verify published results, can improve the quality of science. Researchers, journals, employers, and funders can all play a role in improving the reproducibility of science through several strategies including publishing data and statistical code, using guidelines to write clear and complete methods sections, conducting reproducibility reviews, and incentivizing reproducible science.
Predicting clinical trial results based on announcements of interim analyses
2014-01-01
Background Announcements of interim analyses of a clinical trial convey information about the results beyond the trial’s Data Safety Monitoring Board (DSMB). The amount of information conveyed may be minimal, but the fact that none of the trial’s stopping boundaries has been crossed implies that the experimental therapy is neither extremely effective nor hopeless. Predicting success of the ongoing trial is of interest to the trial’s sponsor, the medical community, pharmaceutical companies, and investors. We determine the probability of trial success by quantifying only the publicly available information from interim analyses of an ongoing trial. We illustrate our method in the context of the National Surgical Adjuvant Breast and Bowel (NSABP) trial, C-08. Methods We simulated trials based on the specifics of the NSABP C-08 protocol that were publicly available. We quantified the uncertainty around the treatment effect using prior weights for the various possibilities in light of other colon cancer studies and other studies of the investigational agent, bevacizumab. We considered alternative prior distributions. Results Subsequent to the trial’s third interim analysis, our predictive probabilities were: that the trial would eventually be successful, 48.0%; would stop for futility, 7.4%; and would continue to completion without statistical significance, 44.5%. The actual trial continued to completion without statistical significance. Conclusions Announcements of interim analyses provide information outside the DSMB’s sphere of confidentiality. This information is potentially helpful to clinical trial prognosticators. ‘Information leakage’ from standard interim analyses such as in NSABP C-08 is conventionally viewed as acceptable even though it may be quite revealing. Whether leakage from more aggressive types of adaptations is acceptable should be assessed at the design stage. PMID:24607270
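A toy version of this predictive calculation (illustrative boundaries and prior, not the C-08 specifics): simulate standardized effects from a prior, keep trials whose interim z statistic crossed no boundary, and tabulate eventual success.

```r
# Predictive P(success | no stop at the interim) under a prior on the effect.
set.seed(4)
sims  <- 1e4
theta <- rnorm(sims, 0, 1)                      # prior on the final-z drift
z_int <- rnorm(sims, theta * sqrt(0.5), 1)      # z at 50% information
alive <- z_int > 0 & z_int < 2.96               # no futility/efficacy stop
# Brownian-motion update: E[z_final] = theta, Var = 1
z_fin <- sqrt(0.5) * z_int +
         sqrt(0.5) * rnorm(sims, theta * sqrt(0.5), 1)
mean(z_fin[alive] > 1.96)                       # predictive success probability
```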
Vedula, S Swaroop; Li, Tianjing; Dickersin, Kay
2013-01-01
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation. For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished), with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials, 11 of which were published randomized controlled trials, and that provided the documents needed for planned comparisons. For three trials, there was disagreement on the number of randomized participants between the research report and publication. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses). Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Hydrology and trout populations of cold-water rivers of Michigan and Wisconsin
Hendrickson, G.E.; Knutilla, R.L.
1974-01-01
Statistical multiple-regression analyses showed significant relationships between trout populations and hydrologic parameters. Parameters showing the higher levels of significance were temperature, hardness of water, percentage of gravel bottom, percentage of bottom vegetation, variability of streamflow, and discharge per unit drainage area. Trout populations increase with lower levels of annual maximum water temperatures, with increase in water hardness, and with increase in percentage of gravel and bottom vegetation. Trout populations also increase with decrease in variability of streamflow, and with increase in discharge per unit drainage area. Most hydrologic parameters were significant when evaluated collectively, but no parameter, by itself, showed a high degree of correlation with trout populations in regression analyses that included all the streams sampled. Regression analyses of stream segments that were restricted to certain limits of hardness, temperature, or percentage of gravel bottom showed improvements in correlation. Analyses of trout populations (in pounds per acre and in pounds per mile) and hydrologic parameters resulted in regression equations from which trout populations could be estimated with standard errors of 89 and 84 percent, respectively.
Publication bias in obesity treatment trials?
Allison, D B; Faith, M S; Gorman, B S
1996-10-01
The present investigation examined the extent of publication bias (namely the tendency to publish significant findings and file away non-significant findings) within the obesity treatment literature. Quantitative literature synthesis of four published meta-analyses from the obesity treatment literature. Interventions in these studies included pharmacological, educational, child, and couples treatments. To assess publication bias, several regression procedures (for example weighted least-squares, random-effects multi-level modeling, and robust regression methods) were used to regress effect sizes onto their standard errors, or proxies thereof, within each of the four meta-analyses. A significant positive beta weight in these analyses signified publication bias. There was evidence for publication bias within two of the four published meta-analyses, such that reviews of published studies were likely to overestimate clinical efficacy. The lack of evidence for publication bias within the two other meta-analyses might have been due to insufficient statistical power rather than the absence of selection bias. As in other disciplines, publication bias appears to exist in the obesity treatment literature. Suggestions are offered for managing publication bias once identified or reducing its likelihood in the first place.
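An Egger-type version of the regression described, on simulated effect sizes (not the four obesity meta-analyses); a positive slope on the standard error signals small-study bias:

```r
# Regress effect sizes on their standard errors, weighted by inverse variance.
set.seed(5)
k  <- 30
se <- runif(k, 0.1, 0.6)                   # study standard errors
es <- rnorm(k, 0.3 + 0.8 * se, se)         # built-in small-study effect
summary(lm(es ~ se, weights = 1 / se^2))   # H0: slope (beta for se) = 0
```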
Mendell, M J; Eliseeva, E A; Davies, M M; Lobscheid, A
2016-08-01
Limited evidence has associated lower ventilation rates (VRs) in schools with reduced student learning or achievement. We analyzed longitudinal data collected over two school years from 150 classrooms in 28 schools within three California school districts. We estimated daily classroom VRs from real-time indoor carbon dioxide measured by web-connected sensors. School districts provided individual-level scores on standard tests in Math and English, and classroom-level demographic data. Analyses assessing learning effects used two VR metrics: average VRs for 30 days prior to tests, and proportion of prior daily VRs above specified thresholds during the year. We estimated relationships between scores and VR metrics in multivariate models with generalized estimating equations. All school districts had median school-year VRs below the California VR standard. Most models showed some positive associations of VRs with test scores; however, estimates varied in magnitude and few 95% confidence intervals excluded the null. Combined-district models estimated statistically significant increases of 0.6 points (P = 0.01) on English tests for each 10% increase in prior 30-day VRs. Estimated increases in Math were of similar magnitude but not statistically significant. Findings suggest potential small positive associations between classroom VRs and learning. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
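A sketch of the GEE specification described, assuming the geepack package and simulated classroom-clustered data (all variable names are hypothetical):

```r
# GEE with classrooms as clusters and an exchangeable working correlation.
library(geepack)
set.seed(6)
scores <- data.frame(classroom = rep(1:30, each = 20))
scores$vr_30day   <- rep(runif(30, 2, 12), each = 20)   # prior 30-day VR
scores$test_score <- 330 + 0.6 * scores$vr_30day + rnorm(600, 0, 15)
fit <- geeglm(test_score ~ vr_30day, id = classroom, data = scores,
              family = gaussian, corstr = "exchangeable")
summary(fit)   # robust (sandwich) standard errors for the VR coefficient
```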
Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models
NASA Astrophysics Data System (ADS)
Giordano, Stefano
2017-12-01
Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the equivalence of the ensembles in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.
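For reference, the textbook relation between the two ensembles (standard statistical mechanics, not a formula quoted from the paper): the Gibbs partition function is the discrete Laplace transform of the Helmholtz one, and ensemble equivalence means both yield the same force-extension relation in the thermodynamic limit.

```latex
% Gibbs (isotensional) partition function from the Helmholtz (isometric) one:
Z_G(f, T) = \sum_{r} Z_H(r, T)\, e^{\beta f r}, \qquad \beta = \frac{1}{k_B T},
% with the conjugate force-extension relations
\langle r \rangle_G = \frac{1}{\beta}\,\frac{\partial \ln Z_G}{\partial f},
\qquad
f_H(r) = -\frac{1}{\beta}\,\frac{\partial \ln Z_H}{\partial r}.
```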
A review of published analyses of case-cohort studies and recommendations for future reporting.
Sharp, Stephen J; Poulaliou, Manon; Thompson, Simon G; White, Ian R; Wood, Angela M
2014-01-01
The case-cohort study design combines the advantages of a cohort study with the efficiency of a nested case-control study. However, unlike more standard observational study designs, there are currently no guidelines for reporting results from case-cohort studies. Our aim was to review recent practice in reporting these studies, and develop recommendations for the future. By searching papers published in 24 major medical and epidemiological journals between January 2010 and March 2013 using PubMed, Scopus and Web of Knowledge, we identified 32 papers reporting case-cohort studies. The median subcohort sampling fraction was 4.1% (interquartile range 3.7% to 9.1%). The papers varied in their approaches to describing the numbers of individuals in the original cohort and the subcohort, presenting descriptive data, and in the level of detail provided about the statistical methods used, so it was not always possible to be sure that appropriate analyses had been conducted. Based on the findings of our review, we make recommendations about reporting of the study design, subcohort definition, numbers of participants, descriptive information and statistical methods, which could be used alongside existing STROBE guidelines for reporting observational studies.
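Where such methods sections are complete, a standard analysis can be reproduced directly; a sketch using the survival package's cch function with a hypothetical data frame (all variable names are assumptions, not from any reviewed paper):

```r
# Prentice-weighted case-cohort analysis; `ccdata` would hold one row per
# sampled subject: follow-up time, event status, subcohort indicator,
# a unique id, and covariates.
library(survival)
fit <- cch(Surv(time, event) ~ exposure + age,
           data = ccdata, subcoh = ~in_subcohort, id = ~id,
           cohort.size = 12000, method = "Prentice")
summary(fit)
```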
Linking land use with pesticides in Dutch surface waters.
Van 't Zelfde, M T; Tamis, W L M; Vijver, M G; De Snoo, G R
2012-01-01
Compared with other European countries The Netherlands has a relatively high level of pesticide consumption, particularly in agriculture. Many of the compounds concerned end up in surface waters. Surface water quality is routinely monitored and numerous pesticides are found to be present in high concentrations, with various standards being regularly exceeded. Many standards-breaching pesticides exhibit regional patterns that can be traced back to land use. These patterns have been statistically analysed by correlating surface area per land use category with standards exceedance per pesticide, thereby identifying numerous significant correlations with respect to breaches of both the ecotoxicological standard (Maximum Tolerable Risk, MTR) and the drinking water standard. In the case of the MTR, greenhouse horticulture, floriculture and bulb-growing have the highest number as well as percentage of standards-breaching pesticides, despite these market segments being relatively small in terms of area cropped. Cereals, onions, vegetables, perennial border plants and pulses are also associated with many pesticides that exceed the drinking water standard. When a correction is made for cropped acreage, cereals and potatoes also prove to be major contributors to monitoring sites where the MTR standard is exceeded. Over the period 1998-2006 the land-use categories with the most and the highest percentage of standards-exceeding pesticides (greenhouse horticulture, bulb-growing and flower cultivation) showed an increase in the percentage of standards-exceeding compounds.
[Quality assessment in anesthesia].
Kupperwasser, B
1996-01-01
Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, which establishes an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, which, given that accidents continue to occur despite safety systems and sophisticated technologies, checks all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate the problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, are the third step of the evaluation cycle. The Hawthorne effect is an improvement in outcomes that occurs before the implementation of any corrective actions. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.
Anderson, Craig S; Woodward, Mark; Arima, Hisatomi; Chen, Xiaoying; Lindley, Richard I; Wang, Xia; Chalmers, John
2015-12-01
The ENhanced Control of Hypertension And Thrombolysis strokE stuDy trial is a 2 × 2 quasi-factorial active-comparison, prospective, randomized, open, blinded endpoint clinical trial that is evaluating in thrombolysis-eligible acute ischemic stroke patients whether: (1) low-dose (0.6 mg/kg body weight) intravenous alteplase has noninferior efficacy and lower risk of symptomatic intracerebral hemorrhage compared with standard-dose (0.9 mg/kg body weight) intravenous alteplase; and (2) early intensive blood pressure lowering (systolic target 130-140 mmHg) has superior efficacy and lower risk of any intracerebral hemorrhage compared with guideline-recommended blood pressure control (systolic target <180 mmHg). To outline in detail the predetermined statistical analysis plan for the 'alteplase dose arm' of the study. All data collected by participating researchers will be reviewed and formally assessed. Information pertaining to the baseline characteristics of patients, their process of care, and the delivery of treatments will be classified, and for each item, appropriate descriptive statistical analyses are planned with appropriate comparisons made between randomized groups. For the trial outcomes, the most appropriate statistical comparisons to be made between groups are planned and described. A statistical analysis plan was developed for the results of the alteplase dose arm of the study that is transparent, available to the public, verifiable, and predetermined before completion of data collection. We have developed a predetermined statistical analysis plan for the ENhanced Control of Hypertension And Thrombolysis strokE stuDy alteplase dose arm which is to be followed to avoid analysis bias arising from prior knowledge of the study findings. © 2015 The Authors. International Journal of Stroke published by John Wiley & Sons Ltd on behalf of World Stroke Organization.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
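The lognormal-fitting step can be illustrated in a few lines of R (simulated diameters; MASS::fitdistr stands in for the study's fitting code):

```r
# Fit a lognormal reference model to particle diameters and report the
# relative standard errors (RSEs) of the fitted parameters.
library(MASS)
set.seed(7)
diam <- rlnorm(500, meanlog = log(27.6), sdlog = 0.09)   # ~30 nm particles
fit  <- fitdistr(diam, "lognormal")
rse  <- 100 * fit$sd / abs(fit$estimate)                 # RSE, percent
rbind(estimate = fit$estimate, rse_percent = rse)
```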
Latimer, Nicholas R; Abrams, Keith R; Amonkar, Mayur M; Stapelkamp, Ceilidh; Swann, R Suzanne
2015-07-01
Patients with previously untreated BRAF V600E mutation-positive melanoma in BREAK-3 showed a median overall survival (OS) of 18.2 months for dabrafenib versus 15.6 months for dacarbazine (hazard ratio [HR], 0.76; 95% confidence interval, 0.48-1.21). Because patients receiving dacarbazine were allowed to switch to dabrafenib at disease progression, we attempted to adjust for the confounding effects on OS. Rank preserving structural failure time models (RPSFTMs) and the iterative parameter estimation (IPE) algorithm were used. Two analyses, "treatment group" (assumes treatment effect could continue until death) and "on-treatment observed" (assumes treatment effect disappears with discontinuation), were used to test the assumptions around the durability of the treatment effect. A total of 36 of 63 patients (57%) receiving dacarbazine switched to dabrafenib. The adjusted OS HRs ranged from 0.50 to 0.55, depending on the analysis. The RPSFTM and IPE "treatment group" and "on-treatment observed" analyses performed similarly well. RPSFTM and IPE analyses resulted in point estimates for the OS HR that indicate a substantial increase in the treatment effect compared with the unadjusted OS HR of 0.76. The results are uncertain because of the assumptions associated with the adjustment methods. The confidence intervals continued to cross 1.00; thus, the adjusted estimates did not provide statistically significant evidence of a treatment benefit on survival. However, it is clear that a standard intention-to-treat analysis will be confounded in the presence of treatment switching-a reliance on unadjusted analyses could lead to inappropriate practice. Adjustment analyses provide useful additional information on the estimated treatment effects to inform decision making. Treatment switching is common in oncology trials, and the implications of this for the interpretation of the clinical effectiveness and cost-effectiveness of the novel treatment are important to consider. If patients who switch treatments benefit from the experimental treatment and a standard intention-to-treat analysis is conducted, the overall survival advantage associated with the new treatment could be underestimated. The present study applied established statistical methods to adjust for treatment switching in a trial that compared dabrafenib and dacarbazine for metastatic melanoma. The results showed that this led to a substantially increased estimate of the overall survival treatment effect associated with dabrafenib. ©AlphaMed Press.
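A self-contained sketch of the RPSFTM counterfactual idea on simulated data; this uses a simple rank-statistic balance condition rather than the log-rank g-estimation of the published analyses.

```r
# For a candidate effect psi, counterfactual untreated time is
# U(psi) = T_off + exp(psi) * T_on; psi_hat is the value at which the
# randomized arms are again exchangeable on U(psi).
set.seed(8)
n   <- 200
arm <- rbinom(n, 1, 0.5)                    # 1 = experimental arm
u   <- rexp(n, 0.1)                         # counterfactual untreated times
# fraction of counterfactual time on treatment: all of it in the
# experimental arm; 57% of controls switch partway through follow-up
frac_on  <- ifelse(arm == 1, 1, rbinom(n, 1, 0.57) * runif(n, 0.2, 0.8))
true_psi <- -0.4                            # exp(-psi) stretches treated time
t_off <- u * (1 - frac_on)
t_on  <- u * frac_on * exp(-true_psi)       # observed time on treatment

balance <- function(psi) {                  # ~0 when arms are exchangeable
  u_hat <- t_off + exp(psi) * t_on
  w <- wilcox.test(u_hat[arm == 1], u_hat[arm == 0])$statistic
  as.numeric(w) - sum(arm == 1) * sum(arm == 0) / 2
}
uniroot(balance, c(-2, 1))$root             # recovers psi near -0.4
```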
Oostdam, Nicolette; Bosmans, Judith; Wouters, Maurice G A J; Eekhoff, Elisabeth M W; van Mechelen, Willem; van Poppel, Mireille N M
2012-07-04
The prevalence of gestational diabetes mellitus (GDM) is increasing worldwide. GDM and the risks associated with GDM lead to increased health care costs and losses in productivity. The objective of this study is to evaluate whether the FitFor2 exercise program during pregnancy is cost-effective from a societal perspective as compared to standard care. A randomised controlled trial (RCT) and simultaneous economic evaluation of the FitFor2 program were conducted. Pregnant women at risk for GDM were randomised to an exercise program to prevent high maternal blood glucose (n = 62) or to standard care (n = 59). The exercise program consisted of two sessions of aerobic and strengthening exercises per week. Clinical outcome measures were maternal fasting blood glucose levels, insulin sensitivity and infant birth weight. Quality of life was measured using the EuroQol 5-D and quality-adjusted life-years (QALYs) were calculated. Resource utilization and sick leave data were collected by questionnaires. Data were analysed according to the intention-to-treat principle. Missing data were imputed using multiple imputations. Bootstrapping techniques estimated the uncertainty surrounding the cost differences and incremental cost-effectiveness ratios. There were no statistically significant differences in any outcome measure. During pregnancy, differences in total health care costs and costs of productivity losses were statistically non-significant (mean difference €1308; 95% CI €-229 to €3204). The cost-effectiveness analyses showed that the exercise program was not cost-effective in comparison to the control group for blood glucose levels, insulin sensitivity, infant birth weight or QALYs. The twice-weekly exercise program for pregnant women at risk for GDM evaluated in the present study was not cost-effective compared to standard care. Based on these results, implementation of this exercise program for the prevention of GDM cannot be recommended. NTR1139.
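The bootstrapping step described can be sketched as follows (simulated costs and QALYs with hypothetical values, not the trial dataset):

```r
# Bootstrap incremental cost and QALY differences, then summarise.
set.seed(9)
n1 <- 62; n0 <- 59
cost <- c(rnorm(n1, 4200, 900), rnorm(n0, 3900, 900))    # euros
qaly <- c(rnorm(n1, 0.71, 0.05), rnorm(n0, 0.70, 0.05))
grp  <- rep(1:0, c(n1, n0))

boot <- t(replicate(2000, {
  i <- c(sample(which(grp == 1), replace = TRUE),
         sample(which(grp == 0), replace = TRUE))
  c(dc = mean(cost[i][grp[i] == 1]) - mean(cost[i][grp[i] == 0]),
    de = mean(qaly[i][grp[i] == 1]) - mean(qaly[i][grp[i] == 0]))
}))
quantile(boot[, "dc"], c(0.025, 0.975))        # 95% CI, incremental cost
mean(20000 * boot[, "de"] - boot[, "dc"] > 0)  # P(cost-effective) at 20k/QALY
```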
Implementation errors in the GingerALE Software: Description and recommendations.
Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T
2017-01-01
Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.
Cunningham, Michael R.; Baumeister, Roy F.
2016-01-01
The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272
Fish: A New Computer Program for Friendly Introductory Statistics Help
ERIC Educational Resources Information Center
Brooks, Gordon P.; Raffle, Holly
2005-01-01
All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
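The core concepts tools like FISH demonstrate can be reproduced in a few lines of R (this is an illustration of the concepts, not the program itself):

```r
# Standard error of the mean shrinks as 1/sqrt(n), and sample means of a
# skewed variable become approximately normal (central limit theorem).
set.seed(10)
pop <- rexp(1e5, rate = 1)                  # skewed population, sd = 1
for (n in c(5, 30, 200)) {
  means <- replicate(2000, mean(sample(pop, n)))
  cat(sprintf("n = %3d: sd of sample means = %.3f, 1/sqrt(n) = %.3f\n",
              n, sd(means), 1 / sqrt(n)))
}
```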
Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien
2018-01-01
In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
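A minimal sketch of the calibration idea using a ratio estimator (simulated facilities, not the Honduras data): a known program-wide auxiliary total, here doses administered, sharpens a subsample-based estimate of total cost.

```r
# Compare a plain expansion estimator with a dose-calibrated ratio estimator.
set.seed(11)
N <- 1000; total_doses <- 5e6
doses <- rmultinom(1, total_doses, prob = runif(N))[, 1]  # doses per facility
cost  <- 2.5 * doses + rnorm(N, 0, 5000)                  # true facility costs
s <- sample(N, 80)                                        # costing subsample

expansion  <- N * mean(cost[s])                           # ignores auxiliary
calibrated <- total_doses * sum(cost[s]) / sum(doses[s])  # ratio estimator
c(expansion = expansion, calibrated = calibrated, truth = sum(cost))
```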
Zhang, Lin; Vranckx, Katleen; Janssens, Koen; Sandrin, Todd R.
2015-01-01
MALDI-TOF mass spectrometry has been shown to be a rapid and reliable tool for identification of bacteria at the genus and species, and in some cases, strain levels. Commercially available and open source software tools have been developed to facilitate identification; however, no universal/standardized data analysis pipeline has been described in the literature. Here, we provide a comprehensive and detailed demonstration of bacterial identification procedures using a MALDI-TOF mass spectrometer. Mass spectra were collected from 15 diverse bacteria isolated from Kartchner Caverns, AZ, USA, and identified by 16S rDNA sequencing. Databases were constructed in BioNumerics 7.1. Follow-up analyses of mass spectra were performed, including cluster analyses, peak matching, and statistical analyses. Identification was performed using blind-coded samples randomly selected from these 15 bacteria. Two identification methods are presented: similarity coefficient-based and biomarker-based methods. Results show that both identification methods can identify the bacteria to the species level. PMID:25590854
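A toy version of the similarity-coefficient route to identification (illustrative binning and peak lists, not BioNumerics code): bin two spectra on a common m/z grid and score them with a cosine similarity; the database entry with the highest score wins.

```r
# Cosine similarity between two binned mass spectra.
set.seed(12)
mz_grid <- seq(2000, 20000, by = 10)
make_spectrum <- function(peaks) {
  s <- numeric(length(mz_grid))
  s[findInterval(peaks, mz_grid)] <- runif(length(peaks), 0.2, 1)  # intensities
  s
}
ref     <- make_spectrum(c(3200, 4480, 6250, 9700, 13100))
unknown <- make_spectrum(c(3200, 4480, 6250, 9700, 13050))  # one shifted peak
cosine <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
cosine(ref, unknown)
```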
When ab ≠ c - c': published errors in the reports of single-mediator models.
Petrocelli, John V; Clarkson, Joshua J; Whitmire, Melanie B; Moon, Paul E
2013-06-01
Accurate reports of mediation analyses are critical to the assessment of inferences related to causality, since these inferences are consequential for both the evaluation of previous research (e.g., meta-analyses) and the progression of future research. However, upon reexamination, approximately 15% of published articles in psychology contain at least one incorrect statistical conclusion (Bakker & Wicherts, Behavior research methods, 43, 666-678 2011), disparities that beget the question of inaccuracy in mediation reports. To quantify this question of inaccuracy, articles reporting standard use of single-mediator models in three high-impact journals in personality and social psychology during 2011 were examined. More than 24% of the 156 models coded failed an equivalence test (i.e., ab = c - c'), suggesting that one or more regression coefficients in mediation analyses are frequently misreported. The authors cite common sources of errors, provide recommendations for enhanced accuracy in reports of single-mediator models, and discuss implications for alternative methods.
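The identity being audited can be checked directly in R: with OLS fitted on the same complete sample, the indirect effect a*b equals c - c' exactly (simulated single-mediator data).

```r
# Verify ab = c - c' for a single-mediator model.
set.seed(13)
n <- 500
x <- rnorm(n)
m <- 0.5 * x + rnorm(n)                    # a path
y <- 0.4 * m + 0.3 * x + rnorm(n)          # b and c' paths
a      <- coef(lm(m ~ x))[["x"]]
b      <- coef(lm(y ~ x + m))[["m"]]
c_tot  <- coef(lm(y ~ x))[["x"]]           # total effect c
c_prim <- coef(lm(y ~ x + m))[["x"]]       # direct effect c'
c(ab = a * b, c_minus_cprime = c_tot - c_prim)   # identical up to rounding
```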
Simourd, David J; Olver, Mark E; Brandenburg, Bryan
2016-09-01
The present study investigated the effect of a criminal attitude treatment program on changes in measured criminal attitudes and on postprogram recidivism. The criminal attitude program (CAP) is a standardized therapeutic curriculum consisting of 15 modules offering 44 hr of therapeutic time. It was delivered by trained facilitators to a total of 113 male offenders incarcerated in one of five state correctional institutions. Pretreatment and posttreatment comparisons were made on standardized measures of criminal attitudes, response bias, and motivation for lifestyle changes. Results showed statistically significantly lower criminal attitudes at posttreatment that were unaffected by response bias. There were also increases in motivation for lifestyle changes, but these did not reach statistical significance. Fifty-seven participants were released into the community following the program and were eligible for recidivism analyses. Comparisons between participants who completed the CAP and those who did not complete the CAP revealed 7% lower rearrest among CAP completers. Although preliminary, these results indicate that the CAP had a positive effect on changes to criminal attitudes and recidivism. The findings are discussed in terms of conceptual and practical considerations in the assessment and treatment of criminal attitudes among offenders. © The Author(s) 2015.
Martin, Roy C; Okonkwo, Ozioma C; Hill, Joni; Griffith, H Randall; Triebel, Kristen; Bartolucci, Alfred; Nicholas, Anthony P; Watts, Ray L; Stover, Natividad; Harrell, Lindy E; Clark, David; Marson, Daniel C
2008-10-15
Little is currently known about the higher order functional skills of patients with Parkinson disease and cognitive impairment. Medical decision-making capacity (MDC) was assessed in patients with Parkinson's disease (PD) with cognitive impairment and dementia. Participants were 16 patients with PD and cognitive impairment without dementia (PD-CIND), 16 patients with PD dementia (PDD), and 22 healthy older adults. All participants were administered the Capacity to Consent to Treatment Instrument (CCTI), a standardized capacity instrument assessing MDC under five different consent standards. Parametric and nonparametric statistical analyses were utilized to examine capacity performance on the consent standards. In addition, capacity outcomes (capable, marginally capable, or incapable outcomes) on the standards were identified for the two patient groups. Relative to controls, PD-CIND patients demonstrated significant impairment on the understanding treatment consent standard, clinically the most stringent CCTI standard. Relative to controls and PD-CIND patients, PDD patients were impaired on the three clinical standards of understanding, reasoning, and appreciation. The findings suggest that impairment in decisional capacity is already present in cognitively impaired patients with PD without dementia and increases as these patients develop dementia. Clinicians and researchers should carefully assess decisional capacity in all patients with PD with cognitive impairment. (c) 2008 Movement Disorder Society.
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
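The abstract does not give the statistic's exact form, so the sketch below only illustrates the general idea it describes: contrasting case and control allele-frequency vectors through a nonlinear (Shannon entropy) function, with the standard chi2 statistic computed for comparison. All counts are hypothetical assumptions:

```python
# Illustrative sketch of an entropy-style contrast between case and control
# allele-frequency vectors, alongside the standard chi2 statistic. The exact
# form of the published entropy statistic differs; this only shows how a
# nonlinear (log) transform of frequencies treats frequency differences.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return -np.sum(p * np.log(p))

def chi2_stat(case_counts, ctrl_counts):
    case = np.asarray(case_counts, float)
    ctrl = np.asarray(ctrl_counts, float)
    tot = case + ctrl
    n_case, n_ctrl = case.sum(), ctrl.sum()
    exp_case = tot * n_case / (n_case + n_ctrl)   # expected counts under H0
    exp_ctrl = tot * n_ctrl / (n_case + n_ctrl)
    return np.sum((case - exp_case) ** 2 / exp_case
                  + (ctrl - exp_ctrl) ** 2 / exp_ctrl)

case = [120, 80]   # hypothetical allele counts in cases (A, a)
ctrl = [100, 100]  # hypothetical allele counts in controls

p_case = np.array(case) / sum(case)
p_ctrl = np.array(ctrl) / sum(ctrl)
print("entropy difference:", shannon_entropy(p_case) - shannon_entropy(p_ctrl))
print("standard chi2:", chi2_stat(case, ctrl))
```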
77 FR 34044 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-08
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS); Subcommittee on Standards. Time and Date: June 20, 2012, 9 a.m.-5 p.m. EST..., Executive Secretary, NCVHS, National Center for Health Statistics, Centers for Disease Control and...
Shahbazi, Korosh; Solati, Kamal
2016-01-01
Introduction Irritable bowel syndrome (IBS) is one of the most prevalent gastroenterological disorders. IBS is characterized by abdominal pain, cramping, diarrhea, constipation, bloating and flatulence. Complementary therapy is a group of diverse therapeutic and health care systems and products that are used in the treatment of IBS. Hypnotherapy helps to alleviate the symptoms of a broad range of diseases and conditions. It can be used independently or along with other treatments. Aim This study was conducted to compare the therapeutic effect of hypnotherapy plus standard medical treatment with that of standard medical treatment alone on quality of life in patients with IBS. Materials and Methods This study is a clinical trial investigating 60 patients who were enrolled according to Rome-III criteria. The sample size was determined based on statistical advice, previous studies, and the standard formula for sample size calculation. The participants were randomly assigned to a hypnotherapy plus standard medical treatment group (n = 30) and a standard medical treatment group (n = 30). The study consisted of three steps: prior to treatment, after treatment, and six months after the last intervention (follow-up). The instruments of data gathering were a questionnaire on demographic characteristics and a standard questionnaire of quality of life for IBS patients (Quality of Life IBS-34). The data were analysed by analysis of covariance, Levene’s test and descriptive statistics in SPSS-18. Results There were significant differences between the two study groups at the post-treatment and follow-up stages with regard to quality of life (p<0.05). Conclusion Psychological intervention, particularly hypnotherapy, alongside standard medical therapy could contribute to improving quality of life, pain and fatigue, and psychological disorder in IBS patients resistant to treatment. Also, therapeutic costs, hospital stay and days lost from work could be decreased and patients’ efficiency could be increased. PMID:27437261
Singla, Sanjeev; Mittal, Geeta; Raghav; Mittal, Rajinder K
2014-01-01
Background: Abdominal pain and shoulder tip pain after laparoscopic cholecystectomy are distressing for the patient. Causes of this pain include peritoneal stretching and diaphragmatic irritation by the high intra-abdominal pressure caused by pneumoperitoneum. We designed a study to compare postoperative pain after laparoscopic cholecystectomy at low pressure (7-8 mm Hg) and standard pressure (12-14 mm Hg). Aim: To compare the effect of low pressure and standard pressure pneumoperitoneum on post laparoscopic cholecystectomy pain, and further to study the safety of low pressure pneumoperitoneum in laparoscopic cholecystectomy. Settings and Design: A prospective randomised double blind study. Materials and Methods: A prospective randomised double blind study was done in 100 ASA grade I & II patients. They were divided into two groups of 50 each. Group A patients underwent laparoscopic cholecystectomy with low pressure pneumoperitoneum (7-8 mm Hg) while group B underwent laparoscopic cholecystectomy with standard pressure pneumoperitoneum (12-13 mm Hg). Both groups were compared for pain intensity, analgesic requirement and complications. Statistical Analysis: Demographic data and intraoperative complications were analysed using the chi-square test. Frequency of pain, intensity of pain and analgesic consumption were compared by applying the ANOVA test. Results: The postoperative pain score was significantly lower in the low pressure group than in the standard pressure group. The number of patients requiring rescue analgesic doses was higher in the standard pressure group; this was statistically significant. Total analgesic consumption was also higher in the standard pressure group. There was no difference in intraoperative complications. Conclusion: This study demonstrates that the simple expedient of reducing the pressure of pneumoperitoneum to 8 mm Hg results in a reduction in both the intensity and frequency of postoperative pain, and hence early recovery and better outcome. This study also shows that the low pressure technique is safe, with a comparable rate of intraoperative complications. PMID:24701492
Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udey, Ruth Norma
Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.
The estimation of the measurement results with using statistical methods
NASA Astrophysics Data System (ADS)
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes, for the purpose of analysing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. To support this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results were constructed.
Building a framework for ergonomic research on laparoscopic instrument handles.
Li, Zheng; Wang, Guohui; Tan, Juan; Sun, Xulong; Lin, Hao; Zhu, Shaihong
2016-06-01
Laparoscopic surgery carries the advantage of minimal invasiveness, but ergonomic design of the instruments used has progressed slowly. Previous studies have demonstrated that the handle of laparoscopic instruments is vital for both surgical performance and surgeon's health. This review provides an overview of the sub-discipline of handle ergonomics, including an evaluation framework, objective and subjective assessment systems, data collection and statistical analyses. Furthermore, a framework for ergonomic research on laparoscopic instrument handles is proposed to standardize work on instrument design. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1976-01-01
A representative set of payloads for both science and applications disciplines was selected to ensure a realistic and statistically significant estimate of equipment utilization. The selected payloads were analyzed to determine the applicability of Nuclear Instrumentation Module (NIM)/Computer Automated Measurement and Control (CAMAC) equipment in satisfying their data acquisition and control requirements. The results of these analyses were combined with comparable results from related studies to arrive at an overall assessment of the applicability and commonality of NIM/CAMAC equipment usage across the spectrum of payloads.
The Second National Ballistics Imaging Comparison (NBIC-2)
Vorburger, TV; Yen, J; Song, JF; Thompson, RM; Renegar, TB; Zheng, A; Tong, M; Ols, M
2014-01-01
In response to the guidelines issued by the American Society of Crime Laboratory Directors/Laboratory Accreditation Board (ASCLD/LAB-International) to establish traceability and quality assurance in U.S. crime laboratories, NIST and the ATF initiated a joint project, entitled the National Ballistics Imaging Comparison (NBIC). The NBIC project aims to establish a national traceability and quality system for ballistics identifications in crime laboratories utilizing ATF’s National Integrated Ballistics Information Network (NIBIN). The original NBIC was completed in 2010. In the second NBIC, NIST Standard Reference Material (SRM) 2461 Cartridge Cases were used as reference standards, and 14 experts from 11 U.S. crime laboratories each performed 17 image acquisitions and correlations of the SRM cartridge cases over the course of about half a year. Resulting correlation scores were collected by NIST for statistical analyses, from which control charts and control limits were developed for the proposed quality system and for promoting future assessments and accreditations for firearm evidence in U.S. forensic laboratories in accordance with the ISO 17025 Standard. PMID:26601051
Hong, Na; Prodduturi, Naresh; Wang, Chen; Jiang, Guoqian
2017-01-01
In this study, we describe our efforts in building a clinical statistics and analysis application platform using an emerging clinical data standard, HL7 FHIR, and an open source web application framework, Shiny. We designed two primary workflows that integrate a series of R packages to enable both patient-centered and cohort-based interactive analyses. We leveraged Shiny with R to develop interactive interfaces on FHIR-based data and used ovarian cancer study datasets as a use case to implement a prototype. Specifically, we implemented patient index, patient-centered data report and analysis, and cohort analysis. The evaluation of our study was performed by testing the adaptability of the framework on two public FHIR servers. We identify common research requirements and current outstanding issues, and discuss future enhancement work of the current studies. Overall, our study demonstrated that it is feasible to use Shiny for implementing interactive analysis on FHIR-based standardized clinical data.
Schlichtiger, Jenny; Haas, Johannes-Peter; Barth, Swaantje; Bisdorff, Betty; Hager, Lisa; Michels, Hartmut; Hügle, Boris; Radon, Katja
2017-05-22
Although several studies show that JIA patients have significantly lower employment rates than the general population, research on educational and occupational attainment in patients with juvenile idiopathic arthritis (JIA) remains conflicting, most likely due to small sample sizes. Therefore, the aim of this study was to compare the educational achievements and employment status of 3698 JIA patients with the German general population (GGP). "SEPIA" was a large cross-sectional study on the current status of a historic cohort of JIA patients treated in a single center between 1952 and 2010. For the analyses of education and employment, a sub-cohort was extracted, including only adult cases with a confirmed diagnosis of JIA (N = 2696). Participants were asked to fill out a standardized written questionnaire on education and employment. Outcome measures (education/unemployment) were directly standardized to the GGP using data obtained from the National Educational Panel Study 2013 (N = 11,728) and the German Unemployment Statistics 2012 of the Federal Statistical Office (N = 42,791,000). After age- and sex-standardization, 3% (95% Confidence Interval 1.9 to 4.1%) more of the JIA patients (26%) than of the GGP (23%) had only reached primary education. In contrast, parents of JIA patients had similar levels of education as parents in the GGP. With a standardized difference of 0.2% (95% CI: 0.16 to 0.19%), the unemployment rate in JIA patients was slightly, but not significantly, higher than in the GGP. Stratifying for disease duration and current treatment status, differences were confirmed for persons diagnosed before 2001, whilst for patients diagnosed after 2000, differences were found only in JIA patients with ongoing disease. Medium and high educational achievements did not differ statistically significantly between JIA patients and the GGP. Educational achievements in German JIA patients are significantly lower than in the GGP. Furthermore, we identified a slightly higher level of unemployment, especially in patients still under treatment and those with longer disease duration. Better treatment options as well as further development of social support programs might help to overcome this lifelong secondary effect of JIA.
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
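A schematic of the sensitivity-based adjustment described above, assuming a fitted linear relation between first-order sensitivities and hourly ozone, and an illustrative stepwise application of large emission cuts. The 25%-per-step choice, the synthetic data, and the functional form are all assumptions; the paper's implementation works directly from CMAQ-HDDM output:

```python
# Schematic of the sensitivity-based "rollback" idea: fit a linear relation
# between modeled first-order sensitivities and hourly ozone, then adjust
# observed concentrations stepwise so that large emission cuts are applied
# as a sequence of smaller first-order increments.
import numpy as np

rng = np.random.default_rng(0)
ozone = rng.uniform(20, 90, 200)                   # hourly ozone, ppb (synthetic)
sens = 0.15 * ozone - 2.0 + rng.normal(0, 1, 200)  # illustrative dO3 per full NOx cut

b, a = np.polyfit(ozone, sens, 1)                  # sens ~ a + b * ozone

def rollback(obs, total_cut, step=0.25):
    """Apply `total_cut` (e.g. 0.8 = 80% emission cut) in small steps,
    re-evaluating the fitted sensitivity at each intermediate ozone level."""
    remaining = total_cut
    o3 = obs
    while remaining > 1e-9:
        frac = min(step, remaining)
        o3 = o3 - frac * (a + b * o3)              # first-order response
        remaining -= frac
    return o3

print(rollback(70.0, 0.80))   # adjusted ozone after an 80% cut, ppb
```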
Comparative measurements using different particle size instruments
NASA Technical Reports Server (NTRS)
Chigier, N.
1984-01-01
This paper discusses the measurement and comparison of particle size and velocity measurements in sprays. The general nature of sprays and the development of standard, consistent research sprays are described. The instruments considered in this paper are: pulsed laser photography, holography, television, and cinematography; laser anemometry and interferometry using visibility, peak amplitude, and intensity ratioing; and laser diffraction. Calibration is by graticule, reticle, powders with known size distributions in liquid cells, monosize sprays, and, eventually, standard sprays. Statistical analyses including spatial and temporal long-time averaging as well as high-frequency response time histories with conditional sampling are examined. Previous attempts at comparing instruments, the making of simultaneous or consecutive measurements with similar types and different types of imaging, interferometric, and diffraction instruments are reviewed. A program of calibration and experiments for comparing and assessing different instruments is presented.
NASA Astrophysics Data System (ADS)
Bucher, François-Xavier; Cao, Frédéric; Viard, Clément; Guichard, Frédéric
2014-03-01
We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag measurement beyond the standard by including negative shutter lag, a phenomenon that is more and more commonly found in smartphones. The device is computer-controlled, and this feature, combined with measurement algorithms, makes it possible to automate a large series of captures so as to provide more refined statistical analyses when, for example, the shutter lag of "zero shutter lag" devices is limited by the frame time, as our measurements confirm.
[Biomechanical significance of the acetabular roof and its reaction to mechanical injury].
Domazet, N; Starović, D; Nedeljković, R
1999-01-01
The introduction of morphometry into the quantitative analysis of the bone system and the functional adaptation of the acetabulum to mechanical damage and injury enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of the acetabulum can be done by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data on the behaviour of the acetabular roof, the so-called "eyebrow", by morphometric analyses during different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. The average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements, the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with a standard deviation of 0.93, and in men from 29.4 mm to 40.3 mm with a standard deviation of 1.54. The average size in the whole population was 32.1 mm with a standard deviation of 15.61. Statistical analyses revealed a statistically significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05), the relationship being inverse (Graph 1). In female patients, however, the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphyseal angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphyseal angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphyseal angle ranged from 21.3 mm to 35.2 mm with a standard deviation of 1.60. There was no statistically significant correlation between the "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) or female (r = 0.005; p > 0.05) patients. The "eyebrow" length was proportionally dependent on the size of the shortened extremity in all examined subjects. This dependence was statistically significant both in female (r = 0.208; p < 0.05) and male (r = 0.193; p < 0.05) patients. The study revealed that the fossa acetabuli was directed forward, downward and laterally. The size, form and cross-section of the acetabulum changed under different loads. Dimensions and morphological changes in the acetabulum showed some, but insignificant, changes in comparison to the control group. These findings are graphically presented in Figure 5 and numerically in Tables 1 and 2. The study of the spatial orientation of the hip joints confirmed that the fossa acetabuli was directed forward, downward and laterally; this was in accordance with the results of other authors (1, 7, 9, 15, 18). There was a statistically significant difference in "eyebrow" size between patients and normal subjects (t = 3.88; p < 0.05). The average difference in "eyebrow" size was 6.892 mm, with a larger "eyebrow" found in subjects with a normally loaded hip. There was also a significant difference in "eyebrow" size between patients and healthy female subjects (t = 4.605; p < 0.05), with a larger "eyebrow", by 8.79 mm, found in female subjects with a normally loaded hip. On the basis of our study it can be concluded that findings related to changes in the acetabular roof, the so-called "eyebrow", are important in the diagnosis, follow-up and therapy of the pathogenetic processes underlying these disorders.
78 FR 65317 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: November 12, 2013 8:30 a.m.-5:30 p.m. EST. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...
78 FR 54470 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: September 18, 2013 8:30 a.m.-5:00 p.m. EDT. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...
78 FR 942 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: February 27, 2013 9:30 a.m.-5:00 p.m... electronic claims attachments. The National Committee on Vital Health Statistics is the public advisory body...
78 FR 34100 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: June 17, 2013 1:00 p.m.-5:00 p.m. e.d..., National Center for Health Statistics, 3311 Toledo Road, Auditorium B & C, Hyattsville, Maryland 20782...
Statistical Characterization of School Bus Drive Cycles Collected via Onboard Logging Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duran, A.; Walkowicz, K.
In an effort to characterize the dynamics typical of school bus operation, National Renewable Energy Laboratory (NREL) researchers set out to gather in-use duty cycle data from school bus fleets operating across the country. Employing a combination of Isaac Instruments GPS/CAN data loggers in conjunction with existing onboard telemetric systems resulted in the capture of operating information for more than 200 individual vehicles in three geographically unique domestic locations. In total, over 1,500 individual operational route shifts from Washington, New York, and Colorado were collected. Upon completing the collection of in-use field data using either NREL-installed data acquisition devices or existing onboard telemetry systems, large-scale duty-cycle statistical analyses were performed to examine underlying vehicle dynamics trends within the data and to explore vehicle operation variations between fleet locations. Based on the results of these analyses, high, low, and average vehicle dynamics requirements were determined, resulting in the selection of representative standard chassis dynamometer test cycles for each condition. In this paper, the methodology and accompanying results of the large-scale duty-cycle statistical analysis are presented, including graphical and tabular representations of a number of relationships between key duty-cycle metrics observed within the larger data set. In addition to presenting the results of this analysis, conclusions are drawn and presented regarding potential applications of advanced vehicle technology as it relates specifically to school buses.
The statistical analysis of global climate change studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, J.W.
1992-01-01
The focus of this work is to contribute to the enhancement of the relationship between climatologists and statisticians. The analysis of global change data has been underway for many years by atmospheric scientists. Much of this analysis includes a heavy reliance on statistics and statistical inference. Some specific climatological analyses are presented and the dependence on statistics is documented before the analysis is undertaken. The first problem presented involves the fluctuation-dissipation theorem and its application to global climate models. This problem has a sound theoretical niche in the literature of both climate modeling and physics, but a statistical analysis in which the data are obtained from the model to show graphically the relationship has not been undertaken. It is under this motivation that the author presents this problem. A second problem concerning the standard errors in estimating global temperatures is purely statistical in nature, although very little material exists for sampling on such a frame. This problem has not only climatological and statistical ramifications, but political ones as well. It is planned to use these results in a further analysis of global warming using actual data collected on the earth. In order to simplify the analysis of these problems, the development of a computer program, MISHA, is presented. This interactive program contains many of the routines, functions, graphics, and map projections needed by the climatologist in order to effectively enter the arena of data visualization.
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
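A one-line illustration of that relationship: the coefficient of variation is the standard deviation divided by the mean, so it expresses spread on a scale-free basis. The test scores below are hypothetical:

```python
# The coefficient of variation expresses the standard deviation as a
# fraction of the mean, making spread comparable across different scales.
import statistics

scores = [72, 85, 90, 68, 77]          # hypothetical test scores
cv = statistics.stdev(scores) / statistics.mean(scores)
print(f"CV = {cv:.3f}")                # ≈ 0.116, i.e. the sd is ~11.6% of the mean
```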
Lima, Lucia Helena Mello de; Mattar, Rosiane; Abrahão, Anelise Riedel
2016-06-15
The aim of this study was to estimate the prevalence of domestic violence in adolescent and adult mothers who were admitted to obstetrics services centers in Brazil and to identify risk factors of domestic violence and any adverse obstetric and perinatal outcomes. Researchers used standardized interviews, the questionnaire Abuse Assessment Screen, and a review of patients' medical records. Descriptive statistical analyses were also used. The prevalence of domestic violence among all participants totaled 40.1% (38.5% of adolescents, 41.7% of adults). Factors associated with domestic violence during pregnancy were as follows: a history of family violence, a greater number of sexual partners, and being a smoker. No statistically significant association was found for adverse obstetric and perinatal outcomes. Results showed that, in Vitória, Espírito Santo, Brazil, pregnancy did not protect a woman from suffering domestic violence. © The Author(s) 2016.
Eljamel, M Sam; Mahboob, Syed Osama
2016-12-01
Surgical resection of high-grade gliomas (HGG) is standard therapy because it imparts significant progression-free survival (PFS) and overall survival (OS) benefits. However, HGG tumor margins are indistinguishable from normal brain during surgery. Hence intraoperative technologies such as fluorescence (ALA, fluorescein) and intraoperative ultrasound (IoUS) and MRI (IoMRI) have been deployed. This study compares the effectiveness and cost-effectiveness of these technologies. Critical literature review and meta-analyses, using the MEDLINE/PubMed service. The list of references in each article was double-checked for any missing references. We included all studies that reported the use of ALA, fluorescein (FLCN), IoUS or IoMRI to guide HGG surgery. The meta-analyses were conducted according to statistical heterogeneity between studies. If there was no heterogeneity, a fixed effects model was used; otherwise, a random effects model was used. Statistical heterogeneity was explored by χ2 and inconsistency (I2) statistics. To assess cost-effectiveness, we calculated the incremental cost per quality-adjusted life-year (QALY). Gross total resection (GTR) after ALA, FLCN, IoUS and IoMRI was 69.1%, 84.4%, 73.4% and 70% respectively. The differences were not statistically significant. All four techniques led to significant prolongation of PFS and tended to prolong OS. However, none of these technologies led to significant prolongation of OS compared to controls. The cost/QALY was $16,218, $3181, $6049 and $32,954 for ALA, FLCN, IoUS and IoMRI respectively. ALA, FLCN, IoUS and IoMRI significantly improve GTR and PFS of HGG. Their incremental cost was below the threshold for cost-effectiveness of HGG therapy, denoting that each intraoperative technology was cost-effective on its own. Copyright © 2016 Elsevier B.V. All rights reserved.
Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence
2016-06-01
observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or ... indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for ...
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measures of fit were used to evaluate accuracy, together with R-squared and the Henriksson-Merton p value: the mean absolute percentage error (MAPE), a measure of the accuracy of the fitted periodic oscillation; the mean absolute deviation (MAD), a measure of the average absolute deviation from the fitted values; and the mean squared deviation (MSD), a measure of the squared deviation from the fitted values. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
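For reference, the three decomposition accuracy measures can be computed directly from the observed and fitted series. The sketch below uses the conventional definitions (an assumption, since the abstract does not spell out its formulas) and hypothetical values:

```python
# Sketch of the three fit diagnostics named above, computed from observed
# values and the fitted values of a decomposition (both series hypothetical).
import numpy as np

obs = np.array([10.2, 11.8, 9.5, 10.9, 12.1])
fit = np.array([10.0, 12.0, 9.8, 10.5, 12.0])

err = obs - fit
mape = np.mean(np.abs(err / obs)) * 100   # mean absolute percentage error
mad  = np.mean(np.abs(err))               # mean absolute deviation
msd  = np.mean(err ** 2)                  # mean squared deviation

print(f"MAPE={mape:.2f}%  MAD={mad:.3f}  MSD={msd:.4f}")
```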
[Statistical analysis using freely-available "EZR (Easy R)" software].
Kanda, Yoshinobu
2015-10-01
Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named the result "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses (including competing risk analyses and the use of time-dependent covariates), by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and ensure that the statistical process is overseen by a supervisor.
Hayat, Matthew J
2014-04-01
Statistics coursework is usually a core curriculum requirement for nursing students at all degree levels. The American Association of Colleges of Nursing (AACN) establishes curriculum standards for academic nursing programs. However, the AACN provides little guidance on statistics education and does not offer standardized competency guidelines or recommendations about course content or learning objectives. Published standards may be used in the course development process to clarify course content and learning objectives. This article includes suggestions for implementing and integrating recommendations given in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) report into statistics education for nursing students. Copyright 2014, SLACK Incorporated.
Fast and accurate imputation of summary statistics enhances evidence of functional enrichment.
Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P; Patterson, Nick; Price, Alkes L
2014-10-15
Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov model (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1-5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case-control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary materials are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
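A minimal sketch of the general conditional-Gaussian idea: impute the z-score at an untyped SNP as a weighted combination of typed-SNP z-scores, with weights derived from a ridge-regularized reference LD matrix. The LD values, z-scores and regularization constant below are illustrative assumptions, not the published implementation:

```python
# Conditional-Gaussian imputation of an association z-score at an untyped
# SNP from z-scores at typed SNPs, using a reference LD (correlation)
# matrix. The ridge term `lam` stands in for finite reference-panel size.
import numpy as np

z_typed = np.array([2.1, 1.8, 2.5])        # observed z-scores at typed SNPs
R_tt = np.array([[1.0, 0.6, 0.4],          # LD among typed SNPs (reference)
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])
r_ut = np.array([0.7, 0.5, 0.6])           # LD of untyped SNP with typed SNPs

lam = 0.1                                  # ridge regularization
w = r_ut @ np.linalg.inv(R_tt + lam * np.eye(3))   # imputation weights
z_imputed = w @ z_typed
info = w @ r_ut                            # approximate imputation quality
print(f"imputed z = {z_imputed:.2f}, info ≈ {info:.2f}")
```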
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
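The multinomial assumption can be illustrated by simulation: draw norming samples from a multinomial distribution over test scores and bootstrap the standard error of a percentile rank. This is a generic sketch with hypothetical score probabilities, not the paper's closed-form derivation:

```python
# Simulation check of a norm-statistic standard error under multinomial
# sampling: bootstrap the SE of a percentile rank from repeated draws.
import numpy as np

rng = np.random.default_rng(1)
scores = np.arange(0, 11)                       # possible test scores 0..10
probs = rng.dirichlet(np.ones(11))              # hypothetical population proportions
n = 500                                         # norming-sample size

def percentile_rank(counts, score):
    """Percent of the sample scoring below `score`, plus half of the ties."""
    below = counts[:score].sum()
    return 100 * (below + 0.5 * counts[score]) / counts.sum()

reps = [percentile_rank(rng.multinomial(n, probs), 7) for _ in range(2000)]
print(f"simulated SE of percentile rank at score 7: {np.std(reps):.2f}")
```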
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes for most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore such sampling scheme by directly correlating imaging phenotypes (called the secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression methods, while comparing it with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I
2018-05-01
Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.
A Study on the Development of Service Quality Index for Incheon International Airport
NASA Technical Reports Server (NTRS)
Lee, Kang Seok; Lee, Seung Chang; Hong, Soon Kil
2003-01-01
The main purpose of this study is located at developing Ominibus Monitors System(OMS) for internal management, which will enable to establish standards, finding out matters to be improved, and appreciation for its treatment in a systematic way. It is through developing subjective or objective estimation tool with use importance, perceived level, and complex index at international airport by each principal service items. The direction of this study came towards for the purpose of developing a metric analysis tool, utilizing the Quantitative Second Data, Analysing Perceived Data through airport user surveys, systemizing the data collection-input-analysis process, making data image according to graph of results, planning Service Encounter and endowing control attribution, and ensuring competitiveness at the minimal international standards. It is much important to set up a pre-investigation plan on the base of existent foreign literature and actual inspection to international airport. Two tasks have been executed together on the base of this pre-investigation; one is developing subjective estimation standards for departing party, entering party, and airport residence and the other is developing objective standards as complementary methods. The study has processed for the purpose of monitoring services at airports regularly and irregularly through developing software system for operating standards after ensuring credibility and feasibility of estimation standards with substantial and statistical way.
A comprehensive neuropsychological mapping battery for functional magnetic resonance imaging.
Karakas, Sirel; Baran, Zeynel; Ceylan, Arzu Ozkan; Tileylioglu, Emre; Tali, Turgut; Karakas, Hakki Muammer
2013-11-01
Existing batteries for FMRI do not precisely meet the criteria for comprehensive mapping of cognitive functions within minimum data acquisition times using standard scanners and head coils. The goal was to develop a battery of neuropsychological paradigms for FMRI that can also be used in other brain imaging techniques and behavioural research. Participants were 61 healthy, young adult volunteers (48 females and 13 males, mean age: 22.25 ± 3.39 years) from the university community. The battery included 8 paradigms for basic (visual, auditory, sensory-motor, emotional arousal) and complex (language, working memory, inhibition/interference control, learning) cognitive functions. Imaging was performed using standard functional imaging capabilities (1.5-T MR scanner, standard head coil). Structural and functional data series were analysed using Brain Voyager QX2.9 and Statistical Parametric Mapping-8. For basic processes, activation centres for individuals were within a distance of 3-11 mm of the group centres of the target regions and for complex cognitive processes, between 7 mm and 15 mm. Based on fixed-effect and random-effects analyses, the distance between the activation centres was 0-4 mm. There was spatial variability between individual cases; however, as shown by the distances between the centres found with fixed-effect and random-effects analyses, the coordinates for individual cases can be used to represent those of the group. The findings show that the neuropsychological brain mapping battery described here can be used in basic science studies that investigate the relationship of the brain to the mind and also as functional localiser in clinical studies for diagnosis, follow-up and pre-surgical mapping. © 2013.
High statistical heterogeneity is more frequent in meta-analysis of continuous than binary outcomes.
Alba, Ana C; Alexander, Paul E; Chang, Joanne; MacIsaac, John; DeFry, Samantha; Guyatt, Gordon H
2016-02-01
We compared the distribution of heterogeneity in meta-analyses of binary and continuous outcomes. We searched citations in MEDLINE and Cochrane databases for meta-analyses of randomized trials published in 2012 that reported a measure of heterogeneity of either binary or continuous outcomes. Two reviewers independently performed eligibility screening and data abstraction. We evaluated the distribution of I2 in meta-analyses of binary and continuous outcomes and explored hypotheses explaining the difference in distributions. After full-text screening, we selected 671 meta-analyses evaluating 557 binary and 352 continuous outcomes. Heterogeneity as assessed by I2 proved higher in continuous than in binary outcomes: the proportion of continuous and binary outcomes reporting an I2 of 0% was 34% vs. 52%, respectively, and reporting an I2 of 60-100% was 39% vs. 14%. In continuous but not binary outcomes, I2 increased with a larger number of studies included in a meta-analysis. Increased precision and sample size do not explain the larger I2 found in meta-analyses of continuous outcomes with a larger number of studies. Meta-analyses evaluating continuous outcomes showed substantially higher I2 than meta-analyses of binary outcomes. Results suggest differing standards for interpreting I2 in continuous vs. binary outcomes may be appropriate. Copyright © 2016 Elsevier Inc. All rights reserved.
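For reference, I2 is derived from Cochran's Q under inverse-variance weighting. A standard computation, with hypothetical study effects and standard errors:

```python
# Standard computation of Cochran's Q and I2 from per-study effect
# estimates and standard errors (values hypothetical).
import numpy as np

effects = np.array([0.30, 0.10, 0.45, 0.25])   # study effect estimates
se = np.array([0.12, 0.15, 0.10, 0.20])        # their standard errors

w = 1 / se**2                                  # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
Q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100              # I2 as a percentage

print(f"Q={Q:.2f} on {df} df, I2={I2:.1f}%")
```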
Full in-vitro analyses of new-generation bulk fill dental composites cured by halogen light.
Tekin, Tuçe Hazal; Kantürk Figen, Aysel; Yılmaz Atalı, Pınar; Coşkuner Filiz, Bilge; Pişkin, Mehmet Burçin
2017-08-01
The objective of this study was to investigate the full in-vitro analyses of new-generation bulk-fill dental composites cured by halogen light (HLG). Four composites of two types were studied: Surefill SDR (SDR) and Xtra Base (XB) as bulk-fill flowable materials; QuixFill (QF) and XtraFill (XF) as packable bulk-fill materials. Samples were prepared for each analysis and test by applying the same procedure, but with different diameters and thicknesses appropriate to the analysis and test requirements. Thermal properties were determined by thermogravimetric analysis (TG/DTG) and differential scanning calorimetry (DSC); the Vickers microhardness (VHN) was measured after 1, 7, 15 and 30 days of storage in water. The degree of conversion values for the materials (DC, %) were immediately measured using near-infrared spectroscopy (FT-IR). The surface morphology of the composites was investigated by scanning electron microscopy (SEM) and atomic-force microscopy (AFM) analyses. The sorption and solubility measurements were also performed after 1, 7, 15 and 30 days of storage in water. In addition, the data were statistically analyzed using one-way analysis of variance and both the Newman-Keuls and Tukey multiple comparison tests. The statistical significance level was established at p<0.05. According to the ISO 4049 standards, all the tested materials showed acceptable water sorption and solubility, and a halogen light source was an option to polymerize bulk-fill, resin-based dental composites. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick
2006-07-01
Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCCM and ASME Code lower-bound K_IC based on the indexing parameter RT_NDT. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K_JC data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. Therefore we focused exclusively on the basic French reactor vessel metal of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T_0. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty, and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
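For reference, the median toughness of the master curve is conventionally written in the ASTM E1921 form (stated here from the standard, not taken from the article), with T in °C, K in MPa·m^(1/2), and T_0 the reference temperature estimated from the K_JC data:

```latex
% Median fracture toughness of the master curve (ASTM E1921 form)
K_{Jc(\mathrm{med})}(T) = 30 + 70\,\exp\!\bigl[0.019\,(T - T_{0})\bigr]
```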
Zhang, Harrison G; Ying, Gui-Shuang
2018-02-09
The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used the paired comparison. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
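One accepted remedy for the problem described above is to analyse both eyes while modelling the intereye correlation, for example with generalized estimating equations clustered on patient. A minimal sketch on synthetic data; the outcome, effect sizes and variable names are illustrative assumptions:

```python
# GEE with an exchangeable working correlation, clustering the two eyes
# within each patient, so both eyes contribute without being treated as
# independent observations. Data below are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100                                        # patients
patient = np.repeat(np.arange(n), 2)           # two eyes per patient
u = np.repeat(rng.normal(0, 1, n), 2)          # shared within-patient effect
treated = rng.integers(0, 2, 2 * n)
iop = 16 + 2.0 * treated + u + rng.normal(0, 1, 2 * n)  # e.g. IOP in mmHg

X = sm.add_constant(treated.astype(float))
model = sm.GEE(iop, X, groups=patient,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```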
Dahabreh, Issa J.; Sheldrick, Radley C.; Paulus, Jessica K.; Chung, Mei; Varvarigou, Vasileia; Jafri, Haseeb; Rassen, Jeremy A.; Trikalinos, Thomas A.; Kitsios, Georgios D.
2012-01-01
Aims Randomized controlled trials (RCTs) are the gold standard for assessing the efficacy of therapeutic interventions because randomization protects from biases inherent in observational studies. Propensity score (PS) methods, proposed as a potential solution to confounding of the treatment–outcome association, are widely used in observational studies of therapeutic interventions for acute coronary syndromes (ACS). We aimed to systematically assess agreement between observational studies using PS methods and RCTs on therapeutic interventions for ACS. Methods and results We searched for observational studies of interventions for ACS that used PS methods to estimate treatment effects on short- or long-term mortality. Using a standardized algorithm, we matched observational studies to RCTs based on patients’ characteristics, interventions, and outcomes (‘topics’), and we compared estimates of treatment effect between the two designs. When multiple observational studies or RCTs were identified for the same topic, we performed a meta-analysis and used the summary relative risk for comparisons. We matched 21 observational studies investigating 17 distinct clinical topics to 63 RCTs (median = 3 RCTs per observational study) for short-term (7 topics) and long-term (10 topics) mortality. Estimates from PS analyses differed statistically significantly from randomized evidence in two instances; however, observational studies reported more extreme beneficial treatment effects compared with RCTs in 13 of 17 instances (P = 0.049). Sensitivity analyses limited to large RCTs, and using alternative meta-analysis models yielded similar results. Conclusion For the treatment of ACS, observational studies using PS methods produce treatment effect estimates that are of more extreme magnitude compared with those from RCTs, although the differences are rarely statistically significant. PMID:22711757
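A minimal sketch of the kind of propensity-score analysis compared above: fit a treatment model from covariates, form inverse-probability-of-treatment weights, and estimate the weighted treatment-outcome association. Weighting is only one of several PS methods (matching and stratification are others), and all data, covariates and effect sizes below are synthetic assumptions:

```python
# Propensity-score analysis via inverse-probability-of-treatment weighting
# (IPTW): model treatment from covariates, weight, then fit the outcome model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
age = rng.normal(65, 10, n)                    # synthetic confounders
severity = rng.normal(0, 1, n)
X = sm.add_constant(np.column_stack([age, severity]))

p_treat = 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.5 * severity)))
treated = rng.binomial(1, p_treat)             # confounded treatment assignment

# Synthetic outcome: treatment lowers mortality risk (log-odds -0.5)
p_death = 1 / (1 + np.exp(-(-2 + 0.6 * severity - 0.5 * treated)))
died = rng.binomial(1, p_death)

ps = sm.Logit(treated, X).fit(disp=0).predict(X)   # propensity scores
w = treated / ps + (1 - treated) / (1 - ps)        # IPTW weights

# Weighted outcome model; freq_weights used here as a simple stand-in for
# survey-style weighting (robust variance estimation omitted for brevity).
outcome_model = sm.GLM(died, sm.add_constant(treated.astype(float)),
                       family=sm.families.Binomial(),
                       freq_weights=w)
print(outcome_model.fit().summary())
```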
Electronic Communication of Protected Health Information: Privacy, Security, and HIPAA Compliance.
Drolet, Brian C; Marwaha, Jayson S; Hyatt, Brad; Blazar, Phillip E; Lifchez, Scott D
2017-06-01
Technology has enhanced modern health care delivery, particularly through accessibility to health information and ease of communication with tools like mobile device messaging (texting). However, text messaging has created new risks for breach of protected health information (PHI). In the current study, we sought to evaluate hand surgeons' knowledge of and compliance with privacy and security standards for electronic communication by text message. A cross-sectional survey of the American Society for Surgery of the Hand membership was conducted in March and April 2016. Descriptive and inferential statistical analyses were performed on composite results as well as relevant subgroup analyses. A total of 409 responses were obtained (11% response rate). Although 63% of surgeons reported that they believe that text messaging does not meet Health Insurance Portability and Accountability Act of 1996 security standards, only 37% reported they do not use text messages to communicate PHI. Younger surgeons and respondents who believed that their texting was compliant were statistically significantly more likely to report messaging of PHI (odds ratio, 1.59 and 1.22, respectively). A majority of hand surgeons in this study reported the use of text messaging to communicate PHI. Of note, neither the Health Insurance Portability and Accountability Act of 1996 statute nor the US Department of Health and Human Services specifically prohibits this form of electronic communication. To be compliant, surgeons, practices, and institutions need to take reasonable security precautions to prevent breach of privacy with electronic communication. Communication of clinical information by text message is not prohibited under the Health Insurance Portability and Accountability Act of 1996, but surgeons should use appropriate safeguards to prevent breach when using this form of communication. Copyright © 2017 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Tenório-Daussat, Carolina Lyrio; Resende, Marcia Carolina Martinho; Ziolli, Roberta L; Hauser-Davis, Rachel Ann; Schaumloffel, Dirk; Saint'Pierre, Tatiana D
2014-03-01
Fish bile metallothioneins (MT) have recently been reported as biomarkers for environmental metal contamination; however, no studies regarding standardizations for their purification are available. Therefore, different procedures (varying centrifugation times and heat-treatment temperatures) and reducing agents (DTT, β-mercaptoethanol and TCEP) were applied to purify MT isolated from fish (Oreochromis niloticus) bile and liver. Liver was also analyzed, since these two organs are intrinsically connected and show the same trend regarding MT expression. Spectrophotometric analyses were used to quantify the resulting MT samples, and SDS-PAGE gels were used to qualitatively assess the results of the different procedures. Each procedure was statistically evaluated, and a multivariate statistical analysis was then applied. A response surface methodology was also applied for bile samples, in order to further evaluate the responses for this matrix. Heat treatment effectively removes most undesired proteins from the samples; however, results indicate that temperatures above 70 °C are not efficient, since they also remove MTs from both bile and liver samples. Our results also indicate that the centrifugation times described in the literature can be decreased in order to analyze more samples in the same timeframe, which is of importance in environmental monitoring contexts where samples are usually numerous. In an environmental context, biliary MT was lower than liver MT, as expected, since liver accumulates MT with slower detoxification rates than bile, which is released from the gallbladder during feeding and then diluted by water. Therefore, bile MT seems to be more adequate in environmental monitoring scopes regarding recent exposure to xenobiotics that may affect the proteomic and metalloproteomic expression of this biological matrix. Copyright © 2013 Elsevier B.V. All rights reserved.
Cigarette characteristic and emission variations across high-, middle- and low-income countries.
O'Connor, R J; Wilkins, K J; Caruso, R V; Cummings, K M; Kozlowski, L T
2010-12-01
The public health burden of tobacco use is shifting to the developing world, and the tobacco industry may apply some of its successful marketing tactics, such as allaying health concerns with product modifications. This study used standard smoking machine tests to examine the extent to which the industry is introducing engineering features that reduce tar and nicotine to cigarettes sold in middle- and low-income countries. Multicountry observational study. Cigarettes from 10 different countries were purchased in 2005 and 2007 with low-, middle- and high-income countries identified using the World Bank's per capita gross national income metric. Physical measurements of each brand were tested, and tobacco moisture and weight, paper porosity, filter ventilation and pressure drop were analysed. Tar, nicotine and carbon monoxide emission levels were determined for each brand using International Organization for Standardization and Canadian Intensive methods. Statistical analyses were performed using Statistical Package for the Social Sciences. Among cigarette brands with filters, more brands were ventilated in high-income countries compared with middle- and low-income countries [χ²(4)=25.92, P<0.001]. Low-income brands differed from high- and middle-income brands in engineering features such as filter density, ventilation and paper porosity, while tobacco weight and density measures separated the middle- and high-income groups. Smoke emissions differed across income groups, but these differences were largely negated when one accounted for design features. This study showed that as a country's income level increases, cigarettes become more highly engineered and the emissions levels decrease. In order to reduce the burden of tobacco-related disease and further effective product regulation, health officials must understand cigarette design and function within and between countries. Copyright © 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Bartels, E M; Folmer, V N; Bliddal, H; Altman, R D; Juhl, C; Tarp, S; Zhang, W; Christensen, R
2015-01-01
The aim of this study was to assess the clinical efficacy and safety of oral ginger for symptomatic treatment of osteoarthritis (OA) by carrying out a systematic literature search followed by meta-analyses on selected studies. Inclusion criteria were randomized controlled trials (RCTs) comparing oral ginger treatment with placebo in OA patients aged >18 years. Outcomes were reduction in pain and reduction in disability. Harm was assessed as withdrawals due to adverse events. The efficacy effect size was estimated using Hedges' standardized mean difference (SMD), and safety by risk ratio (RR). Standard random-effects meta-analysis was used, and inconsistency was evaluated by the I² index. Out of 122 retrieved references, 117 were discarded, leaving five trials (593 patients) for meta-analyses. The majority reported relevant randomization procedures and blinding, but an inadequate intention-to-treat (ITT) analysis. Following ginger intake, a statistically significant pain reduction, SMD = -0.30 (95% CI: -0.50 to -0.09; P = 0.005), with a low degree of inconsistency among trials (I² = 27%), and a statistically significant reduction in disability, SMD = -0.22 (95% CI: -0.39 to -0.04; P = 0.01; I² = 0%), were seen, both in favor of ginger. Patients given ginger were more than twice as likely to discontinue treatment compared to placebo (RR = 2.33; 95% CI: 1.04 to 5.22; P = 0.04; I² = 0%). Ginger was modestly efficacious and reasonably safe for treatment of OA. We judged the evidence to be of moderate quality, based on the small number of participants and inadequate ITT populations. Prospero: CRD42011001777. Copyright © 2014 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
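As an illustration of the random-effects machinery used in such reviews, here is a minimal Python sketch of DerSimonian-Laird pooling with Cochran's Q and the I² index. The per-trial SMDs and variances below are hypothetical, not the data from this review.

```python
import numpy as np

def random_effects_pool(yi, vi):
    """DerSimonian-Laird random-effects pooling of per-trial effects.

    yi : per-trial effect estimates (e.g. Hedges' g SMDs)
    vi : their within-trial variances
    """
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    wi = 1.0 / vi                                   # fixed-effect weights
    ybar_fe = np.sum(wi * yi) / np.sum(wi)
    Q = np.sum(wi * (yi - ybar_fe) ** 2)            # Cochran's Q
    df = len(yi) - 1
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    # DL moment estimate of the between-trial variance tau^2
    tau2 = max(0.0, (Q - df) / (np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)))
    wi_re = 1.0 / (vi + tau2)                       # random-effects weights
    pooled = np.sum(wi_re * yi) / np.sum(wi_re)
    se = np.sqrt(1.0 / np.sum(wi_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, I2

# hypothetical per-trial SMDs and variances for five trials
smd = [-0.45, -0.10, -0.35, -0.25, -0.20]
var = [0.040, 0.025, 0.060, 0.030, 0.045]
print(random_effects_pool(smd, var))  # pooled SMD, 95% CI, I2 (%)
```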
Using R-Project for Free Statistical Analysis in Extension Research
ERIC Educational Resources Information Center
Mangiafico, Salvatore S.
2013-01-01
One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…
Development of QC Procedures for Ocean Data Obtained by National Research Projects of Korea
NASA Astrophysics Data System (ADS)
Kim, S. D.; Park, H. M.
2017-12-01
To establish a data management system for ocean data obtained by national research projects of the Ministry of Oceans and Fisheries of Korea, KIOST conducted standardization and development of QC procedures. After reviewing and analyzing the existing international and domestic ocean-data standards and QC procedures, a draft version of the standards and QC procedures was prepared. The proposed standards and QC procedures were reviewed and revised by experts in the field of oceanography and academic societies several times. A technical report was prepared covering the standards for 25 data items and 12 QC procedures for physical, chemical, biological and geological data items. The QC procedure for temperature and salinity data was set up by referring to the manuals published by GTSPP, ARGO and IOOS QARTOD. It consists of 16 QC tests applicable to vertical profile data and time series data obtained in real-time mode and delayed mode. Three regional range tests to inspect annual, seasonal and monthly variations were included in the procedure. Three programs were developed to calculate and provide the upper and lower limits of temperature and salinity at depths from 0 to 1550 m. Temperature and salinity data from the World Ocean Database, ARGO, GTSPP and in-house KIOST data were analysed statistically to calculate regional limits for the Northwest Pacific area. Based on this statistical analysis, the programs calculate regional ranges using the mean and standard deviation on three grid systems (3°, 1° and 0.5°) and provide recommendations. The QC procedures for 12 data items were set up during the first phase of the national program for data management (2012-2015) and are being applied practically to national research projects in the second phase (2016-2019). The QC procedures will be revised by reviewing the results of QC application when the second phase of the data management program is completed.
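A minimal sketch of the kind of climatological regional range test described here, assuming a precomputed mean and standard deviation per grid cell and depth; the flag scheme, cut-off k and all values are illustrative only, not the KIOST implementation.

```python
import numpy as np

def regional_range_flags(values, clim_mean, clim_std, k=3.0):
    """Flag observations outside the climatological mean +/- k*std.

    Returns simplified IOC-style flags: 1 = good, 4 = fail.
    """
    values = np.asarray(values, float)
    lower, upper = clim_mean - k * clim_std, clim_mean + k * clim_std
    return np.where((values >= lower) & (values <= upper), 1, 4)

# hypothetical monthly climatology for one 1-degree grid cell at 100 m depth
clim_mean, clim_std = 12.4, 0.9          # deg C, e.g. from World Ocean Database
profile_temps = np.array([12.1, 12.6, 15.9, 12.3])
print(regional_range_flags(profile_temps, clim_mean, clim_std))  # [1 1 4 1]
```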
Slonim, Anthony D; Marcin, James P; Turenne, Wendy; Hall, Matt; Joseph, Jill G
2007-12-01
To determine the rates and the patient and institutional characteristics associated with the occurrence of patient safety indicators (PSIs) in hospitalized children, and the degree of statistical difference derived from using three approaches to controlling for institution-level effects. Pediatric Health Information System Dataset consisting of all pediatric discharges (<21 years of age) from 34 academic, freestanding children's hospitals for calendar year 2003. The rates of PSIs were computed for all discharges. The patient and institutional characteristics associated with these PSIs were calculated. The analyses sequentially applied three increasingly conservative methods to control for institution-level effects: robust standard error estimation, a fixed effects model, and a random effects model. The degree of difference from a "base state," which excluded institution-level variables, and between the models was calculated. The effects of these analyses on the interpretation of the PSIs are presented. PSIs are relatively infrequent events in hospitalized children, ranging from 0 per 10,000 (postoperative hip fracture) to 87 per 10,000 (postoperative respiratory failure). Significant variables associated with PSIs included age (neonates), race (Caucasians), payor status (public insurance), severity of illness (extreme), and hospital size (>300 beds), all of which had higher rates of PSIs than their reference groups in the bivariable logistic regression results. The three different approaches to adjusting for institution-level effects demonstrated similarities in both clinical and statistical significance across the models. Institution-level effects can be appropriately controlled for by using a variety of methods in the analyses of administrative data. Whenever possible, resource-conservative methods should be used in the analyses, especially if clinical implications are minimal.
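Of the three approaches named above, cluster-robust (sandwich) standard error estimation is the simplest to sketch. The following Python example implements the basic CR0 sandwich for a linear model with simulated hospital-clustered data; the study itself used logistic regression, so this is an analogy of the technique rather than a reproduction of the analysis, and all data are hypothetical.

```python
import numpy as np

def cluster_robust_se(X, y, groups):
    """OLS point estimates with cluster-robust (CR0 sandwich) standard errors.

    X: (n, p) design matrix including intercept; y: (n,); groups: (n,) cluster ids.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        sg = X[groups == g].T @ resid[groups == g]   # cluster score
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv                     # sandwich variance
    return beta, np.sqrt(np.diag(V))

# hypothetical data: 5 hospitals, 40 discharges each, with a hospital effect
rng = np.random.default_rng(0)
hosp = np.repeat(np.arange(5), 40)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200) + rng.normal(size=5)[hosp]
X = np.column_stack([np.ones(200), x])
print(cluster_robust_se(X, y, hosp))
```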
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
Atmospheric effects on cluster analyses. [for remote sensing application
NASA Technical Reports Server (NTRS)
Kiang, R. K.
1979-01-01
Ground reflected radiance, from which information is extracted through techniques of cluster analyses for remote sensing application, is altered by the atmosphere when it reaches the satellite. Therefore it is essential to understand the effects of the atmosphere on Landsat measurements, cluster characteristics and analysis accuracy. A doubling model is employed to compute the effective reflectivity, observed from the satellite, as a function of ground reflectivity, solar zenith angle and aerosol optical thickness for a standard atmosphere. The relation between the effective reflectivity and ground reflectivity is approximately linear. It is shown that for a horizontally homogeneous atmosphere, the classification statistics from a maximum likelihood classifier remain unchanged under these transforms. If inhomogeneity is present, the divergence between clusters is reduced, and correlation between spectral bands increases. Radiance reflected by the background area surrounding the target may also reach the satellite. The influence of background reflectivity on effective reflectivity is discussed.
Asymmetric correlation matrices: an analysis of financial data
NASA Astrophysics Data System (ADS)
Livan, G.; Rebecchi, L.
2012-06-01
We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and allow the spectral analyses usually performed on standard Pearson correlation matrices to be extended to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between the two markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
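A minimal numpy sketch of the object under study: the non-symmetric correlation matrix between two distinct sets of standardized series and its complex eigenvalue spectrum. The data here are pure noise, which gives the random-matrix baseline against which genuine cross-market correlations would be compared; the radius comparison is a rough heuristic, not the authors' estimator.

```python
import numpy as np

def cross_correlation_spectrum(X, Y):
    """Complex eigenvalues of the non-symmetric correlation matrix between
    two equally sized sets of series. X, Y: (T, N) arrays."""
    Xs = (X - X.mean(0)) / X.std(0)   # standardize each series
    Ys = (Y - Y.mean(0)) / Y.std(0)
    C = Xs.T @ Ys / X.shape[0]        # N x N, generally non-symmetric
    return np.linalg.eigvals(C)       # complex in general

# hypothetical "two markets" of pure noise: the null case
rng = np.random.default_rng(1)
T, N = 1000, 50
eigs = cross_correlation_spectrum(rng.normal(size=(T, N)),
                                  rng.normal(size=(T, N)))
# for uncorrelated data the eigenvalues fill a disk of radius ~ sqrt(N/T)
print(np.abs(eigs).max(), np.sqrt(N / T))
```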
Family Caregiver Role and Burden Related to Gender and Family Relationships
Friedemann, Marie-Luise; Buckwalter, Kathleen C.
2015-01-01
This study described and contrasted family caregivers and explored the effect of gender and family relationship on the caregiver’s role perception, workload, burden, and family help. Home care agencies and community organizations assisted with the recruitment of 533 multicultural, predominantly Latino caregivers who were interviewed at home. The Caregiver Identity Theory guided the study. Survey instruments were standardized tools or were constructed and pretested for this study. Descriptive statistics and t-test analyses assisted in describing the sample and multivariate analyses were used to contrast the caregiver groups. Findings suggested a gendered approach to self-appraisal and coping. Men in this predominantly Latino and Caribbean sample felt less burden and depression than women who believed caregiving is a female duty. Family nurses should pay attention to the most vulnerable groups: older spouses resistant to using family and community resources and hard-working female adult children, and assess each family situation individually. PMID:24777069
Analysis of complex environment effect on near-field emission
NASA Astrophysics Data System (ADS)
Ravelo, B.; Lalléchère, S.; Bonnet, P.; Paladian, F.
2014-10-01
This article deals with uncertainty analyses of the electromagnetic compatibility emissions of radiofrequency circuits, based on the near-field/near-field (NF/NF) transform combined with a stochastic approach. Using 2D data corresponding to the electromagnetic (EM) field (X = E or H) scanned in an observation plane placed at a position z0 above the circuit under test (CUT), the X field map was extracted. Uncertainty analyses were then assessed via the statistical moments of the X component. In addition, a stochastic collocation approach was considered, and calculations were applied to the planar EM NF radiated by CUTs such as a Wilkinson power divider and a microstrip line operating at GHz frequencies. After Matlab implementation, the mean and standard deviation were assessed. The present study illustrates how variations of environmental parameters may impact EM fields. The NF uncertainty methodology can be applied to the effects of any physical parameter in a complex environment and is useful for printed circuit board (PCB) design guidelines.
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
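The remedy the paper recommends, mixed effects models for repeated measures, can be sketched with statsmodels. The parasitaemia data below are simulated, and the model (a simple random intercept per host) is the most basic version of the approach, not the specific models fitted in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate repeated measurements: 20 hosts x 10 days, with a host-level
# random intercept that induces within-host correlation of residuals.
rng = np.random.default_rng(2)
n_hosts, n_days = 20, 10
df = pd.DataFrame({
    "host": np.repeat(np.arange(n_hosts), n_days),
    "day": np.tile(np.arange(n_days), n_hosts),
})
host_effect = rng.normal(0, 1.0, n_hosts)
df["log_parasitaemia"] = (2.0 + 0.3 * df["day"]
                          + host_effect[df["host"]]
                          + rng.normal(0, 0.5, len(df)))

# Random-intercept mixed model: fixed effect of day, random effect of host.
# Ignoring the grouping (plain OLS) would understate the day-effect SE here.
model = smf.mixedlm("log_parasitaemia ~ day", df, groups=df["host"])
print(model.fit().summary())
```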
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Marqués-Jiménez, Diego; Calleja-González, Julio; Arratibel, Iñaki; Delextrat, Anne; Terrados, Nicolás
2016-01-01
The aim was to identify benefits of compression garments used for recovery from exercise-induced muscle damage. A computer-based literature search was performed in September 2015 using four online databases: Medline (PubMed), Cochrane, WOS (Web of Science) and Scopus. The analysis of risk of bias was completed in accordance with the Cochrane Collaboration guidelines. Mean differences and 95% confidence intervals were calculated with Hedges' g for continuous outcomes. A random effects meta-analysis model was used. Systematic differences (heterogeneity) were assessed with the I² statistic. Most results obtained had high heterogeneity, so their interpretation should be cautious. Our findings showed that creatine kinase (standard mean difference=-0.02, 9 studies) was unaffected when using compression garments for recovery purposes. In contrast, blood lactate concentration was increased (standard mean difference=0.98, 5 studies). Applying compression reduced lactate dehydrogenase (standard mean difference=-0.52, 2 studies), muscle swelling (standard mean difference=-0.73, 5 studies) and perceptual measurements (standard mean difference=-0.43, 15 studies). Analyses of power (standard mean difference=1.63, 5 studies) and strength (standard mean difference=1.18, 8 studies) indicate faster recovery of muscle function after exercise. These results suggest that the application of compression clothing may aid in the recovery of exercise-induced muscle damage, although the findings need corroboration. Copyright © 2015 Elsevier Inc. All rights reserved.
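Hedges' g, the effect size used above, is Cohen's d with a small-sample bias correction. A short sketch with invented strength values for illustration:

```python
import numpy as np

def hedges_g(treatment, control):
    """Hedges' g: bias-corrected standardized mean difference."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    sp = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                 / (nt + nc - 2))                 # pooled SD
    d = (t.mean() - c.mean()) / sp                # Cohen's d
    J = 1.0 - 3.0 / (4.0 * (nt + nc - 2) - 1.0)   # small-sample correction
    return J * d

# hypothetical 24 h post-exercise strength (N), with vs. without compression
print(hedges_g([412, 398, 430, 405, 441, 417],
               [375, 390, 362, 401, 380, 371]))
```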
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
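A stripped-down sketch of the simulation logic described above: simulate Poisson daily counts with a specified log-linear pollutant association, refit the model many times, and tally significant slopes. A real analysis would add spline terms for time trends and meteorology; the series and parameter values here are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(pollutant, base_mean, log_rr_per_unit,
                    n_sims=1000, alpha=0.05, seed=0):
    """Estimate power for a Poisson time-series analysis by simulation."""
    rng = np.random.default_rng(seed)
    X = sm.add_constant(pollutant)
    mu = base_mean * np.exp(log_rr_per_unit * pollutant)  # true daily means
    hits = 0
    for _ in range(n_sims):
        y = rng.poisson(mu)                               # simulated counts
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += fit.pvalues[1] < alpha                    # significant slope?
    return hits / n_sims

# hypothetical one-year daily pollutant series (standardized units)
rng = np.random.default_rng(42)
pollutant = rng.normal(size=365)
print(simulated_power(pollutant, base_mean=40, log_rr_per_unit=0.01))
```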
Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.
2012-01-01
Background Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232
Yan, Shi; Jin, YinZhe; Oh, YongSeok; Choi, YoungJun
2016-06-01
The aim of this study was to assess the effect of exercise on depression in university students. A systematic literature search was conducted in PubMed, EMBASE and the Cochrane Library from their inception through December 10, 2014 to identify relevant articles. The heterogeneity across studies was examined by Cochran's Q statistic and the I2 statistic. Standardized mean difference (SMD) and 95% confidence interval (CI) were pooled to evaluate the effect of exercise on depression. Then, sensitivity and subgroup analyses were performed. In addition, publication bias was assessed by drawing a funnel plot. A total of 352 participants (154 cases and 182 controls) from eight trials were included. Our pooled result showed a significant alleviation of depression after exercise (SMD=-0.50, 95% CI: -0.97 to -0.03, P=0.04) with significant heterogeneity (P=0.003, I2=67%). Sensitivity analyses showed that the pooled result may be unstable. Subgroup analysis indicated that sample size may be a source of heterogeneity. Moreover, no publication bias was observed in this study. Exercise may be an effective therapy for treating depression in university students. However, further clinical studies with strict designs and large samples focused on this specific population are warranted in the future.
Characterising the disintegration properties of tablets in opaque media using texture analysis.
Scheuerle, Rebekah L; Gerrard, Stephen E; Kendall, Richard A; Tuleu, Catherine; Slater, Nigel K H; Mahbubani, Krishnaa T
2015-01-01
Tablet disintegration characterisation is used in pharmaceutical research, development, and quality control. Standard methods used to characterise tablet disintegration are often dependent on visual observation for measurement of disintegration times. This presents a challenge for disintegration studies of tablets in opaque, physiologically relevant media that could be useful for tablet formulation optimisation. This study explored an application of texture analysis disintegration testing, a non-visual, quantitative means of determining the tablet disintegration end point, by analysing the disintegration behaviour of two tablet formulations in opaque media. The disintegration behaviour of one tablet formulation manufactured in-house and of Sybedia Flashtab placebo tablets was characterised in water, bovine milk, and human milk. A novel method is presented to characterise the disintegration process and to quantify the disintegration end points of the tablets in various media using load data generated by a texture analyser probe. The disintegration times in the different media were found to be statistically different (P<0.0001) from one another for both tablet formulations using one-way ANOVA. Using the Tukey post-hoc test, the Sybedia Flashtab placebo tablets were found not to have statistically significantly different disintegration times in human versus bovine milk (adjusted P value 0.1685). Copyright © 2015 Elsevier B.V. All rights reserved.
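For the one-way ANOVA with Tukey post-hoc comparisons reported above, a minimal sketch using scipy and statsmodels; the disintegration times are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical disintegration times (s) for one formulation in three media
water = [31, 29, 34, 30, 32, 28]
bovine_milk = [55, 58, 52, 60, 57, 54]
human_milk = [49, 47, 51, 50, 46, 52]

F, p = f_oneway(water, bovine_milk, human_milk)   # omnibus test
print(f"one-way ANOVA: F = {F:.2f}, P = {p:.2g}")

times = np.concatenate([water, bovine_milk, human_milk])
media = ["water"] * 6 + ["bovine"] * 6 + ["human"] * 6
print(pairwise_tukeyhsd(times, media, alpha=0.05))  # pairwise post-hoc table
```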
MOLSIM: A modular molecular simulation software
Reščič, Jurij
2015-01-01
The modular software MOLSIM for all-atom molecular and coarse-grained simulations is presented with focus on the underlying concepts used. The software possesses four unique features: (1) it is an integrated software for molecular dynamics, Monte Carlo, and Brownian dynamics simulations; (2) simulated objects are constructed in a hierarchical fashion representing atoms, rigid molecules and colloids, flexible chains, hierarchical polymers, and cross-linked networks; (3) long-range interactions involving charges, dipoles and/or anisotropic dipole polarizabilities are handled either with the standard Ewald sum, the smooth particle mesh Ewald sum, or the reaction-field technique; (4) statistical uncertainties are provided for all calculated observables. In addition, MOLSIM supports various statistical ensembles, and several types of simulation cells and boundary conditions are available. Intermolecular interactions comprise tabulated pairwise potentials for speed and uniformity, and many-body interactions involve anisotropic polarizabilities. Intramolecular interactions include bond, angle, and crosslink potentials. A very large set of analyses of static and dynamic properties is provided. The capability of MOLSIM can be extended by user-provided routines controlling, for example, start conditions, intermolecular potentials, and analyses. An extensive set of case studies in the field of soft matter is presented covering colloids, polymers, and crosslinked networks. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25994597
Rasova, Kamila; Prochazkova, Marie; Tintera, Jaroslav; Ibrahim, Ibrahim; Zimova, Denisa; Stetkarova, Ivana
2015-03-01
There is still little scientific evidence for the efficacy of neurofacilitation approaches and their possible influence on brain plasticity and adaptability. In this study, the outcome of a new kind of neurofacilitation approach, motor programme activating therapy (MPAT), was evaluated on the basis of a set of clinical functions and with MRI. Eighteen patients were examined four times with standardized clinical tests and diffusion tensor imaging to monitor changes without therapy, immediately after therapy and 1 month after therapy. Moreover, the strength of effective connectivity was analysed before and after therapy. Patients underwent a 1-h session of MPAT twice a week for 2 months. The data were analysed using nonparametric tests of association. The therapy led to significant improvement in clinical functions, a significant increment of fractional anisotropy and a significant decrement of mean diffusivity, and a decrement of effective connectivity at supplementary motor areas was observed immediately after the therapy. Changes in clinical functions and diffusion tensor images persisted 1 month after completing the programme. No statistically significant changes in clinical functions and no differences in MRI diffusion tensor images were observed without physiotherapy. Positive immediate and long-term effects of MPAT on clinical and brain functions, as well as brain microstructure, were confirmed.
Maternal smoking and newborn sex, birth weight and breastfeeding: a population-based study.
Timur Taşhan, Sermin; Hotun Sahin, Nevin; Omaç Sönmez, Mehtap
2017-11-01
Today, it is acknowledged that smoking during pregnancy and/or the postnatal period poses significant risks to the foetus and newborn child. This research examines the relationship between smoking only postnatally, or both during pregnancy and postnatally, and the newborn's sex, birth weight and breastfeeding. A total of 664 women from five randomly selected primary healthcare centres were included in the research between 20 February 2010 and 20 July 2010. Statistical analyses were performed with SPSS for Windows 19.0 (Statistical Package for the Social Sciences). Data were described as means, standard deviations and percentages; Chi-square tests and backward stepwise logistic regression were used for analysis. It was found that the percentage of smoking women with daughters is 2.5 times higher than that of women with sons. Women who smoke are 3.9 times more likely to start feeding their baby supplementary infant foods at 4 months or earlier than those who do not smoke. Finally, the risk of a birth weight under 2500 g is 3.8 times higher for maternal smokers. This study suggests that women who expect a girl smoke more heavily than those who expect a boy. The birth weight of maternal smokers' newborns is lower. Women who smoke while breastfeeding start feeding their babies supplementary infant foods at an earlier age.
Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft
NASA Technical Reports Server (NTRS)
Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.
1987-01-01
Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distributions of differences in turbulence measured between probes, and auto- and two-point spatial correlations and spectra. Procedures associated with the calculation of two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel-type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.
Routine sampling and the control of Legionella spp. in cooling tower water systems.
Bentham, R H
2000-10-01
Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses
Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah
2015-01-01
Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
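The Monte Carlo piece of such an analysis is straightforward to sketch: draw component intervals from right-skewed distributions, sum them, and compare the 75th percentile and standard deviation under a policy scenario. All distributions and the 45-minute cap below are hypothetical, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # simulated ground transports

# hypothetical component intervals (minutes), log-normal to reflect skew
dispatch    = rng.lognormal(mean=2.3, sigma=0.5, size=N)
travel_out  = rng.lognormal(mean=3.4, sigma=0.4, size=N)
at_sending  = rng.lognormal(mean=3.7, sigma=0.6, size=N)
travel_back = rng.lognormal(mean=3.4, sigma=0.4, size=N)

def summarize(total, label):
    print(f"{label}: 75th pct = {np.percentile(total, 75):.0f} min, "
          f"SD = {total.std():.0f} min")

summarize(dispatch + travel_out + at_sending + travel_back, "baseline")

# policy scenario: cap time spent at the sending hospital at 45 minutes
capped = np.minimum(at_sending, 45.0)
summarize(dispatch + travel_out + capped + travel_back, "capped scene time")
```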
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Gomez, Jose Alfonso
2016-04-01
Standardization is the process of developing common conventions or procedures to facilitate the communication, use, comparison and exchange of products or information among different parties. It has been a useful tool in different fields, from industry to statistics, for technical, economic and social reasons. In science the need for standardization has been recognised in the definition of methods as well as in publication formats. With respect to gully erosion, a number of initiatives have been carried out to propose common methodologies, for instance, for gully delineation (Castillo et al., 2014) and geometrical measurements (Casalí et al., 2015). The main aims of this work are: 1) to examine previous proposals in the gully erosion literature involving standardization processes; 2) to contribute new approaches to improve the homogeneity of methodologies and presentation of results for better communication among the gully erosion community. For this purpose, we evaluated the basic information provided on environmental factors, discussed the delineation and measurement procedures proposed in previous works and, finally, we analysed statistically the severity of degradation levels derived from different indicators at the world scale. As a result, we present suggestions intended to serve as guidance for survey design as well as for the interpretation of vulnerability levels and degradation rates in future gully erosion studies. References Casalí, J., Giménez, R., and Campo-Bescós, M. A.: Gully geometry: what are we measuring?, SOIL, 1, 509-513, doi:10.5194/soil-1-509-2015, 2015. Castillo C., Taguas E. V., Zarco-Tejada P., James M. R., and Gómez J. A. (2014), The normalized topographic method: an automated procedure for gully mapping using GIS, Earth Surf. Process. Landforms, 39, 2002-2015, doi: 10.1002/esp.3595
BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.
Huang, Hailiang; Tata, Sandeep; Prill, Robert J
2013-01-01
Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
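The empirical p-value calculation that BlueSNP distributes across a Hadoop cluster reduces, for a single variant on a single machine, to a permutation test like the following numpy sketch (hypothetical genotype-phenotype data; this is not the BlueSNP API).

```python
import numpy as np

def empirical_p(genotype, phenotype, n_perm=10_000, seed=0):
    """Two-sided empirical p-value for a genotype-phenotype association,
    estimated by permuting phenotype labels."""
    rng = np.random.default_rng(seed)
    g = genotype - genotype.mean()
    y = phenotype - phenotype.mean()
    observed = abs(g @ y)                       # proportional to |correlation|
    exceed = 0
    for _ in range(n_perm):
        exceed += abs(g @ rng.permutation(y)) >= observed
    return (exceed + 1) / (n_perm + 1)          # add-one correction

# hypothetical data: 500 individuals, additive genotype coding 0/1/2
rng = np.random.default_rng(3)
geno = rng.integers(0, 3, size=500).astype(float)
pheno = 0.15 * geno + rng.normal(size=500)
print(empirical_p(geno, pheno))
```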
Moulding techniques in lipstick manufacture: a comparative evaluation.
Dweck, A C; Burnham, C A
1980-06-01
Synopsis This paper examines two methods of lipstick bulk manufacture: one via a direct method and the other via stock concentrates. The paper continues with a comparison of two manufactured bulks moulded in three different ways - first by split moulding, secondly by Rotamoulding, and finally by Ejectoret moulding. Full consideration is paid to time, labour and cost standards of each approach and the resultant moulding examined using some novel physical testing methods. The results of these tests are statistically analysed. Finally, on the basis of the gathered data and photomicrographical work a theoretical lipstick structure is proposed by which the results may be explained.
NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES
He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.
2017-01-01
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
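The complete-case baseline that the paper improves upon can be sketched with scipy's Kruskal-Wallis test applied per response after deleting partially observed cases; the multivariate statistic and the proposed missing-data extension are not reproduced here, and the data are simulated. The printout of retained cases illustrates the information loss the authors address.

```python
import numpy as np
from scipy.stats import kruskal

# hypothetical: two groups, three correlated responses, some values missing
rng = np.random.default_rng(4)
n = 40
Y = np.column_stack([rng.normal(loc=m, size=2 * n) for m in (0.0, 0.3, 0.1)])
group = np.repeat([0, 1], n)
Y[group == 1] += 0.4                             # shift all responses in group 1
Y[rng.random(Y.shape) < 0.15] = np.nan           # ~15% missing at random

complete = ~np.isnan(Y).any(axis=1)              # complete-case deletion
print(f"cases retained: {complete.sum()} of {len(Y)}")
for j in range(Y.shape[1]):
    H, p = kruskal(Y[complete & (group == 0), j],
                   Y[complete & (group == 1), j])
    print(f"response {j}: Kruskal-Wallis H = {H:.2f}, P = {p:.3f}")
```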
Identification of differentially expressed genes and false discovery rate in microarray studies.
Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi
2007-04-01
To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
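The Benjamini-Hochberg step-up procedure, the standard way of controlling the false discovery rate discussed above, in a short self-contained sketch with simulated p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of discoveries
    controlling the false discovery rate at level q."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m     # BH line: q * k / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                       # reject the k smallest p-values
    return mask

# hypothetical: 10,000 genes, 200 truly differentially expressed
rng = np.random.default_rng(5)
p_null = rng.random(9800)
p_alt = rng.beta(0.1, 10.0, size=200)            # p-values concentrated near 0
discoveries = benjamini_hochberg(np.concatenate([p_alt, p_null]), q=0.05)
print(discoveries.sum(), "genes declared differentially expressed at FDR 5%")
```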
Mortality and employment after in-patient opiate detoxification.
Naderi-Heiden, A; Gleiss, A; Bäcker, C; Bieber, D; Nassan-Agha, H; Kasper, S; Frey, R
2012-05-01
We considered that completed opiate detoxification resulted in increased life expectancy and earning capacity as compared to non-completed detoxification. The cohort study sample included pure opioid or poly-substance addicts admitted for voluntary in-patient detoxification between 1997 and 2004. Of 404 patients, 58.7% completed the detoxification program and 41.3% did not. The Austrian Social Security Institution supplied data on survival and employment records for every single day in the individual observation period between discharge and December 2007. Statistical analyses included the calculation of standardized mortality ratios (SMRs) for the follow-up period of up to 11 years. The SMRs were between 13.5 and 17.9 during the first five years after discharge; thereafter they fell markedly with time. Mortality did not differ statistically significantly between completers and non-completers. The median employment rate was insignificantly higher in completers (12.0%) than in non-completers (5.5%). The odds of being employed were higher in pure opioid addicts than in poly-substance addicts (p=0.003). The assumption that completers of detoxification treatment have a better outcome than non-completers was not confirmed. The decrease in mortality with time elapsed since detoxification is noteworthy. Pure opioid addicts had better employment prospects than poly-substance addicts. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
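The SMR used in such follow-up studies is observed deaths divided by the deaths expected from reference population rates; an exact Poisson confidence interval follows from chi-square quantiles. A sketch with hypothetical counts, not this study's data:

```python
from scipy.stats import chi2

def smr_with_ci(observed, expected, alpha=0.05):
    """Standardized mortality ratio with an exact Poisson confidence interval."""
    smr = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lower, upper

# hypothetical cohort: 27 deaths observed vs 1.8 expected from national rates
print(smr_with_ci(27, 1.8))   # SMR = 15.0 with its 95% CI
```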
da Silva, Luiz Bueno; Coutinho, Antonio Souto; da Costa Eulálio, Eliza Juliana; Soares, Elaine Victor Gonçalves
2012-01-01
The main objective of this study is to evaluate the impact of school furniture and work surface lighting on the body posture of public middle school students from Paraíba (Brazil). The survey was carried out in two public schools, and the target population for the study included 8th grade groups involving a total of 31 students. Brazilian standards for lighting levels, the CEBRACE standards for furniture measurements and the Postural Assessment Software (SAPO) for the postural misalignment assay were adopted for the measurement comparisons. The statistical analysis included parametric and non-parametric correlation analyses. The results show that the students' most affected parts of the body were the spine, the regions of the knees, and the head and neck, with 90% of the students presenting postural misalignment. The lighting levels were usually found to be below 300 lux, under the recommended levels. The statistical analyses show that the more adequate the furniture is for the user, the less the user complains of pain. These results indicate the need for investment in more suitable school furniture and for structural reforms aimed at improving the lighting in the classrooms, to fit the students' profile and reduce their complaints.
1992-10-01
[Front-matter residue from a 1992 footwear test report: lists of tables of summary statistics (N=8 and N=4) and results of statistical analyses for impact tests performed on the forefoot of unworn and worn footwear. The report used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978); the number of tests was later expanded.]
Vedula, S. Swaroop; Li, Tianjing; Dickersin, Kay
2013-01-01
Background Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation. Methods and Findings For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with publications. One author extracted data and another verified, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials, 11 of which were published randomized controlled trials that provided the documents needed for planned comparisons. For three trials, there was disagreement on the number of randomized participants between the research report and the publication. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses). Conclusions Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and of trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of the analyses performed and of which study participants are excluded. PMID:23382656
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the..., Medical Systems Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo...
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
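The experiment-wise Type I error rates reported above follow directly from the number of tests per paper. A minimal sketch, assuming independent tests at alpha = 0.05 (the independence assumption is mine, not the review's):

```python
def experimentwise_error(n_tests, alpha=0.05):
    """Probability of at least one false positive across n independent tests."""
    return 1 - (1 - alpha) ** n_tests

# A paper reporting 31 inferential tests (roughly the median of 10,337/333
# statistics per paper) has a high chance of at least one spurious result:
print(round(experimentwise_error(31), 2))  # ~0.80, within the reported IQR (.26-.80)
```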
Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob
2016-08-01
In many areas of science it is customary to perform many, potentially millions of, tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or sliding windows. However, it is not straightforward to choose grouping criteria, and the results might depend on the criteria chosen. Methods that summarize, or aggregate, test statistics or p-values without relying on a priori criteria are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
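The authors' agnostic aggregation method is not reproduced here, but for contrast, the conventional a-priori-grouped alternative can be sketched: Fisher's method applied over fixed windows of p-values. This illustrates the approach the paper argues against relying on, not the paper's own algorithm:

```python
import numpy as np
from scipy.stats import combine_pvalues

def window_fisher(pvals, window=10):
    """Aggregate p-values over non-overlapping windows with Fisher's method.

    This is the kind of a-priori-grouped aggregation whose results depend on
    the chosen window; shown here for contrast only.
    """
    combined = []
    for start in range(0, len(pvals) - window + 1, window):
        stat, p = combine_pvalues(pvals[start:start + window], method='fisher')
        combined.append(p)
    return np.array(combined)

rng = np.random.default_rng(1)
p = rng.uniform(size=1000)
p[200:220] = rng.uniform(0, 1e-4, size=20)   # a small enriched region
print(window_fisher(p).min())                # the windows covering it stand out
```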
Bruland, Philipp; Dugas, Martin
2017-01-07
Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS. Usually, data are transferred into statistics software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who changed which data, when, and why. Therefore, such systems cannot comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. Migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming at present. Hence, the objectives of this research work were to develop a mapping model, implement a converter between the IBM SPSS format and the CDISC ODM standard, and evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements; study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into the SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the spreadsheet format IBM SPSS and the ODM standard for definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems. Thereby, advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and, in particular, compliance with regulatory requirements, become achievable.
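As an illustration of the kind of mapping S2O performs (this sketch is mine, not the published tool), one SPSS-style variable description can be rendered as a CDISC ODM ItemDef. The OID, Name, DataType and Length attributes and the Description/TranslatedText children are genuine ODM 1.3 structures; the input dictionary is a hypothetical stand-in for SPSS metadata:

```python
import xml.etree.ElementTree as ET

# Hypothetical SPSS-style variable metadata (name, label, type, width)
spss_variable = {"name": "AGE", "label": "Age at admission", "type": "numeric", "width": 3}

# Minimal mapping of an SPSS numeric/string type onto an ODM DataType
ODM_TYPES = {"numeric": "integer", "string": "text"}

def to_odm_itemdef(var):
    """Render one SPSS variable as a CDISC ODM ItemDef element."""
    item = ET.Element("ItemDef", {
        "OID": f"IT.{var['name']}",
        "Name": var["name"],
        "DataType": ODM_TYPES[var["type"]],
        "Length": str(var["width"]),
    })
    desc = ET.SubElement(item, "Description")
    text = ET.SubElement(desc, "TranslatedText", {"xml:lang": "en"})
    text.text = var["label"]
    return item

print(ET.tostring(to_odm_itemdef(spss_variable), encoding="unicode"))
```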
A concept for holistic whole body MRI data analysis, Imiomics
Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel
2017-01-01
Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015
Baume, M; Garrelly, L; Facon, J P; Bouton, S; Fraisse, P O; Yardin, C; Reyrolle, M; Jarraud, S
2013-06-01
The characterization and certification of a Legionella DNA quantitative reference material as a primary measurement standard for Legionella qPCR. Twelve laboratories participated in a collaborative certification campaign. A candidate reference DNA material was analysed through PCR-based limiting dilution assays (LDAs). The validated data were used to statistically assign both a reference value and an associated uncertainty to the reference material. This LDA method allowed for the direct quantification of the amount of Legionella DNA per tube in genomic units (GU) and the determination of the associated uncertainties. This method could be used for the certification of all types of microbiological standards for qPCR. The use of this primary standard will improve the accuracy of Legionella qPCR measurements and the overall consistency of these measurements among different laboratories. The extensive use of this certified reference material (CRM) has been integrated in the French standard NF T90-471 (April 2010) and in ISO Technical Specification 12869 (International Organization for Standardization, 2012) for validating qPCR methods and ensuring the reliability of these methods. © 2013 The Society for Applied Microbiology.
The effect of rare variants on inflation of the test statistics in case-control analyses.
Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P
2015-02-20
The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
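The ratio of observed to expected median test statistic described above is commonly reported as the genomic inflation factor lambda. A minimal sketch, assuming 1-degree-of-freedom chi-square association statistics:

```python
import numpy as np
from scipy.stats import chi2

def inflation_factor(chisq_stats):
    """Genomic inflation factor: observed median test statistic divided by
    the expected median of a 1-df chi-square (about 0.4549)."""
    return np.median(chisq_stats) / chi2.ppf(0.5, df=1)

# Under the null with no population structure, lambda should be close to 1;
# the paper's point is that rare-variant tests can distort this even then.
rng = np.random.default_rng(2)
print(inflation_factor(rng.chisquare(df=1, size=100_000)))  # ~1.0
```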
Heckman, Timothy G; Heckman, Bernadette D; Anderson, Timothy; Lovejoy, Travis I; Markowitz, John C; Shen, Ye; Sutton, Mark
2017-01-01
Human immunodeficiency virus (HIV)-positive rural individuals carry a 1.3-times greater risk of a depressive diagnosis than their urban counterparts. This randomized clinical trial tested whether telephone-administered interpersonal psychotherapy (tele-IPT) acutely relieved depressive symptoms in 132 HIV-infected rural persons from 28 states diagnosed with Diagnostic and Statistical Manual of Mental Disorders-IV major depressive disorder (MDD), partially remitted MDD, or dysthymic disorder. Patients were randomized to either 9 sessions of one-on-one tele-IPT (n = 70) or standard care (SC; n = 62). A series of intent-to-treat (ITT), therapy completer, and sensitivity analyses assessed changes in depressive symptoms, interpersonal problems, and social support from pre- to postintervention. Across all analyses, tele-IPT patients reported significantly lower depressive symptoms and interpersonal problems than SC controls; 22% of tele-IPT patients were categorized as a priori "responders" who reported 50% or higher reductions in depressive symptoms compared to only 4% of SC controls in ITT analyses. Brief tele-IPT acutely decreased depressive symptoms and interpersonal problems in depressed rural people living with HIV.
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check-standard testing and a small number of customer repeat-run sets. The results of check-standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
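A minimal sketch of the three-sigma Shewhart limits underlying such charts (the check-standard numbers below are hypothetical, not Langley data): subgroup means are plotted against a centre line, and points outside the limits flag an out-of-control measurement process.

```python
import numpy as np

def shewhart_limits(subgroup_means, sigma_within, n):
    """Three-sigma Shewhart control limits for subgroup means.

    subgroup_means: mean of each check-standard repeat-run set
    sigma_within:   within-set standard deviation of the measurement process
    n:              runs per set
    """
    center = np.mean(subgroup_means)
    half_width = 3 * sigma_within / np.sqrt(n)
    return center - half_width, center, center + half_width

# Hypothetical check-standard force-coefficient means from repeated tunnel entries
means = [0.5012, 0.5008, 0.5015, 0.5003, 0.5011]
lcl, center, ucl = shewhart_limits(means, sigma_within=0.0012, n=8)
print(f"LCL={lcl:.4f} center={center:.4f} UCL={ucl:.4f}")  # flag means outside (LCL, UCL)
```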
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.
Tipton, Elizabeth; Shuster, Jonathan
2017-10-15
Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
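The single-study building block is straightforward: the bias is the mean difference and the 95% LoA are bias ± 1.96 SD. A minimal sketch with simulated paired readings (the data and the 1.96 multiplier are illustrative assumptions; the paper's meta-analytic pooling across studies and its repeated-measures adjustments are not reproduced here):

```python
import numpy as np

def limits_of_agreement(new, gold):
    """Bland-Altman bias and 95% limits of agreement for one study."""
    d = np.asarray(new) - np.asarray(gold)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings from a single method-comparison study
rng = np.random.default_rng(3)
gold = rng.normal(100, 10, size=60)
new = gold + rng.normal(1.5, 4.0, size=60)   # new method: bias ~1.5, SD ~4
bias, (lo, hi) = limits_of_agreement(new, gold)
print(f"bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f})")
```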
Kim, C H; Lim, J K; Lee, D H; Yoo, S S; Lee, S Y; Cha, S I; Park, J Y; Lee, J
2016-11-01
In an era of increasing concerns about drug resistance, there are limited data on treatment outcomes and recurrence rates after standard short-course anti-tuberculosis treatment in patients with culture-negative tuberculous pleural effusion (TPE). To compare treatment outcomes and recurrence rates between a standard anti-tuberculosis regimen with negative culture and unavailable drug susceptibility testing (DST) data, and a tailored anti-tuberculosis regimen based on individual DST data, we analysed the data of all patients with TPE from the TB registry database at Kyungpook National University Hospital, South Korea, during 2008-2012. The study population was divided into two groups according to regimen. Standard and tailored anti-tuberculosis regimens were administered to 124 and 146 patients with TPE, respectively. Drug resistance was detected in 10% of patients with TPE, about a quarter of whom were multidrug-resistant. The treatment completion rate was not significantly different between the two groups (91% vs. 93%). During a median 20-month follow-up, the recurrence rate was also similar in both groups (1% vs. 1%). Despite limited statistical power, these preliminary results support the hypothesis that immunocompetent patients with culture-negative TPE can be appropriately managed with a standard short-course anti-tuberculosis regimen, even in this era of increasing concerns about drug resistance.
Standardized Symptom Measurement of Individuals with Early Lyme Disease Over Time.
Bechtold, Kathleen T; Rebman, Alison W; Crowder, Lauren A; Johnson-Greene, Doug; Aucott, John N
2017-03-01
Understanding the Lyme disease (LD) literature is challenging given the lack of consistent methodology and standardized measurement of symptoms and the impact on functioning. This prospective study incorporates well-validated measures to capture the symptom picture of individuals with early LD from time of diagnosis through 6-months post-treatment. One hundred seven patients with confirmed early LD and 26 healthy controls were evaluated using standardized instruments for pain, fatigue, depressive symptoms, functional impact, and cognitive functioning. Prior to antibiotic treatment, patients experience notable symptoms of fatigue and pain statistically higher than controls. After treatment, there are no group differences, suggesting that symptoms resolve and that there are no residual cognitive impairments at the level of group analysis. However, using subgroup analyses, some individuals experience persistent symptoms that lead to functional decline and these individuals can be identified immediately post-completion of standard antibiotic treatment using well-validated symptom measures. Overall, the findings suggest that ideally-treated early LD patients recover well and experience symptom resolution over time, though a small subgroup continue to suffer with symptoms that lead to functional decline. The authors discuss use of standardized instruments for identification of individuals who warrant further clinical follow-up. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Prevention, Classifications and Public Health Data Standards, 3311 Toledo Road, Room 2337, Hyattsville, MD...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-16
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337, Hyattsville, Maryland 20782, e...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
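The distinction the study builds on, stated in symbols (a standard result, not taken from the article): dividing by n-1 makes the sample variance an unbiased estimator of the population variance, while the n-divisor version systematically underestimates it.

```latex
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2, \qquad
\mathbb{E}\left[s^2\right] = \sigma^2, \qquad
\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right] = \frac{n-1}{n}\,\sigma^2 < \sigma^2 .
```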
The epidemiology of suicide in Jamaica 2002-2010: rates and patterns.
Abell, W D; James, K; Bridgelal-Nagassar, R; Holder-Nevins, D; Eldemire, H; Thompson, E; Sewell, C
2012-08-01
Suicide is increasingly recognized as a worldwide problem. There is a paucity of quality data pertaining to suicide in developing countries. Epidemiological analysis of suicide data elucidates prevailing patterns that facilitate risk factor identification and the development of germane programmatic responses. This paper analyses temporal variations in suicide rates in Jamaica for the years 2002-2010 and describes the sociodemographic profile of cases and the methods of suicide for the last four years of that period. Data pertaining to suicides were extracted from the records of the police (The Jamaica Constabulary Force). These were summarized and analysed with respect to person, place and time. Population statistics for the computation of rates were obtained from publications of the Statistical Institute of Jamaica. Age-standardized rates were generated for comparison of trends over time. Poisson and binomial probabilities were used to determine statistically significant differences in rates. Suicide rates in Jamaica remained relatively stable over the period reviewed, with a mean overall annual incidence of 2.1 per 100 000 population. Rates for males were significantly higher than those for females; the majority (90.4%) of suicide cases were males. A trend towards higher rates of suicide was generally noted in the 25-34-year and the 75-year-and-over age groups. Hanging was the main method used to commit suicide (77.5%). Age-adjusted rates of suicide indicate no significant changes in Jamaica over the period 2002 to 2010. Continued surveillance of suicide, as well as improved recording of the circumstances surrounding suicides, is recommended to promote greater understanding of suicides, which will ultimately inform intervention strategies.
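Direct age standardization, as used above, weights the age-specific rates by a standard population so that rates are comparable across years despite shifting age structure. A minimal sketch with hypothetical age groups:

```python
import numpy as np

def age_standardized_rate(cases, person_years, std_pop_weights):
    """Directly age-standardized rate per 100,000.

    cases, person_years: arrays by age group for the study population
    std_pop_weights: standard population weights (summing to 1) for the same groups
    """
    age_specific = np.asarray(cases) / np.asarray(person_years)
    return 1e5 * np.sum(age_specific * np.asarray(std_pop_weights))

# Hypothetical three-group example
print(age_standardized_rate(cases=[4, 10, 6],
                            person_years=[300_000, 250_000, 80_000],
                            std_pop_weights=[0.45, 0.40, 0.15]))
```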
Wartberg, Lutz; Kriston, Levente; Kammerl, Rudolf
2017-07-01
Internet Gaming Disorder (IGD) has been included in the current edition of the Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (DSM-5). In the present study, the relationship among social support, friends only known through the Internet, health-related quality of life, and IGD in adolescence was explored for the first time. For this purpose, 1,095 adolescents aged from 12 to 14 years were surveyed with a standardized questionnaire concerning IGD, self-perceived social support, proportion of friends only known through the Internet, and health-related quality of life. The authors conducted unpaired t-tests, a chi-square test, as well as correlation and logistic regression analyses. According to the statistical analyses, adolescents with IGD reported lower self-perceived social support, more friends only known through the Internet, and a lower health-related quality of life compared with the group without IGD. Both in bivariate and multivariate logistic regression models, statistically significant associations between IGD and male gender, a higher proportion of friends only known through the Internet, and a lower health-related quality of life (multivariate model: Nagelkerke's R² = 0.37) were revealed. Lower self-perceived social support was related to IGD in the bivariate model only. In summary, quality of life and social aspects seem to be important factors for IGD in adolescence and therefore should be incorporated in further (longitudinal) studies. The findings of the present survey may provide starting points for the development of prevention and intervention programs for adolescents affected by IGD.
Accounting for standard errors of vision-specific latent trait in regression models.
Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L
2014-07-11
To demonstrate the effectiveness of Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. The effectiveness of this novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously and was compared with the frequently used two-stage "separate-analysis" approach in our simulation study (Rasch analysis followed by traditional statistical analyses without adjustment for SE of latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration SEs of the estimated Rasch-scaled scores. The two models on real data differed for effect size estimations and the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure despite accounting for greater uncertainty due to the latent trait. Patient-reported data, using Rasch analysis techniques, do not take into account the SE of latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casar, B; Carot, I Mendez; Peterlin, P
2016-06-15
Purpose: The aim of this multi-centre study was to analyse the beam hardening effect of the Integral Quality Monitor (IQM) for high-energy photon beams used in radiotherapy with linear accelerators. Generic values for the attenuation coefficient k(IQM) of the IQM system were additionally investigated. Methods: The beam hardening effect of the IQM system was studied for a set of standard nominal photon energies (6 MV-18 MV) and two flattening filter free (FFF) energies (6 MV FFF and 10 MV FFF). PDD curves were measured and analysed for various square radiation fields, with and without the IQM in place. Differences between PDD curves were statistically analysed through comparison of the respective PDD20,10 values. Attenuation coefficients k(IQM) were determined for the same range of photon energies. Results: Statistically significant differences in beam quality were found for all evaluated high-energy photon beams when comparing PDD20,10 values derived from PDD curves with and without the IQM in place. The beam hardening effect was statistically significant with high confidence (p < 0.01) for all analysed photon beams except 15 MV (p = 0.078), although the relative differences in beam quality were minimal, ranging from 0.1% to 0.5%. Attenuation of the IQM system showed negligible dependence on radiation field size. However, a clinically important dependence of k(IQM) on TPR20,10 was found, from 0.941 for 6 MV photon beams to 0.959 for 18 MV photon beams, with the highest uncertainty below 0.006. Values of k(IQM) versus TPR20,10 were tabulated, and a polynomial equation for the determination of k(IQM) is suggested for clinical use. Conclusion: There was no clinically relevant beam hardening when the IQM system was mounted on linear accelerators. Consequently, no additional commissioning regarding the determination of beam quality is needed for the IQM system. Generic values for k(IQM) are proposed and can be used as tray factors for the complete range of examined photon beam energies.
75 FR 53925 - Sea Turtle Conservation; Shrimp and Summer Flounder Trawling Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-02
... the test with 4 turtle captures because of the statistical probability the candidate TED may not achieve the standard (i.e., control TED)...
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent, with medians of 19.2 and 55.4 percent for predicting the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (less than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent, with medians of 98.5 and 90.6 percent for predicting the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey web application StreamStats (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
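A weighted-least-squares fit of this kind can be sketched as follows (illustrative only: the simulated data, the log-space form, and the use of record length as the weight variable are assumptions echoing the report's description, not its actual equations):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: log-transformed flow exceedances at 39 gauged sites
# regressed on log drainage area, with weights based on record length.
rng = np.random.default_rng(4)
log_area = rng.uniform(1, 3, size=39)                # log10 drainage area
log_flow = -1.0 + 1.1 * log_area + rng.normal(0, 0.2, size=39)
record_years = rng.integers(10, 80, size=39)         # longer records weigh more

X = sm.add_constant(log_area)
model = sm.WLS(log_flow, X, weights=record_years).fit()
print(model.params, model.bse)                       # coefficients and standard errors
```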
40 CFR 91.512 - Request for public hearing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...
Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia
2010-05-25
High-quality clinical research requires not only advanced professional knowledge, but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, the proportion of errors/defects in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p<0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), and most of them showed poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p<0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
Eash, David A.; Barnes, Kimberlee K.
2017-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
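The (unweighted) drainage-area ratio method mentioned above can be sketched in a few lines; the exponent parameter, the 0.5-1.4 applicability check and the numbers are illustrative, and the report's weighted variant is not reproduced here:

```python
def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a flow statistic from a streamgage to an ungaged site on the
    same stream by scaling with the drainage-area ratio."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.4:
        raise ValueError("ratio outside the range where the method outperformed "
                         "the regional regression equations in the Iowa test")
    return q_gaged * ratio ** exponent

# Hypothetical: ungaged site draining 180 mi^2 upstream of a gage draining 240 mi^2
print(drainage_area_ratio_estimate(q_gaged=12.0, area_gaged=240, area_ungaged=180))
```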
The History of the AutoChemist®: From Vision to Reality.
Peterson, H E; Jungner, I
2014-05-22
This paper discusses the early history and development of a clinical analyser system in Sweden (AutoChemist, 1965). It highlights the importance of such a high-capacity system both for clinical use and for health care screening. The device was developed to assure the quality of results and to automatically handle orders, store the results in digital form for later statistical analyses, and distribute the results to the patients' physicians by means of the computer integrated with the analyser. The most important outcome of building an analyser able to produce analytical results on a mass scale was the development of a mechanical multi-channel analyser for clinical laboratories that handled discrete sample technology, could prevent carry-over into subsequent test samples, and incorporated computer technology to improve the quality of test results. The AutoChemist could handle 135 samples per hour in an 8-hour shift, with up to 24 possible analysis channels yielding 3,200 results per hour; later versions would double this capacity. Some customers used the equipment 24 hours per day. With a capacity of 3,000 to 6,000 analyses per hour, pneumatically driven pipettes, special units for corrosive liquids or special activities, and an integrated computer, the AutoChemist system was unique and the largest of its kind for many years. Its successor, the AutoChemist PRISMA (PRogrammable Individually Selective Modular Analyzer), was smaller in size but had a higher capacity. Both analysers established new standards of operation for clinical laboratories and encouraged others to use new technologies in building new analysers.
Visualizing statistical significance of disease clusters using cartograms.
Kronenfeld, Barry J; Wong, David W S
2017-05-15
Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, there are no existing guidelines for visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference for aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of the statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
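For an a priori designated region, the standard hypothesis-testing backdrop is a Poisson test of the observed case count against the count expected at the baseline rate; the paper's cartogram-specific formulae relating rate, area and shape are not reproduced here. A minimal sketch with hypothetical figures:

```python
from scipy.stats import poisson

def cluster_p_value(observed_cases, population_at_risk, baseline_rate):
    """One-sided p-value that a region's case count exceeds the baseline,
    treating counts as Poisson with expectation rate * population."""
    expected = baseline_rate * population_at_risk
    return poisson.sf(observed_cases - 1, expected)   # P(X >= observed) under the null

# Hypothetical aggregate region formed on the cartogram:
# 18 cases where 9.2 would be expected at the statewide rate
print(cluster_p_value(18, population_at_risk=230_000, baseline_rate=4e-5))
```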
DISSCO: direct imputation of summary statistics allowing covariates
Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun
2015-01-01
Background: Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD), that assume a multivariate Gaussian distribution for the association summary statistics, have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. Methods: We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We continue to prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). Results: We consider two real-life scenarios where the correlation and partial correlation likely make practical difference: (i) association studies in admixed populations; (ii) association studies in presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9–15.2% for variants with minor allele frequency <5%. Availability and implementation: http://www.unc.edu/~yunmli/DISSCO. Contact: yunli@med.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25810429
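The partial-correlation point at the heart of DISSCO can be illustrated numerically: residualize each genotype on the covariates, then correlate the residuals. The sketch below (mine, not the DISSCO code) simulates an ancestry covariate that inflates the plain genotype correlation relative to the partial correlation:

```python
import numpy as np

def partial_corr(g1, g2, covariates):
    """Partial correlation of two genotype vectors controlling for covariates,
    computed by correlating the residuals from least-squares projections."""
    X = np.column_stack([np.ones(len(g1)), covariates])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r1, r2 = resid(np.asarray(g1, float)), resid(np.asarray(g2, float))
    return (r1 @ r2) / np.sqrt((r1 @ r1) * (r2 @ r2))

# An ancestry covariate that drives allele frequencies at both markers makes
# the plain genotype correlation overstate the correlation after adjustment:
rng = np.random.default_rng(5)
ancestry = rng.normal(size=2000)
g1 = rng.binomial(2, 1 / (1 + np.exp(-ancestry)))   # frequency varies with ancestry
g2 = rng.binomial(2, 1 / (1 + np.exp(-ancestry)))
print(np.corrcoef(g1, g2)[0, 1], partial_corr(g1, g2, ancestry))
```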
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
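A minimal sketch of the cluster-bootstrap idea, using the lifelines package for the Cox fit (the data simulation, column names and replicate count are illustrative assumptions; the paper's exact procedure, including its two-step variant, is not reproduced):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col, n_boot=200, seed=0):
    """Cluster-bootstrap standard error of the Cox coefficient for 'x':
    resample whole clusters with replacement, refit, take the SD across fits."""
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    coefs = []
    for _ in range(n_boot):
        sampled = rng.choice(clusters, size=len(clusters), replace=True)
        boot = pd.concat([df[df[cluster_col] == c] for c in sampled], ignore_index=True)
        cph = CoxPHFitter()
        cph.fit(boot.drop(columns=cluster_col), duration_col="time", event_col="event")
        coefs.append(cph.params_["x"])
    return np.std(coefs, ddof=1)

# Simulated clustered data: 40 clusters whose shared frailty is ignored by the Cox model
rng = np.random.default_rng(1)
rows = []
for c in range(40):
    frailty = rng.normal()
    for _ in range(10):
        x = rng.normal()
        t = rng.exponential(np.exp(-(0.5 * x + frailty)))
        rows.append({"cluster": c, "x": x, "time": t, "event": 1})
df = pd.DataFrame(rows)
print(cluster_bootstrap_se(df, "cluster"))
```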
The National Ballistics Imaging Comparison (NBIC) project.
Song, J; Vorburger, T V; Ballou, S; Thompson, R M; Yen, J; Renegar, T B; Zheng, A; Silver, R M; Ols, M
2012-03-10
In response to guidelines issued by the American Society of Crime Laboratory Directors/Laboratory Accreditation Board (ASCLD/LAB-International) to establish traceability and quality assurance in U.S. crime laboratories, a NIST/ATF joint project entitled National Ballistics Imaging Comparison (NBIC) was initiated in 2008. The NBIC project aims to establish a National Traceability and Quality System for ballistics identifications in crime laboratories within the National Integrated Ballistics Information Network (NIBIN) of the U.S. NIST Standard Reference Material (SRM) 2460 bullets and SRM 2461 cartridge cases are used as reference standards. Nineteen ballistics examiners from 13 U.S. crime laboratories participated in this project. Each performed 24 periodic image acquisitions and correlations of the SRM bullets and cartridge cases over the course of a year, except for one examiner who participated only in Phase 1 tests of the SRM cartridge cases. The correlation scores were collected by NIST for statistical analyses, from which control charts and control limits were developed for the proposed Quality System and for promoting future assessments and accreditations of firearm evidence in U.S. forensic laboratories in accordance with the ISO 17025 Standard. Published by Elsevier Ireland Ltd.
Algorithm for Identifying Erroneous Rain-Gauge Readings
NASA Technical Reports Server (NTRS)
Rickman, Doug
2005-01-01
An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
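The brief does not disclose the algorithm's internals; as a generic illustration of the kind of nonparametric outlier rule such an analysis could iterate, each gauge can be compared with the median and interquartile range of the other gauges for the same event (this rule and the readings are my assumptions, not NASA's algorithm):

```python
import numpy as np

def flag_outliers(readings, k=3.0):
    """Nonparametric outlier flag: readings farther than k IQRs from the
    median of the other gauges in the same event are marked suspect."""
    r = np.asarray(readings, float)
    flags = np.zeros(r.size, dtype=bool)
    for i in range(r.size):
        others = np.delete(r, i)
        q1, med, q3 = np.percentile(others, [25, 50, 75])
        iqr = q3 - q1
        flags[i] = abs(r[i] - med) > k * iqr if iqr > 0 else False
    return flags

# One event across nine nearby gauges; the 60.0 reading is flagged
print(flag_outliers([12.1, 10.8, 11.5, 13.0, 60.0, 12.4, 11.1, 12.9, 10.5]))
```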
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets, for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and the number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
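After the per-imputation model fits, multiple-imputation results are conventionally pooled with Rubin's rules: average the estimates, and combine within- and between-imputation variance. A minimal sketch with hypothetical per-voxel numbers:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m imputed-data analyses with Rubin's rules.

    estimates: per-imputation effect estimates for one voxel
    variances: per-imputation squared standard errors
    Returns the pooled estimate and total standard error.
    """
    m = len(estimates)
    qbar = np.mean(estimates)          # pooled point estimate
    w = np.mean(variances)             # within-imputation variance
    b = np.var(estimates, ddof=1)      # between-imputation variance
    total = w + (1 + 1 / m) * b
    return qbar, np.sqrt(total)

# Five imputations of a single voxel's group effect
print(pool_rubin([0.42, 0.39, 0.47, 0.44, 0.40], [0.010, 0.012, 0.011, 0.009, 0.010]))
```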
Müller, Ueli C; Asherson, Philip; Banaschewski, Tobias; Buitelaar, Jan K; Ebstein, Richard P; Eisenberg, Jaques; Gill, Michael; Manor, Iris; Miranda, Ana; Oades, Robert D; Roeyers, Herbert; Rothenberger, Aribert; Sergeant, Joseph A; Sonuga-Barke, Edmund Js; Thompson, Margaret; Faraone, Stephen V; Steinhausen, Hans-Christoph
2011-04-07
The International Multi-centre ADHD Genetics (IMAGE) project with 11 participating centres from 7 European countries and Israel has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with ADHD and 1446 unselected siblings. The aim was to describe and analyse questionnaire data and IQ measures from all probands and siblings. In particular, to investigate the influence of age, gender, family status (proband vs. sibling), informant, and centres on sample homogeneity in psychopathological measures. Conners' Questionnaires, Strengths and Difficulties Questionnaires, and Wechsler Intelligence Scores were used to describe the phenotype of the sample. Data were analysed by use of robust statistical multi-way procedures. Besides main effects of age, gender, informant, and centre, there were considerable interaction effects on questionnaire data. The larger differences between probands and siblings at home than at school may reflect contrast effects in the parents. Furthermore, there were marked gender by status effects on the ADHD symptom ratings with girls scoring one standard deviation higher than boys in the proband sample but lower than boys in the siblings sample. The multi-centre design is another important source of heterogeneity, particularly in the interaction with the family status. To a large extent the centres differed from each other with regard to differences between proband and sibling scores. When ADHD probands are diagnosed by use of fixed symptom counts, the severity of the disorder in the proband sample may markedly differ between boys and girls and across age, particularly in samples with a large age range. A multi-centre design carries the risk of considerable phenotypic differences between centres and, consequently, of additional heterogeneity of the sample even if standardized diagnostic procedures are used. These possible sources of variance should be counteracted in genetic analyses either by using age and gender adjusted diagnostic procedures and regional normative data or by adjusting for design artefacts by use of covariate statistics, by eliminating outliers, or by other methods suitable for reducing heterogeneity.
Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?
Li, Tianjing; Dickersin, Kay
2013-06-01
Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Preiksaitis, J.; Tong, Y.; Pang, X.; Sun, Y.; Tang, L.; Cook, L.; Pounds, S.; Fryer, J.; Caliendo, A. M.
2015-01-01
Quantitative detection of cytomegalovirus (CMV) DNA has become a standard part of care for many groups of immunocompromised patients; recent development of the first WHO international standard for human CMV DNA has raised hopes of reducing interlaboratory variability of results. Commutability of reference material has been shown to be necessary if such material is to reduce variability among laboratories. Here we evaluated the commutability of the WHO standard using 10 different real-time quantitative CMV PCR assays run by eight different laboratories. Test panels, including aliquots of 50 patient samples (40 positive samples and 10 negative samples) and lyophilized CMV standard, were run, with each testing center using its own quantitative calibrators, reagents, and nucleic acid extraction methods. Commutability was assessed both on a pairwise basis and over the entire group of assays, using linear regression and correspondence analyses. Commutability of the WHO material differed among the tests that were evaluated, and these differences appeared to vary depending on the method of statistical analysis used and the cohort of assays included in the analysis. Depending on the methodology used, the WHO material showed poor or absent commutability with up to 50% of assays. Determination of commutability may require a multifaceted approach; the lack of commutability seen when using the WHO standard with several of the assays here suggests that further work is needed to bring us toward true consensus. PMID:26269622
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
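A minimal R illustration of the distinction, using an invented sample:

```r
# SD describes the dispersion of the data; SEM describes the precision of the
# sample mean (the SD of the sampling distribution of the mean).
x   <- c(4.1, 5.3, 6.0, 4.8, 5.5, 5.9, 4.4, 5.2)   # illustrative sample
n   <- length(x)
sdx <- sd(x)              # standard deviation of the data
sem <- sdx / sqrt(n)      # standard error of the mean
c(mean = mean(x), SD = sdx, SEM = sem)
```

As the formula makes plain, the SEM shrinks as n grows while the SD does not, which is why the SEM describes the estimate of the mean rather than the spread of the data.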
2013-01-01
Background Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expected to yield more valid findings. Therefore, we aimed to estimate the LRL of quantitative G6PD detection in healthy term neonates by using statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods This cross-sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD deficiency. Neonates that satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and, for female neonates, a father from the same tribe as the mother. The G6PD activity (units/gram hemoglobin) was measured spectrophotometrically by an automated kit. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in the presence and in the absence of outliers. Results 207 male and 188 female term neonates who had cord blood quantitative G6PD testing met the inclusion criteria. Horn's method detected 20 G6PD values as outliers (8 males and 12 females). Distributions of quantitative cord G6PD values were normal only in the absence of the outliers. The Harris-Boyd method and proportion criteria indicated that combined-gender LRLs were reliable. The combined bootstrap LRL in the presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in the absence of the outliers was 11.0 (95% CI: 10.5-11.3). Conclusion These results contribute to the LRL of quantitative cord G6PD detection in full-term neonates. They are transferable to another laboratory when pre-analytical factors and testing methods are comparable and the IFCC-CLSI requirements of transference are satisfied. We suggest using the LRL estimated in the absence of the outliers, as mislabeling G6PD-deficient neonates as normal is intolerable whereas mislabeling G6PD-normal neonates as deficient is tolerable. PMID:24016342
Hansen, Anne-Sophie K; Madsen, Ida E H; Thorsen, Sannie Vester; Melkevik, Ole; Bjørner, Jakob Bue; Andersen, Ingelise; Rugulies, Reiner
2018-05-01
Most previous prospective studies have examined workplace social capital as a resource of the individual. However, literature suggests that social capital is a collective good. In the present study we examined whether a high level of workplace aggregated social capital (WASC) predicts a decreased risk of individual-level long-term sickness absence (LTSA) in Danish private sector employees. A sample of 2043 employees (aged 18-64 years, 38.5% women) from 260 Danish private-sector companies filled in a questionnaire on workplace social capital and covariates. WASC was calculated by assigning the company-averaged social capital score to all employees of each company. We derived LTSA, defined as sickness absence of more than three weeks, from a national register. We examined if WASC predicted employee LTSA using multilevel survival analyses, while excluding participants with LTSA in the three months preceding baseline. We found no statistically significant association in any of the analyses. The hazard ratio for LTSA in the fully adjusted model was 0.93 (95% CI 0.77-1.13) per one standard deviation increase in WASC. When using WASC as a categorical exposure we found a statistically non-significant tendency towards a decreased risk of LTSA in employees with medium WASC (fully adjusted model: HR 0.78 (95% CI 0.48-1.27)). Post hoc analyses with workplace social capital as a resource of the individual showed similar results. WASC did not predict LTSA in this sample of Danish private-sector employees.
MGAS: a powerful tool for multivariate gene-based genome-wide association analysis.
Van der Sluis, Sophie; Dolan, Conor V; Li, Jiang; Song, Youqiang; Sham, Pak; Posthuma, Danielle; Li, Miao-Xin
2015-04-01
Standard genome-wide association studies, testing the association between one phenotype and a large number of single nucleotide polymorphisms (SNPs), are limited in two ways: (i) traits are often multivariate, and analysis of composite scores entails loss in statistical power, and (ii) gene-based analyses may be preferred, e.g. to decrease the multiple testing problem. Here we present a new method, multivariate gene-based association test by extended Simes procedure (MGAS), that allows gene-based testing of multivariate phenotypes in unrelated individuals. Through extensive simulation, we show that under most trait-generating genotype-phenotype models MGAS has superior statistical power to detect associated genes compared with gene-based analyses of univariate phenotypic composite scores (i.e. GATES, multiple regression), and multivariate analysis of variance (MANOVA). Re-analysis of metabolic data revealed 32 False Discovery Rate controlled genome-wide significant genes, and 12 regions harboring multiple genes; of these 44 regions, 30 were not reported in the original analysis. MGAS allows researchers to conduct their multivariate gene-based analyses efficiently, and without the loss of power that is often associated with an incorrectly specified genotype-phenotype model. MGAS is freely available in KGG v3.0 (http://statgenpro.psychiatry.hku.hk/limx/kgg/download.php). Access to the metabolic dataset can be requested at dbGaP (https://dbgap.ncbi.nlm.nih.gov/). The R-simulation code is available from http://ctglab.nl/people/sophie_van_der_sluis. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1
DOT National Transportation Integrated Search
1978-02-01
Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...
40 CFR 90.712 - Request for public hearing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...
Bias and inference from misspecified mixed‐effect models in stepped wedge trial analysis
Fielding, Katherine L.; Davey, Calum; Aiken, Alexander M.; Hargreaves, James R.; Hayes, Richard J.
2017-01-01
Many stepped wedge trials (SWTs) are analysed by using a mixed‐effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common‐to‐all or varied‐between clusters. Data were analysed with the standard model or with additional random effects for period effect or intervention effect. In a second simulation study, we explored the weight given to within‐cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within‐cluster comparisons in the standard model. In the SWTs simulated here, mixed‐effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within‐cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28556355
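A hedged sketch of the two model specifications in R with lme4, on simulated data; all variable names and effect sizes are illustrative, not the simulation design of the paper.

```r
library(lme4)

# Simulated stepped wedge data (names and values are illustrative).
set.seed(1)
swt <- expand.grid(cluster = factor(1:9), period = 1:3, subject = 1:10)
swt$trt <- as.integer((as.integer(swt$cluster) <= 3 & swt$period >= 2) |
                      (as.integer(swt$cluster) >  3 & swt$period == 3))
swt$y <- 0.5 * swt$trt + rnorm(9)[swt$cluster] + rnorm(nrow(swt))

# "Standard" model: random cluster intercept, fixed period and intervention effects.
m_std <- lmer(y ~ trt + factor(period) + (1 | cluster), data = swt)

# Recommended extension: a random cluster-by-period effect relaxes the
# assumption that period effects are common to all clusters.
m_rp <- lmer(y ~ trt + factor(period) + (1 | cluster) + (1 | cluster:period),
             data = swt)

fixef(m_std)["trt"]   # intervention effect under each specification
fixef(m_rp)["trt"]
```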
New Standards Require Teaching More Statistics: Are Preservice Secondary Mathematics Teachers Ready?
ERIC Educational Resources Information Center
Lovett, Jennifer N.; Lee, Hollylynne S.
2017-01-01
Mathematics teacher education programs often need to respond to changing expectations and standards for K-12 curriculum and accreditation. New standards for high school mathematics in the United States include a strong emphasis in statistics. This article reports results from a mixed methods cross-institutional study examining the preparedness of…
PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.
Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
Groundwater quality in the Upper Susquehanna River Basin, New York, 2009
Reddy, James E.; Risen, Amy J.
2012-01-01
Water samples were collected from 16 production wells and 14 private residential wells in the Upper Susquehanna River Basin from August through December 2009 and were analyzed to characterize the groundwater quality in the basin. Wells at 16 of the sites were completed in sand and gravel aquifers, and 14 were finished in bedrock aquifers. In 2004–2005, six of these wells were sampled in the first Upper Susquehanna River Basin study. Water samples from the 2009 study were analyzed for 10 physical properties and 137 constituents that included nutrients, organic carbon, major inorganic ions, trace elements, radionuclides, pesticides, volatile organic compounds, and 4 types of bacterial analyses. Results of the water-quality analyses are presented in tabular form for individual wells, and summary statistics for specific constituents are presented by aquifer type. The results are compared with Federal and New York State drinking-water standards, which typically are identical. The results indicate that groundwater generally is of acceptable quality, although concentrations of some constituents exceeded at least one drinking-water standard at 28 of the 30 wells. These constituents include: pH, sodium, aluminum, manganese, iron, arsenic, radon-222, residue on evaporation, total and fecal coliform including Escherichia coli and heterotrophic plate count.
Bodner, Todd E.
2017-01-01
Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404
Explorations in Statistics: Standard Deviations and Standard Errors
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2008-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Watson, Kara M.; McHugh, Amy R.
2014-01-01
Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. The current period is representative of the increased development of the last 20 years (1989–2008). The two different land- and water-use conditions were used as surrogates for development to determine whether there have been changes in low-flow statistics as a result of changes in development over time. The State was divided into two low-flow regression regions, the Coastal Plain and the non-coastal region, in order to improve the accuracy of the regression equations. The left-censored parametric survival regression method was used for the analyses to account for streamgages and partial-record stations that had zero flow values for some of the statistics. The average standard error of estimate for the 348 regression equations ranged from 16 to 340 percent. These regression equations and basin characteristics are presented in the U.S. Geological Survey (USGS) StreamStats Web-based geographic information system application. This tool allows users to click on an ungaged site on a stream in New Jersey and get the estimated flow-duration and low-flow frequency statistics. Additionally, the user can click on a streamgage or partial-record station and get the “at-site” streamflow statistics. The low-flow characteristics of a stream ultimately affect the use of the stream by humans. Specific information on the low-flow characteristics of streams is essential to water managers who deal with problems related to municipal and industrial water supply, fish and wildlife conservation, and dilution of wastewater.
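The left-censored parametric survival regression the authors describe can be sketched with the survival package's survreg; the variables below are invented stand-ins, not the USGS model.

```r
# Hedged sketch of a left-censored ("tobit"-style) parametric regression with
# survival::survreg; the data are simulated, not the New Jersey data set.
library(survival)

set.seed(1)
df   <- data.frame(area = runif(60, 5, 200))             # drainage area, sq mi
df$q <- pmax(0, -2 + 0.9 * log(df$area) + rnorm(60, sd = 0.8))  # flows, floor at 0

# type = "left": zero flows are treated as left-censored observations.
fit <- survreg(Surv(q, q > 0, type = "left") ~ log(area),
               data = df, dist = "gaussian")
summary(fit)
```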
Analysis of statistical misconception in terms of statistical reasoning
NASA Astrophysics Data System (ADS)
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because every person has to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. This skill can be developed through various levels of education. However, the skill is often low because many people, students included, assume that statistics is just the ability to count and use formulas, and students still have negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. Taking 65 as the minimum value for standard achievement of course competence, the students' mean values fall below the standard. The results of the misconception study highlight which subtopics should be considered. Based on the assessment results, students' misconceptions occurred in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment
2013-06-01
architecture, there are four tiers: Client (Web Application Clients), Presentation (Web Server), Processing (Application Server), and Data (Database)... organization in each period. These data will be collected for analysis. i) Analyses and Validation: We will perform statistical tests on these data, Pareto ... notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will perform statistical tests on these data, Pareto analyses, and confirmation
Impact of South American heroin on the US heroin market 1993-2004.
Ciccarone, Daniel; Unick, George J; Kraus, Allison
2009-09-01
The past two decades have seen an increase in heroin-related morbidity and mortality in the United States. We report on trends in US heroin retail price and purity, including the effect of entry of Colombian-sourced heroin on the US heroin market. The average standardized price ($/mg-pure) and purity (% by weight) of heroin from 1993 to 2004 were obtained from US Drug Enforcement Agency retail purchase data for 20 metropolitan statistical areas. Univariate statistics, robust Ordinary Least Squares regression and mixed fixed and random effect growth curve models were used to predict the price and purity data in each metropolitan statistical area over time. Over the 12 study years, heroin price decreased 62%. The median percentage of all heroin samples that are of South American origin increased an absolute 7% per year. Multivariate models suggest percent South American heroin is a significant predictor of lower heroin price and higher purity adjusting for time and demographics. These analyses reveal trends to historically low-cost heroin in many US cities. These changes correspond to the entrance into and rapid domination of the US heroin market by Colombian-sourced heroin. The implications of these changes are discussed.
Multivariate model of female black bear habitat use for a Geographic Information System
Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.
1993-01-01
Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
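The core computation is available in base R as mahalanobis(); a hedged sketch with invented habitat layers standing in for the GIS data:

```r
# Hedged sketch of the Mahalanobis-distance habitat model; the habitat layers
# and their values are invented stand-ins for the GIS data.
set.seed(1)
use <- data.frame(elev  = rnorm(50, 400, 60),      # values at bear radio locations
                  slope = rnorm(50, 12, 4),
                  road  = rnorm(50, 900, 250))     # distance to roads, m
cells <- data.frame(elev  = rnorm(1000, 420, 120), # values for each 60 x 60 m cell
                    slope = rnorm(1000, 15, 8),
                    road  = rnorm(1000, 700, 400))

mu <- colMeans(use)                       # "ideal" based on the animal locations
S  <- cov(use)
d2 <- mahalanobis(cells, center = mu, cov = S)   # squared distance per cell

head(order(d2))   # cells with the smallest distances = highest use potential
```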
Methods for estimating low-flow statistics for Massachusetts streams
Ries, Kernell G.; Friesz, Paul J.
2000-01-01
Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 to 17.5 percent for the equations to predict the 7-day, 10-year low flow and 50-percent duration flow, respectively. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a data base and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. 
Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
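A minimal sketch of the drainage-area ratio method described above, including the 0.3-1.5 applicability check the report gives; the numbers are illustrative.

```r
# Hedged sketch of the drainage-area ratio method: the statistic at the ungaged
# site is scaled from the index gage by the ratio of drainage areas.
dar_estimate <- function(q_gaged, a_gaged, a_ungaged) {
  ratio <- a_ungaged / a_gaged
  if (ratio < 0.3 || ratio > 1.5)
    warning("ratio outside 0.3-1.5; a regression estimate may be preferable")
  q_gaged * ratio
}

dar_estimate(q_gaged = 12.4, a_gaged = 50, a_ungaged = 38)  # e.g. a 7-day low flow
```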
Statistical analysis of tire treadwear data
DOT National Transportation Integrated Search
1985-03-01
This report describes the results of a statistical analysis of the treadwear : variability of radial tires subjected to the Uniform Tire Quality Grading (UTQG) : standard. Because unexplained variability in the treadwear portion of the standard : cou...
Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary
2003-02-01
Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi-simultaneously applied [1]. The three used ... 2.2. Vibration Diagnostics (VD): In parallel to the NDT measurements, Statistical Energy Analysis (SEA) was applied as a vibration-diagnostic tool ... noises were analysed with a dual-channel real-time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small
Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew
2017-09-01
Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
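One remedy for the multiplicity problem identified here is familywise error adjustment across questionnaire sub-dimensions; a minimal base R sketch with invented p-values:

```r
# Hedged sketch: adjusting for multiplicity across HRQoL sub-dimensions with
# base R's p.adjust; the p-values below are invented for illustration.
p_raw <- c(physical = 0.012, emotional = 0.030, social = 0.041,
           fatigue = 0.004, pain = 0.250)
p.adjust(p_raw, method = "holm")        # Holm step-down controls familywise error
p.adjust(p_raw, method = "bonferroni")  # simpler but more conservative
```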
Sunspot activity and influenza pandemics: a statistical assessment of the purported association.
Towers, S
2017-10-01
Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), which all have purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, in each analysis arbitrary selections or assumptions were also made, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods, rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus in this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences, that are unfortunately not noted often enough in review.
Browne, Richard W; Whitcomb, Brian W
2010-07-01
Problems in the analysis of laboratory data commonly arise in epidemiologic studies in which biomarkers subject to lower detection thresholds are used. Various thresholds exist including limit of detection (LOD), limit of quantification (LOQ), and limit of blank (LOB). Choosing appropriate strategies for dealing with data affected by such limits relies on proper understanding of the nature of the detection limit and its determination. In this paper, we demonstrate experimental and statistical procedures generally used for estimating different detection limits according to standard procedures in the context of analysis of fat-soluble vitamins and micronutrients in human serum. Fat-soluble vitamins and micronutrients were analyzed by high-performance liquid chromatography with diode array detection. A simulated serum matrix blank was repeatedly analyzed for determination of LOB parametrically by using the observed blank distribution as well as nonparametrically by using ranks. The LOD was determined by combining information regarding the LOB with data from repeated analysis of standard reference materials (SRMs), diluted to low levels; from LOB to 2-3 times LOB. The LOQ was determined experimentally by plotting the observed relative standard deviation (RSD) of SRM replicates compared with the concentration, where the LOQ is the concentration at an RSD of 20%. Experimental approaches and example statistical procedures are given for determination of LOB, LOD, and LOQ. These quantities are reported for each measured analyte. For many analyses, there is considerable information available below the LOQ. Epidemiologic studies must understand the nature of these detection limits and how they have been estimated for appropriate treatment of affected data.
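The parametric and rank-based calculations follow standard CLSI EP17-style formulas; a hedged sketch with simulated blank and low-level replicates (the 1.645 multiplier is the normal 95th-percentile factor):

```r
# Hedged sketch of standard LOB/LOD calculations; the measurement vectors are
# simulated for illustration, not the study's chromatography data.
set.seed(1)
blank <- rnorm(60, mean = 0.02, sd = 0.010)  # replicate blank measurements
low   <- rnorm(60, mean = 0.06, sd = 0.012)  # replicates of a low-level SRM

lob_param  <- mean(blank) + 1.645 * sd(blank)   # parametric LOB (95th percentile)
lob_nonpar <- quantile(blank, 0.95, type = 2)   # rank-based alternative
lod        <- lob_param + 1.645 * sd(low)       # LOD from low-level spread

# LOQ: the concentration at which the relative SD of replicates falls to 20%,
# read off a plot of observed RSD versus concentration.
rsd <- function(x) sd(x) / mean(x)
c(LOB = lob_param, LOB_np = unname(lob_nonpar), LOD = lod, RSD_low = rsd(low))
```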
NASA Astrophysics Data System (ADS)
Winn, Kathleen Mary
The Next Generation Science Standards (NGSS) are the newest K-12 science content standards created by a coalition of educators, scientists, and researchers available for adoption by states and schools. Principals are important actors during policy implementation, especially since principals are charged with assuming the role of an instructional leader for their teachers in all subject areas. Science poses a unique challenge to the elementary curricular landscape because, traditionally, elementary teachers report low levels of self-efficacy in the subject. Support in this area therefore becomes important for a successful integration of a new science education agenda. This study analyzed self-reported survey data from public elementary principals (N=667) to address the following three research questions: (1) What type of science backgrounds do elementary principals have? (2) What indicators predict whether elementary principals will engage in instructional leadership behaviors in science? (3) Does self-efficacy mediate the relationship between science background and a capacity for instructional leadership in science? The survey data were analyzed quantitatively. Descriptive statistics address the first research question, and inferential statistics (hierarchical regression analysis and a mediation analysis) answer the second and third research questions. The sample data show that about 21% of elementary principals have a formal science degree and 26% have a degree in a STEM field. Most principals have not had recent experience teaching science, nor were they ever exclusively science teachers. The analyses suggest that demographic, experiential, and self-efficacy variables predict instructional leadership practices in science.
Lám, Judit; Merész, Gergő; Bakacsi, Gyula; Belicza, Éva; Surján, Cecília; Takács, Erika
2016-10-01
The accreditation system for health care providers was developed in Hungary aiming to increase the safety, efficiency, and efficacy of care and to optimise its organisational operation. The aim of this study was to assess changes of organisational culture in pilot institutes of the accreditation program. Seven volunteer pilot institutes were included and assessed using an internationally validated questionnaire. The impact study was performed in two rounds: the first before the introduction of the accreditation program, and the second a year later, when the standards were already known. Data were analysed using descriptive statistics and logistic regression models. Statistically significant (p<0.05) positive changes were detected in hospitals in three dimensions: organisational learning - continuous improvement, communication openness, and teamwork within the unit; in outpatient clinics, positive changes were detected in overall perceptions of patient safety and patient safety within the unit. Organisational culture in the observed institutes needs improvement, but the positive changes already point to safer care. Orv. Hetil., 2016, 157(42), 1667-1673.
NASA Technical Reports Server (NTRS)
Fomenkova, M. N.
1997-01-01
The computer-intensive project consisted of the analysis and synthesis of existing data on composition of comet Halley dust particles. The main objective was to obtain a complete inventory of sulfur containing compounds in the comet Halley dust by building upon the existing classification of organic and inorganic compounds and applying a variety of statistical techniques for cluster and cross-correlational analyses. A student hired for this project wrote and tested the software to perform cluster analysis. The following tasks were carried out: (1) selecting the data from existing database for the proposed project; (2) finding access to a standard library of statistical routines for cluster analysis; (3) reformatting the data as necessary for input into the library routines; (4) performing cluster analysis and constructing hierarchical cluster trees using three methods to define the proximity of clusters; (5) presenting the output results in different formats to facilitate the interpretation of the obtained cluster trees; (6) selecting groups of data points common for all three trees as stable clusters. We have also considered the chemistry of sulfur in inorganic compounds.
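Task (4) can be sketched in base R with hclust under three linkage ("proximity") definitions, and task (6) by intersecting cluster memberships across the trees; the composition data below are simulated for illustration.

```r
# Hedged sketch of the clustering step: hierarchical trees under three linkage
# definitions, then groups stable across all three. Data are simulated.
set.seed(1)
comp <- matrix(rnorm(40 * 5), nrow = 40)   # 40 particles x 5 composition variables
d    <- dist(scale(comp))                  # standardize, then Euclidean distance

trees <- lapply(c("single", "complete", "average"),
                function(m) hclust(d, method = m))
cuts  <- sapply(trees, cutree, k = 4)      # 4 clusters from each tree

# A pair of particles is "stable" if it co-clusters under all three linkages.
same <- Reduce(`&`, lapply(1:3, function(j) outer(cuts[, j], cuts[, j], `==`)))
table(rowSums(same))                       # co-membership counts per particle
```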
Statistical methods and computing for big data.
Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun
2016-01-01
Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.
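A minimal sketch of the divide-and-conquer class in base R: fit a logistic regression on each chunk, then combine the chunk estimates by inverse-variance weighting. The data are simulated; this is one simple combining rule, not a specific package's implementation.

```r
# Hedged sketch of divide-and-conquer logistic regression on simulated data.
set.seed(1)
n <- 30000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + 0.5 * x))
chunks <- split(data.frame(x, y), rep(1:10, length.out = n))  # 10 chunks

fits <- lapply(chunks, function(d) glm(y ~ x, family = binomial, data = d))
est  <- sapply(fits, function(f) coef(f)["x"])
v    <- sapply(fits, function(f) vcov(f)["x", "x"])

w <- 1 / v
c(combined = sum(w * est) / sum(w),   # inverse-variance weighted estimate
  se = sqrt(1 / sum(w)))              # compare against a full-data glm fit
```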
Effect of tulle on the mechanical properties of a maxillofacial silicone elastomer.
Gunay, Yumushan; Kurtoglu, Cem; Atay, Arzu; Karayazgan, Banu; Gurbuz, Cihan Cem
2008-11-01
The purpose of this research was to investigate if physical properties could be improved by incorporating a tulle reinforcement material into a maxillofacial silicone elastomer. A-2186 silicone elastomer was used in this study. The study group consisted of 20 elastomer specimens incorporated with tulle and fabricated in dumbbell-shaped silicone patterns using ASTM D412 and D624 standards. The control group consisted of 20 elastomer specimens fabricated without tulle. Tensile strength, ultimate elongation, and tear strength of all specimens were measured and analyzed. Statistical analyses were performed using Mann-Whitney U test with a statistical significance at 95% confidence level. It was found that the tensile and tear strengths of tulle-incorporated maxillofacial silicone elastomer were higher than those without tulle incorporation (p < 0.05). Therefore, findings of this study suggested that tulle successfully reinforced a maxillofacial silicone elastomer by providing it with better mechanical properties and augmented strength--especially for the delicate edges of maxillofacial prostheses.
NASA Astrophysics Data System (ADS)
Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal
2015-09-01
Sediment samples were collected from the shallow marine environment off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea, which consisted of Quaternary bottom sediments. Sixty-five samples were analysed for their grain-size distribution and statistical relationships. Basic statistical parameters such as the mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environment of the sediments and to assess the uniformity of the depositional environment, whether beach or river. The sediments of all areas varied in sorting from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which control the sediment distribution pattern in various ways.
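The moment-based versions of these four measures can be computed in a few lines of base R (phi units assumed; the sample is invented, and Folk-and-Ward graphic measures would give somewhat different values):

```r
# Hedged sketch: moment-based grain-size statistics in base R (phi units).
grain_stats <- function(phi) {
  m  <- mean(phi)
  s  <- sd(phi)                          # "sorting" in sedimentological usage
  sk <- mean((phi - m)^3) / s^3          # skewness
  ku <- mean((phi - m)^4) / s^4          # kurtosis
  c(mean = m, sorting = s, skewness = sk, kurtosis = ku)
}

grain_stats(c(1.2, 1.5, 1.7, 2.0, 2.1, 2.3, 2.6, 3.4))  # invented phi values
```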
A Vignette (User's Guide) for “An R Package for Statistical ...
StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, the Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, the Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed-effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated-measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available on CRAN.
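As an illustration of one listed analysis, a Dunnett test comparing each dose group with control can be run with the multcomp package; this is a generic stand-in, not the StatCharrms interface itself, and the data are invented.

```r
# Hedged sketch of a Dunnett test (each dose vs. control) via 'multcomp',
# not the StatCharrms front-end; the dose-response data are simulated.
library(multcomp)

set.seed(1)
d <- data.frame(dose = factor(rep(c("control", "low", "mid", "high"), each = 8),
                              levels = c("control", "low", "mid", "high")),
                resp = c(rnorm(8, 10), rnorm(8, 9.6), rnorm(8, 9.1), rnorm(8, 8.2)))

fit <- aov(resp ~ dose, data = d)
summary(glht(fit, linfct = mcp(dose = "Dunnett")))  # each dose vs. control
```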
Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.
2016-01-01
Background Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results Measurements were published between 1940 and 2010 and represented 27 two-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240
International employment in clinical practice: influencing factors for the dental hygienist.
Abbott, A; Barrow, S-Y; Lopresti, F; Hittelman, E
2005-02-01
To assess the demographics, job characteristics, geographical regions, resources and commitment that influence dental hygienists seeking international clinical practice employment opportunities. Questionnaires were mailed to a convenience sample of members of the Dental Hygienists' Association of the City of New York. Statistical analyses were conducted, and frequency distributions and relationships between variables were calculated. Seventy-two percent of respondents reported that they are or may be interested in working overseas. Italy and Spain (67%) were the regions of most interest. Salary (65%) was cited as the most influential factor in selection, whereas lack of compliance with standards equivalent to those of the Occupational Safety and Health Administration (74%) was the most frequently perceived barrier. Fluency in multiple languages was significantly associated (P = 0.003) with interest in overseas employment. Policy makers, employers and educators need to be aware of these findings should recruitment be a possibility to render urgently needed oral hygiene care in regions with a perceived shortage of dental hygienists.
He, Yiping; He, Tongqiang; Wang, Yanxia; Xu, Zhao; Xu, Yehong; Wu, Yiqing; Ji, Jing; Mi, Yang
2014-11-01
To explore the effect of different diagnostic criteria for subclinical hypothyroidism, based on thyroid stimulating hormone (TSH) and positive thyroid peroxidase antibodies (TPO-Ab), on pregnancy outcomes. A total of 3,244 pregnant women who received antenatal care and delivered in the Child and Maternity Health Hospital of Shaanxi Province from August 2011 to February 2013 were recruited prospectively. According to the standard of the American Thyroid Association (ATA), pregnant women with normal serum free thyroxine (FT4) and serum TSH level > 2.50 mU/L were diagnosed with subclinical hypothyroidism in pregnancy (foreign standard group). According to the Guideline of Diagnosis and Therapy of Prenatal and Postpartum Thyroid Disease issued by the Chinese Society of Endocrinology and the Chinese Society of Perinatal Medicine in 2012, pregnant women with serum TSH level > 5.76 mU/L and normal FT4 were diagnosed with subclinical hypothyroidism in pregnancy (national standard group). Pregnant women with subclinical hypothyroidism whose serum TSH levels were between 2.50 and 5.76 mU/L were referred to as the study observed group; pregnant women with serum TSH level < 2.50 mU/L and negative TPO-Ab were referred to as the control group. TPO-Ab positivity and pregnancy outcomes were analyzed. (1) There were 635 cases in the foreign standard group, an incidence of 19.57% (635/3,244), and 70 cases in the national standard group, an incidence of 2.16% (70/3,244); the difference between the two groups was statistically significant (P < 0.01). There were 565 cases in the study observed group, an incidence of 17.42% (565/3,244); this differed significantly from the national standard group (P < 0.01) but not from the foreign standard group (P > 0.05). (2) Among the 3,244 cases, 402 had positive TPO-Ab. Of these, 318 were in the foreign standard group, an incidence of subclinical hypothyroidism of 79.10% (318/402); among TPO-Ab-negative women, 317 were in the foreign standard group, an incidence of 11.15% (317/2,842). The difference between them was statistically significant (P < 0.01). In the national standard group, 46 cases had positive TPO-Ab, an incidence of 11.44% (46/402), and 24 cases were negative, an incidence of 0.84% (24/2,842); the difference was statistically significant (P < 0.01). In the study observed group, 272 cases were TPO-Ab positive, an incidence of 67.66% (272/402), and 293 were negative, an incidence of 10.31% (293/2,842); the difference was statistically significant (P < 0.01). (3) The incidences of miscarriage, premature delivery, gestational hypertension disease and gestational diabetes mellitus (GDM) in the foreign standard group each differed significantly from the control group (P < 0.05), while the incidences of placental abruption and fetal distress did not (P > 0.05). Likewise, the incidences of miscarriage, premature delivery, gestational hypertension disease and GDM in the national standard group each differed significantly from the control group (P < 0.05), while the incidences of placental abruption and fetal distress did not (P > 0.05).
In the study observed group, the incidences of miscarriage, gestational hypertension disease and GDM each differed significantly from the control group (P < 0.05), while the incidences of preterm labor, placental abruption and fetal distress did not (P > 0.05). (4) The incidences of miscarriage, premature delivery, gestational hypertension disease, GDM, placental abruption and fetal distress in the TPO-Ab-positive cases of the national standard group showed an increasing trend compared with TPO-Ab-negative cases, without statistical significance (P > 0.05). In the study observed group, the incidences of gestational hypertension disease and GDM differed significantly between TPO-Ab-positive and TPO-Ab-negative cases (P < 0.05), while the incidences of miscarriage, premature birth, placental abruption and fetal distress did not (P > 0.05). In the foreign standard group, the incidences of gestational hypertension disease and GDM likewise differed significantly between TPO-Ab-positive and TPO-Ab-negative cases (P < 0.05). (1) The incidence of subclinical hypothyroidism is rather high during early pregnancy and can lead to adverse pregnancy outcomes. (2) A positive TPO-Ab result has important predictive value for thyroid dysfunction and GDM. (3) The ATA diagnostic standard (serum TSH level > 2.50 mU/L) is the safer basis for antenatal care; the national standard (serum TSH level > 5.76 mU/L) is not conducive to pregnancy management.
Statistics and the Question of Standards
Stigler, Stephen M.
1996-01-01
This is a written version of a memorial lecture given in honor of Churchill Eisenhart at the National Institute of Standards and Technology on May 5, 1995. The relationship and the interplay between statistics and standards over the past centuries are described. Historical examples are presented to illustrate mutual dependency and development in the two fields. PMID:27805077
Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.
Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M
2017-04-01
Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aimed to demonstrate the feasibility of a disposable device allowing visually directed TT placement, compared to the standard of care, in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TTs were placed, 15 using the TTVT technique and 15 using the standard-of-care open technique. All of the TTs placed using TTVT were without complication and in optimal position. Conversely, 27% of TTs placed using the standard-of-care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tonin, Fernanda S; Wiecek, Elyssa; Torres-Robles, Andrea; Pontarolo, Roberto; Benrimoj, Shalom Charlie I; Fernandez-Llimos, Fernando; Garcia-Cardenas, Victoria
2018-05-19
Poor medication adherence is associated with adverse health outcomes and higher costs of care. However, inconsistencies in the assessment of adherence are found in the literature. To evaluate the effect of different measures of adherence on the comparative effectiveness of complex interventions to enhance patients' adherence to prescribed medications, a systematic review with network meta-analysis was performed. Electronic searches for relevant pairwise meta-analyses including trials of interventions aimed at improving medication adherence were performed in PubMed. Data extraction was conducted for eligible trials evaluating short-period adherence follow-up (up to 3 months) using any measure of adherence: self-report, pill count, or MEMS (medication event monitoring system). To standardize the results obtained with these different measures, an overall composite measure and an objective composite measure were also calculated. Network meta-analyses were built for each measure of adherence. Rank order and surface under the cumulative ranking curve (SUCRA) analyses were performed. Ninety-one trials were included in the network meta-analyses. The five network meta-analyses demonstrated robustness and reliability. Results were similar across all measures of adherence and both composite measures. For both composite measures, interventions comprising economic + technical components were the best option (90% probability in the SUCRA analysis), with statistical superiority over almost all other interventions and over standard care (odds ratio with 95% credibility interval ranging from 0.09 to 0.25 [0.02, 0.98]). Network meta-analysis proved reliable for comparing different measures of adherence of complex interventions over short follow-up periods. Analyses with longer follow-up periods are needed to confirm these results. Different measures of adherence produced similar results. The composite measures proved reliable alternatives for establishing a broader and more detailed picture of adherence. Copyright © 2018 Elsevier Inc. All rights reserved.
Quality Space and Launch Requirements Addendum to AS9100C
2015-03-05
[Extraction fragments from the addendum's table of contents, acronym list, and body text: Section 8.9.1, Statistical Process Control (SPC); 8.9.1.1, Out of Control...; acronyms: SME, Subject Matter Expert; SOW, Statement of Work; SPC, Statistical Process Control; SPO, System Program Office; SRP, Standard Repair...; "...individual data exceeding the control limits. Control limits are developed using standard statistical methods or other approved techniques and are based on..."]
[The evaluation of costs: standards of medical care and clinical statistic groups].
Semenov, V Iu; Samorodskaia, I V
2014-01-01
The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical economic standards of medical care and clinical statistical groups. The technique of evaluating costs on the basis of clinical statistical groups was developed almost fifty years ago and is widely applied in a number of countries. Nowadays, in Russia payment per completed case of treatment on the basis of medical economic standards is the main mode of payment for hospital medical care. It is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical statistical groups, are calculated on the basis of standards of provision of medical care approved by the Minzdrav of Russia; information derived from generalizing the treatment of real patients is not used.
The space of ultrametric phylogenetic trees.
Gavryushkin, Alex; Drummond, Alexei J
2016-08-21
The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce, and that the choice between them requires additional properties to be considered. In particular, the summary tree minimising the squared distance to the trees in the sample may differ between parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Martin, Lisa; Watanabe, Sharon; Fainsinger, Robin; Lau, Francis; Ghosh, Sunita; Quan, Hue; Atkins, Marlis; Fassbender, Konrad; Downing, G Michael; Baracos, Vickie
2010-10-01
To determine whether elements of a standard nutritional screening assessment are independently prognostic of survival in patients with advanced cancer. A prospective nested cohort of patients with metastatic cancer was accrued from different units of a Regional Palliative Care Program. Patients completed a nutritional screen on admission. Data included age, sex, cancer site, height, weight history, dietary intake, 13 nutrition impact symptoms, and patient- and physician-reported performance status (PS). Univariate and multivariate survival analyses were conducted. Concordance statistics (c-statistics) were used to test the predictive accuracy of models based on training and validation sets; a c-statistic of 0.5 indicates that the model predicts the outcome no better than chance, while perfect prediction has a c-statistic of 1.0. A training set of patients in palliative home care (n = 1,164) was used to identify prognostic variables. Primary disease site, PS, short-term weight change (either gain or loss), dietary intake, and dysphagia predicted survival in multivariate analysis (P < .05). A model including only disease site and PS showed high c-statistics between predicted and observed survival in the training set (0.90) and the validation set (0.88; n = 603). The addition of weight change, dietary intake, and dysphagia did not further improve the c-statistic of the model. The c-statistic was also not altered by substituting physician-rated palliative PS for patient-reported PS. We demonstrate a high probability of concordance between predicted and observed survival for patients in distinct palliative care settings (home care, tertiary inpatient, ambulatory outpatient) based on patient-reported information.
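As a hedged aside on the c-statistic used here: in R, the concordance of a fitted survival model can be obtained from the survival package. The sketch below uses the built-in lung data set as a stand-in, not the palliative-care cohort.

```r
library(survival)
# Concordance (c-statistic) for a Cox model: 0.5 = chance, 1.0 = perfect.
fit <- coxph(Surv(time, status) ~ ph.ecog + age, data = lung)
concordance(fit)  # concordance estimate with its standard error
```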
A Framework for the Optimization of Discrete-Event Simulation Models
NASA Technical Reports Server (NTRS)
Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.
1996-01-01
With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
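The chance-constraint idea can be illustrated with a toy R sketch (invented numbers, not the paper's launch-vehicle model): a constraint of the form P(response <= limit) >= 0.95 is checked from replicated simulation runs.

```r
# Toy chance-constraint check from n terminating-simulation replications.
set.seed(42)
n_rep <- 200
cycle_time <- rgamma(n_rep, shape = 20, rate = 0.8)  # stand-in for model output

p_hat <- mean(cycle_time <= 30)             # estimated P(response <= 30)
se    <- sqrt(p_hat * (1 - p_hat) / n_rep)  # standard error of p_hat
ci    <- p_hat + c(-1.96, 1.96) * se        # approximate 95% CI

# Conservative decision rule: require the lower CI bound to clear 0.95.
list(p_hat = p_hat, ci = ci, constraint_met = ci[1] >= 0.95)
```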
Parton distributions in the LHC era
NASA Astrophysics Data System (ADS)
Del Debbio, Luigi
2018-03-01
Analyses of LHC (and other!) experiments require robust and statistically accurate determinations of the structure of the proton, encoded in the parton distribution functions (PDFs). The standard description of hadronic processes relies on factorization theorems, which allow a separation of process-dependent short-distance physics from the universal long-distance structure of the proton. Traditionally the PDFs are obtained from fits to experimental data. However, understanding the long-distance properties of hadrons is a nonperturbative problem, and lattice QCD can play a role in providing useful results from first principles. In this talk we compare the different approaches used to determine PDFs, and try to assess the impact of existing, and future, lattice calculations.
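For readers unfamiliar with the factorization theorems mentioned here, the schematic textbook form (not specific to this talk) is:

```latex
\sigma_{pp \to X} = \sum_{a,b} \int_0^1 \mathrm{d}x_1\, \mathrm{d}x_2\,
  f_a(x_1, \mu_F^2)\, f_b(x_2, \mu_F^2)\,
  \hat{\sigma}_{ab \to X}(x_1 x_2 s; \mu_F^2, \mu_R^2)
```

where the f_a are the PDFs, the hatted sigma is the perturbatively calculable partonic cross section, and mu_F, mu_R are the factorization and renormalization scales.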
Percolation Analysis of a Wiener Reconstruction of the IRAS 1.2 Jy Redshift Catalog
NASA Astrophysics Data System (ADS)
Yess, Capp; Shandarin, Sergei F.; Fisher, Karl B.
1997-01-01
We present percolation analyses of Wiener reconstructions of the IRAS 1.2 Jy redshift survey. There are 10 reconstructions of galaxy density fields in real space spanning the range β = 0.1-1.0, where β = Ω^0.6/b, Ω is the present dimensionless density, and b is the bias factor. Our method uses the growth of the largest cluster statistic to characterize the topology of a density field, with Gaussian-randomized versions of the reconstructions used as standards for comparison. For the reconstruction volume of radius R ~ 100 h^-1 Mpc, percolation analysis reveals a slight "meatball" topology for the real-space galaxy distribution of the IRAS survey.
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2013 CFR
2013-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2014 CFR
2014-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2010 CFR
2010-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2011 CFR
2011-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2012 CFR
2012-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia
2010-01-01
Background High quality clinical research requires not only advanced professional knowledge but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, randomized clinical trials remained in the low single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 and 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement in study design. Retrospective clinical studies are the most commonly used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824
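The reported chi-square values can be reproduced from the published counts with a standard two-proportion test; for example, the study-design comparison in R:

```r
# Two-proportion chi-square test using the reported design-defect counts:
# 680/1,335 (50.9%) in 1998 vs 669/1,578 (42.4%) in 2008.
# Without continuity correction this matches the reported chi-square of 21.22.
prop.test(x = c(680, 669), n = c(1335, 1578), correct = FALSE)
```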
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exist to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for both the source environment and the payload environment.
Diabetes mellitus in children and adolescents with genetic syndromes.
Schmidt, F; Kapellen, T M; Wiegand, S; Herbst, A; Wolf, J; Fröhlich-Reiterer, E E; Rabl, W; Rohrer, T R; Holl, R W
2012-11-01
Several genetic syndromes are associated with diabetes mellitus (DM). This study aimed to analyse data from the DPV database with regard to frequency, treatment strategies and long-term complications in paediatric DM patients with genetic syndromes, including Turner syndrome (TS), Prader-Willi syndrome (PWS), Friedreich ataxia (FA), Alström syndrome (AS), Klinefelter syndrome (KS), Bardet-Biedl syndrome (BBS), Berardinelli-Seip syndrome (BSS) and Down syndrome (DS). Longitudinal data for 43,521 patients with DM onset at age < 20 years were collected from 309 treatment centres in Germany and Austria using the DPV software. Data included anthropometric parameters, type of diabetes, mean age, age at diabetes onset, daily insulin dose, HbA1c, micro- and macroalbuminuria, retinopathy and dyslipidaemia. Descriptive statistics and standard statistical tests were used for data analysis. In total, 205 DM patients had one of the following syndromes: DS (141 patients), TS (24), PWS (23), FA (5), AS (5), KS (4), BBS (2) and BSS (1). Diabetes-specific antibodies were positive in the majority of patients with DS, TS and FA. Despite the well-known association between DM and certain syndromic disorders, the number of affected patients in the German and Austrian paediatric diabetic population is very low. Nevertheless, physicians should be aware of syndromic forms of diabetes. Joint multicentre analyses are needed to draw relevant conclusions. © J. A. Barth Verlag in Georg Thieme Verlag KG Stuttgart · New York.
Wöhl, C; Siebert, H; Blättner, B
2017-08-01
Among residents of nursing homes, physical activity might be beneficial in maintaining health-related quality of life, because impairment is caused in particular by functional decline. The aim is to evaluate the effectiveness of universal preventive interventions directed at increasing physical activity on activities of daily living in nursing home residents. Relevant studies were identified through database searches in MEDLINE, the Cochrane Library, EMBASE, CINAHL, PsycINFO and PEDro. Two review authors independently selected articles, assessed the risk of bias and extracted data. Results were combined in random-effects meta-analyses. Across the 14 included primary studies, nursing home residents participating in physical activities showed significantly greater physical functioning than controls (standardized mean difference [SMD] = 0.48, 95% confidence interval [95% CI] 0.26-0.71, p < 0.0001). Subgroup analyses suggest that nursing home residents with severe physical and cognitive impairment might particularly benefit from participation in physical activities. Results after non-training periods substantiate the necessity of sustained implementation. Due to the high risk of bias in the included studies, the results must be interpreted with caution. Physical activity for nursing home residents can be effective. Considering the low-quality evidence, high-quality studies are essential in order to verify the statistical results.
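A random-effects pooling of standardized mean differences of the kind reported here can be sketched with the metafor package in R; the effect sizes below are invented placeholders, not the review's data.

```r
library(metafor)
# Random-effects meta-analysis of standardized mean differences (SMDs).
yi <- c(0.35, 0.62, 0.18, 0.75, 0.44)       # per-study SMDs (placeholders)
vi <- c(0.040, 0.055, 0.030, 0.080, 0.045)  # their sampling variances
res <- rma(yi = yi, vi = vi, method = "REML")  # REML random-effects model
summary(res)  # pooled SMD, 95% CI, and heterogeneity statistics
```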
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erdmann, Christine A.; Apte, Michael G.
Using the US EPA 100-office-building BASE Study dataset, they conducted multivariate logistic regression analyses to quantify the relationship between indoor CO2 concentrations (dCO2) and mucous membrane (MM) and lower respiratory system (LResp) building-related symptoms, adjusting for age, sex, smoking status, presence of carpet in workspace, thermal exposure, relative humidity, and a marker for entrained automobile exhaust. In addition, they tested the hypothesis that certain environmentally mediated health conditions (e.g., allergies and asthma) confer increased susceptibility to building-related symptoms within office buildings. Adjusted odds ratios (ORs) for statistically significant, dose-dependent associations (p < 0.05) for dry eyes, sore throat, nose/sinus congestion, and wheeze symptoms with 100 ppm increases in dCO2 ranged from 1.1 to 1.2. These results suggest that increases in the ventilation rates per person among typical office buildings will, on average, reduce the prevalence of several building-related symptoms by up to 70%, even when these buildings meet the existing ASHRAE ventilation standards for office buildings. Building occupants with certain environmentally mediated health conditions are more likely to experience building-related symptoms than those without these conditions (statistically significant ORs ranged from 2 to 11).
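The adjusted odds ratios above come from standard multivariate logistic regression; a minimal R sketch of a model of this form (hypothetical data frame and variable names, not the BASE analysis code):

```r
# symptom: 0/1 indicator; dco2_100: dCO2 in units of 100 ppm; df: hypothetical
# data frame holding the covariates listed in the abstract.
fit <- glm(symptom ~ dco2_100 + age + sex + smoker + carpet + rh,
           family = binomial, data = df)
exp(cbind(OR = coef(fit), confint.default(fit)))  # ORs with Wald 95% CIs
```

The exponentiated coefficient on dco2_100 is then directly comparable to the 1.1-1.2 per-100-ppm ORs reported above.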
Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl
2015-05-01
To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image quality, and polyp detection were assessed. Objective image noise analysis demonstrated significant noise reduction with the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image quality was superior for LD MBIR on all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7), STD (dose-length product, 483.6). LD MBIR CTC objectively showed improved image noise with the parameters used in our study. Subjectively, image quality was maintained. Polyp detection showed no significant difference but, because of the small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Koopmans, Bastijn; Smit, August B; Verhage, Matthijs; Loos, Maarten
2017-04-04
Systematic, standardized and in-depth phenotyping and data analysis of rodent behaviour empowers gene-function studies, drug testing and therapy design. However, no data repositories are currently available for standardized quality control, data analysis and mining at the resolution of individual mice. Here, we present AHCODA-DB, a public data repository with standardized quality control and exclusion criteria aimed at enhancing data robustness, equipped with web-based mining tools for the analysis of individually and group-wise collected mouse phenotypic data. AHCODA-DB allows monitoring of in vivo compound effects collected from conventional behavioural tests and from automated home-cage experiments assessing spontaneous behaviour, anxiety and cognition without human interference. AHCODA-DB includes such data from mutant mice (transgenic, knock-out, knock-in), (recombinant) inbred strains, and compound effects in wildtype mice and disease models. AHCODA-DB provides real-time statistical analyses at single-mouse resolution and a versatile suite of data presentation tools. On March 9th, 2017, AHCODA-DB contained 650k data points on 2,419 parameters from 1,563 mice. AHCODA-DB provides users with tools to systematically explore mouse behavioural data, with both positive and negative outcomes, published and unpublished, across time and experiments, at single-mouse resolution. The standardized (automated) experimental settings and the large current dataset (1,563 mice) in AHCODA-DB provide a unique framework for the interpretation of behavioural data and drug effects. The use of common ontologies allows data export to other databases such as the Mouse Phenome Database. Unbiased presentation of positive and negative data obtained under these highly standardized screening conditions increases the cost efficiency of publicly funded mouse screening projects and helps to reach consensus conclusions on drug responses and mouse behavioural phenotypes. The website is publicly accessible through https://public.sylics.com and can be viewed in every recent version of all commonly used browsers.
Use of Statistical Analyses in the Ophthalmic Literature
Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.
2014-01-01
Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977
ERIC Educational Resources Information Center
Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.
2010-01-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…
Global atmospheric circulation statistics, 1000-1 mb
NASA Technical Reports Server (NTRS)
Randel, William J.
1992-01-01
The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.
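For reference, the standard relations used to derive such fields from the geopotential Φ are (textbook geostrophic and hydrostatic forms; the atlas's balance-wind calculation may include additional curvature terms):

```latex
u_g = -\frac{1}{f}\frac{\partial \Phi}{\partial y}, \qquad
v_g = \frac{1}{f}\frac{\partial \Phi}{\partial x}, \qquad
T = -\frac{p}{R}\frac{\partial \Phi}{\partial p}
```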
Sandy-Hodgetts, Kylie; Watts, Robin
2015-01-01
The treatment of post-surgical wound complications, such as surgical site infections and surgical wound dehiscence, generates a significant burden for patients and healthcare systems. The effectiveness of negative pressure wound therapy has been under investigation, but to date no systematic review has been published on its effectiveness in the prevention of surgical wound complications. To identify the effectiveness of negative pressure wound therapy in the prevention of post-surgical wound complications in adults with a closed surgical incision, compared to standard surgical dressings. Participants were male and female adults who had negative pressure wound therapy applied to their surgical incision following a procedure in one of the following areas: trauma, cardiothoracic, orthopedic, abdominal, or vascular surgery. The intervention of interest was the use of negative pressure wound therapy directly over an incision following a surgical procedure; the comparator was standard surgical dressings. Both experimental and epidemiological study designs, including randomized controlled trials, pseudo-randomized trials, quasi-experimental studies, before and after studies, prospective and retrospective cohort studies, case control studies, and analytical cross-sectional studies, were sought. The primary outcome was the occurrence of post-surgical wound infection or dehiscence as measured by the following: surgical site infections - superficial and deep; surgical wound dehiscence; wound pain; wound seroma; wound hematoma. Published and unpublished studies in English from 1990 to 2013 were identified by searching a variety of electronic databases. Reference lists of all papers selected for retrieval were then searched for additional studies. Papers selected for retrieval were assessed by two independent reviewers for methodological validity prior to inclusion in the review, using standardized critical appraisal instruments from the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. Data were extracted from the included papers using a standardized data extraction tool from the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. In addition to study results, the data extracted included details of the study population, setting, intervention and authors' conclusions. Where appropriate, data were pooled using Comprehensive Meta-Analysis software. Meta-analyses were performed for three outcomes. In cases of heterogeneity between studies, a narrative summary of results was undertaken. Eight studies were included in the review. Meta-analysis revealed a statistically significant difference in favor of negative pressure wound therapy, as compared to standard surgical dressings, for surgical site infections. Conflicting results were found for wound dehiscence and seroma. Given the small number of studies, mostly retrospective comparative cohorts in design, no definitive conclusions can be reached as to the effectiveness of negative pressure wound therapy in the prevention of surgical wound complications. However, there was a demonstrated association between the use of negative pressure wound therapy and a reduction in surgical site infection. Negative pressure wound therapy, in preference to standard postoperative dressings (for example, dry gauze), may be considered for closed surgical incisions in adults assessed as high-risk for surgical site infections.
The focus of further research on this topic should be level one studies (randomized controlled trials) on patients identified as 'at risk' in the preoperative period.
Analysis of Trace Siderophile Elements at High Spatial Resolution Using Laser Ablation ICP-MS
NASA Astrophysics Data System (ADS)
Campbell, A. J.; Humayun, M.
2006-05-01
Laser ablation inductively coupled plasma mass spectrometry is an increasingly important method of performing spatially resolved trace element analyses. Over the last several years we have applied this technique to measure siderophile element distributions at the ppm level in a variety of natural and synthetic samples, especially metallic phases in meteorites and experimental run products intended for trace element partitioning studies. These samples frequently require trace element analyses to be made at a finer spatial resolution (25 microns or better) than is typically attained using LA-ICP-MS. In this presentation we review analytical protocols that were developed to optimize the LA-ICP-MS measurements for high spatial resolution. Particular attention is paid to the trade-offs involving sensitivity, ablation pit depth and diameter, background levels, and the number of elements measured. To maximize signal/background ratios and avoid difficulties associated with ablating to depths greater than the ablation pit diameter, measurement involved integration of rapidly varying, transient but well-behaved signals. The abundances of platinum group elements and other siderophile elements in ferrous metals were calibrated against well-characterized standards, including iron meteorites and NIST certified steels. The calibrations can be set against the known abundance of an independently determined element, but normalization to 100 percent can also be employed, and was more useful in many circumstances. Evaluation of uncertainties incorporated counting statistics as well as a measure of instrumental uncertainty, determined by replicate analyses of the standards. These methods have led to a number of insights into the formation and chemical processing of metal in the early solar system.
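The normalization-to-100-percent calibration mentioned above can be illustrated with a small R sketch (invented count rates and sensitivity factors, not the authors' data-reduction code):

```r
# Internal normalization for LA-ICP-MS: convert background-corrected count
# rates to concentrations constrained to sum to 100 wt% of measured elements.
counts <- c(Fe = 5.2e6, Ni = 4.8e5, Pt = 1.3e3, Ir = 9.0e2)  # cps (invented)
sens   <- c(Fe = 1.00, Ni = 1.10, Pt = 0.85, Ir = 0.80)  # relative sensitivity
                                                         # from standards
conc <- (counts / sens) / sum(counts / sens) * 100       # wt%, sums to 100

# Counting-statistics floor on relative uncertainty: 1/sqrt(total counts).
t_int   <- 10                            # integration time in s (invented)
rel_unc <- 100 / sqrt(counts * t_int)    # percent relative uncertainty
round(conc, 4)
round(rel_unc, 4)
```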
The economic value of life: linking theory to practice.
Landefeld, J S; Seskin, E P
1982-01-01
Human capital estimates of the economic value of life have been routinely used in the past to perform cost-benefit analyses of health programs. Recently, however, serious questions have been raised concerning the conceptual basis for valuing human life by applying these estimates. Most economists writing on these issues tend to agree that a more conceptually correct method to value risks to human life in cost-benefit analyses would be based on individuals' "willingness to pay" for small changes in their probability of survival. Attempts to implement the willingness-to-pay approach using survey responses or revealed-preference estimates have produced a confusing array of values fraught with statistical problems and measurement difficulties. As a result, economists have searched for a link between willingness to pay and standard human capital estimates and have found that for most individuals a lower bound for valuing risks to life can be based on their willingness to pay to avoid the expected economic losses associated with death. However, while these studies provide support for using individuals' private valuation of forgone income in valuing risks to life, it is also clear that standard human capital estimates cannot be used for this purpose without reformulation. After reviewing the major approaches to valuing risks to life, this paper concludes that estimates based on the human capital approach, reformulated using a willingness-to-pay criterion, produce the only clear, consistent, and objective values for use in cost-benefit analyses of policies affecting risks to life. The paper presents the first empirical estimates of such adjusted willingness-to-pay/human capital values. PMID:6803602
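The "expected economic losses associated with death" lower bound can be made concrete with a toy present-value calculation (all figures invented for illustration):

```r
# Toy human-capital-style lower bound: present value of expected forgone
# earnings, weighted by survival probability and discounted to today.
years    <- 1:40
earnings <- rep(30000, 40)  # invented annual earnings over 40 future years
surv     <- 0.99^years      # invented probability of surviving to each year
r        <- 0.03            # real discount rate
sum(earnings * surv / (1 + r)^years)  # PV of expected earnings lost
```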
Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)
Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur
2016-01-01
We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497
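A one-parameter Rasch analysis of the kind described can be sketched in R with the ltm package (one reasonable choice; the authors' software is not specified). The response matrix below is simulated placeholder data, not SRBCI responses.

```r
library(ltm)
# Fit a one-parameter (Rasch) model to a binary item-response matrix:
# rows = students, columns = the 12 inventory items.
set.seed(7)
resp <- matrix(rbinom(200 * 12, 1, 0.6), nrow = 200, ncol = 12)
fit <- rasch(resp)  # estimates one difficulty parameter per item
summary(fit)        # item difficulties and model fit
```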
Secondary Analysis of National Longitudinal Transition Study 2 Data
ERIC Educational Resources Information Center
Hicks, Tyler A.; Knollman, Greg A.
2015-01-01
This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…
A Nonparametric Geostatistical Method For Estimating Species Importance
Andrew J. Lister; Rachel Riemann; Michael Hoppus
2001-01-01
Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non normal distributions violate the assumptions of analyses in which test statistics are...
ERIC Educational Resources Information Center
Ellis, Barbara G.; Dick, Steven J.
1996-01-01
Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)
The clinical effectiveness of healing touch.
Wilkinson, Dawn S; Knox, Pamela L; Chatman, James E; Johnson, Terrance L; Barbour, Nilufer; Myles, Yvonne; Reel, Antonio
2002-02-01
(1) To determine the clinical effectiveness of Healing Touch (HT) on variables assumed to be related to health enhancement; (2) to determine whether practitioner training level moderates treatment effectiveness. Mixed-method repeated measures design with quasi-experimental and naturalistic approaches, paired with nomothetic and idiographic analyses. Practitioners' offices or clients' homes. Twenty-two (22) clients who had never experienced HT. Three treatment conditions: no treatment (NT), HT only (standard HT care), and HT+ (standard HT care plus music plus guided imagery). Secretory immunoglobulin A (sIgA) concentrations in saliva, self-reports of stress levels, client perceptions of health enhancement, and qualitative questionnaires about individual effects. Clients of practitioners with more training experienced statistically significant positive sIgA change over the HT treatment series, while clients of practitioners with less experience did not. Clients reported a statistically significant reduction in stress level after both HT conditions. Perceived enhancement of health was reported by 13 of 22 clients (59%). Themes of relaxation, connection, and enhanced awareness were identified in the qualitative analysis of the HT experience. Pain relief was reported by 6 of 11 clients (55%) experiencing pain. The data support the clinical effectiveness of HT in health enhancement, specifically for raising sIgA concentrations, lowering stress perceptions and relieving pain. The evidence indicates that positive responses were not exclusively a result of placebo, that is, of client beliefs, expectations, and behaviors regarding HT.
NASA Astrophysics Data System (ADS)
Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai
2017-04-01
Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be obscured by the high standard deviation typically found among replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and to identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations, and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration can outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate into high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
2013-01-01
Background Malnutrition is one of the principal causes of child mortality in developing countries, including Bangladesh. To our knowledge, most of the available studies that address malnutrition among under-five children consider categorical (dichotomous/polychotomous) outcome variables and apply logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition (outcome) variable is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find predictors of this outcome variable. Methods The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to standard Poisson regression and negative binomial regression) is found to be justified for the outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions The consistency of our findings in light of many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on the significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
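The under-dispersion that motivates the GPR model is easy to check before model selection; a hedged R sketch (hypothetical data frame and variable names; generalized Poisson families themselves are available in packages such as VGAM, which we name as an option rather than as the authors' software):

```r
library(AER)  # provides dispersiontest() for Poisson GLMs
# kids: hypothetical data frame with n_malnourished = count of malnourished
# under-five children per family and the predictors named in the abstract.
pois_fit <- glm(n_malnourished ~ mother_edu + wealth_index + sanitation,
                family = poisson, data = kids)
# Cameron-Trivedi dispersion test; alternative = "less" targets
# under-dispersion (variance < mean), the property reported above.
dispersiontest(pois_fit, alternative = "less")
```

A significant result argues against the standard Poisson model and for one that accommodates under-dispersion, such as generalized Poisson regression.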
Sinclair, J C; Thorlund, K; Walter, S D
2013-01-01
In a study conducted in 1966-1969, longitudinal measurements were made of the metabolic rate in growing infants. Statistical methods for analyzing longitudinal data were not readily accessible at that time. To measure minimal rates of oxygen consumption (V·O2, ml/min) in growing infants during the first postnatal weeks and to determine the relationships between postnatal increases in V·O2, body size and postnatal age. We studied 61 infants of any birth weight or gestational age, including 19 of very low birth weight. The infants, nursed in incubators, were clinically well and without need of oxygen supplementation or respiratory assistance. Serial measures of V·O2 using a closed-circuit method were obtained at approximately weekly intervals. V·O2 was measured under thermoneutral conditions with the infant asleep or resting quietly. Data were analyzed using mixed-effects models. During early postnatal growth, V·O2 rises as surface area (m²)^1.94 (standard error, SE 0.054) or body weight (kg)^1.24 (SE 0.033). Multivariate analyses show statistically significant effects of both size and age. Reference intervals (RIs) for V·O2 at fixed values of body weight and postnatal age are presented. As V·O2 rises with increasing size and age, there is an increase in the skin-operative environmental temperature gradient (T skin-op) required for heat loss. Required T skin-op can be predicted from surface area and heat loss (heat production minus heat storage). Generation of RIs for minimal rates of V·O2 in growing infants from the 1960s was enabled by application of mixed-effects statistical models for the analysis of longitudinal data. Results apply to the pre-caffeine era of neonatal care. Copyright © 2013 S. Karger AG, Basel.
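The analysis described, a power-law relation between V·O2 and body size with repeated measures per infant, corresponds to a log-log mixed-effects model with a random intercept; a hedged R sketch (invented data frame and variable names, not the authors' code):

```r
library(lme4)
# vo2_df: hypothetical longitudinal data frame, one row per measurement,
# with vo2 (ml/min), weight (kg), age_days, and an infant identifier.
fit <- lmer(log(vo2) ~ log(weight) + age_days + (1 | infant), data = vo2_df)
summary(fit)
# The fixed-effect coefficient on log(weight) estimates the allometric
# exponent (reported above as 1.24, SE 0.033, for body weight).
```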
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of those results. © 2015 Wiley Periodicals, Inc.
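Correcting for well-position effects can be sketched as a simple fixed-effect adjustment in R (hypothetical data frame and variable names; the authors' exact model is not specified here):

```r
# qpcr: hypothetical data frame with ts_ratio = telomere/single-copy-gene
# ratio and well = thermocycler well position (a factor).
adj_fit <- lm(log(ts_ratio) ~ well, data = qpcr)
qpcr$tl_adj <- resid(adj_fit) + mean(log(qpcr$ts_ratio))  # adjusted TL

# 'Standard error in percent' for a sample's replicate measurements,
# taken here simply as 100 * SE / mean (our reading of the metric):
se_pct <- function(x) 100 * (sd(x) / sqrt(length(x))) / mean(x)
```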
Are Manitoba dentists aware of the recommendation for a first visit to the dentist by age 1 year?
Stijacic, Tijana; Schroth, Robert J; Lawrence, Herenia P
2008-12-01
The Canadian Dental Association (CDA) and the American Academy of Pediatric Dentistry (AAPD) recommend that children visit the dentist by 12 months of age. To report on how Manitoba's general dental practitioners and pediatric dentists manage oral health in early childhood. Mailed surveys that used the modified survey methods of Dillman were sent to 390 Manitoban general dental practitioners and pediatric dentists. The sampling frame was the Manitoba Dental Association's Membership Registry, but only those dentists who consented to the release of their mailing information were contacted. Survey data were analyzed with Number Cruncher Statistical Software (NCSS 2007). Descriptive statistics, bivariate analyses and multiple regression analyses were done. A p value of ≤ 0.05 was considered statistically significant. A total of 292 (74.9%) of the 390 practitioners responded, of whom 85.1% met the eligibility criteria and 84.6% were graduates of the faculty of dentistry, University of Manitoba. Overall, infants and preschoolers constituted < 10% of patients in the respondents' practices. Slightly more than half (58.3%, 144/247) of participants were aware of professional organizations' recommendation about the timing of children's first visit to the dentist; 52.2% (130/249) were unaware of the existence of a standardized case definition for ECC; and 32.3% (80/248) knew that ECC was defined as the presence of at least 1 primary tooth affected by caries in children < 6 years of age. On average, these participating dentists from Manitoba thought children should visit the dentist by 2 years of age. Although early visits to the dentist are now endorsed by CDA and AAPD, a significant number of dentists in Manitoba are still unaware of the recommendation that children should first visit the dentist by 12 months of age.
Zheng, Jie; Rodriguez, Santiago; Laurin, Charles; Baird, Denis; Trela-Larsen, Lea; Erzurumluoglu, Mesut A; Zheng, Yi; White, Jon; Giambartolomei, Claudia; Zabaneh, Delilah; Morris, Richard; Kumari, Meena; Casas, Juan P; Hingorani, Aroon D; Evans, David M; Gaunt, Tom R; Day, Ian N M
2017-01-01
Fine mapping is a widely used approach for identifying the causal variant(s) at disease-associated loci. Standard methods (e.g. multiple regression) require individual level genotypes. Recent fine mapping methods using summary-level data require the pairwise correlation coefficients (r²) of the variants. However, haplotypes, rather than pairwise r², are the true biological representation of linkage disequilibrium (LD) among multiple loci. In this article, we present an empirical iterative method, HAPlotype Regional Association analysis Program (HAPRAP), that enables fine mapping using summary statistics and haplotype information from an individual-level reference panel. Simulations with individual-level genotypes show that the results of HAPRAP and multiple regression are highly consistent. In simulation with summary-level data, we demonstrate that HAPRAP is less sensitive to poor LD estimates. In a parametric simulation using Genetic Investigation of ANthropometric Traits height data, HAPRAP performs well with a small training sample size (N < 2000) while other methods become suboptimal. Moreover, HAPRAP's performance is not affected substantially by single nucleotide polymorphisms (SNPs) with low minor allele frequencies. We applied the method to existing quantitative trait and binary outcome meta-analyses (human height, QTc interval and gallbladder disease); all previously reported association signals were replicated and two additional variants were independently associated with human height. Due to the growing availability of summary level data, the value of HAPRAP is likely to increase markedly for future analyses (e.g. functional prediction and identification of instruments for Mendelian randomization). The HAPRAP package and documentation are available at http://apps.biocompute.org.uk/haprap/. Contact: jie.zheng@bristol.ac.uk or tom.gaunt@bristol.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Test-retest reliability of 3D ultrasound measurements of the thoracic spine.
Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian
2012-05-01
To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receivers. The angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard errors of measurement were used for statistical analyses. Test-retest reliability was assessed over a 24-hour interval. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinically acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measurement of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
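A minimal sketch of the Bland-Altman 95% limits of agreement used above, computed as the mean difference ± 1.96 standard deviations of the paired differences; the angle values below are hypothetical, not the study's measurements.

    # Bland-Altman limits of agreement for two repeated measurements (day 1 vs day 2).
    import numpy as np

    day1 = np.array([44.0, 50.2, 38.5, 61.0, 47.3])   # hypothetical kyphosis angles, deg
    day2 = np.array([45.1, 49.0, 40.2, 59.5, 48.8])

    diff = day1 - day2
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f} deg, 95% LoA = [{bias - half_width:.2f}, {bias + half_width:.2f}] deg")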
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic T̃d that can be computed from the standard output of structural equation modeling (SEM) software by hand calculation was proposed in Satorra and Bentler (2001). The statistic T̃d is asymptotically equivalent to the scaled difference test statistic T̄d introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃d has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄d that avoids negative chi-square values.
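The Satorra-Bentler (2001) hand calculation referred to above can be written in a few lines: with ML chi-squares T0, T1 of nested models with degrees of freedom d0 > d1 and scaling corrections c0, c1, the scaled difference is (T0 - T1)/cd with cd = (d0·c0 - d1·c1)/(d0 - d1). The numeric inputs below are illustrative only, and the sketch also shows why cd, and hence the statistic, can go negative.

    # Satorra-Bentler (2001) scaled difference chi-square, by hand.
    def scaled_difference(t0, t1, d0, d1, c0, c1):
        c_d = (d0 * c0 - d1 * c1) / (d0 - d1)   # can be negative in small samples
        return (t0 - t1) / c_d, c_d

    td, cd = scaled_difference(t0=120.5, t1=95.2, d0=24, d1=20, c0=1.10, c1=1.05)
    print(td, cd)   # refer td to a chi-square distribution with d0 - d1 df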
The impact of injector-based contrast agent administration in time-resolved MRA.
Budjan, Johannes; Attenberger, Ulrike I; Schoenberg, Stefan O; Pietsch, Hubertus; Jost, Gregor
2018-05-01
Time-resolved contrast-enhanced MR angiography (4D-MRA), which allows the simultaneous visualization of the vasculature and blood-flow dynamics, is widely used in routine clinical practice. In this study, the impact of two different contrast agent injection methods on 4D-MRA was examined in a controlled, standardized setting in an animal model. Six anesthetized Goettingen minipigs underwent two identical 4D-MRA examinations at 1.5 T in a single session. The contrast agent (0.1 mmol/kg body weight gadobutrol, followed by 20 ml saline) was injected using either manual injection or an automated injection system. A quantitative comparison of vascular signal enhancement and quantitative renal perfusion analyses were performed. Analysis of signal enhancement revealed higher peak enhancements and shorter time-to-peak intervals for the automated injection. Significantly different bolus shapes were found: automated injection resulted in a compact first-pass bolus shape clearly separated from the recirculation, while manual injection resulted in a disrupted first-pass bolus with two peaks. In the quantitative perfusion analyses, statistically significant differences in plasma flow values were found between the injection methods. The results of both qualitative and quantitative 4D-MRA depend on the contrast agent injection method, with automated injection providing more defined bolus shapes and more standardized examination protocols. • Automated and manual contrast agent injection result in different bolus shapes in 4D-MRA. • Manual injection results in an undefined and interrupted bolus with two peaks. • Automated injection provides more defined bolus shapes. • Automated injection can lead to more standardized examination protocols.
Nitrate in drinking water and colorectal cancer risk: A nationwide population-based cohort study.
Schullehner, Jörg; Hansen, Birgitte; Thygesen, Malene; Pedersen, Carsten B; Sigsgaard, Torben
2018-07-01
Nitrate in drinking water may increase the risk of colorectal cancer due to endogenous transformation into carcinogenic N-nitroso compounds. Epidemiological studies are few and often challenged by their limited ability to estimate long-term exposure on a detailed individual level. We exploited population-based health register data, linked in time and space with longitudinal drinking water quality data, on an individual level to study the association between long-term drinking water nitrate exposure and colorectal cancer (CRC) risk. Individual nitrate exposure was calculated for 2.7 million adults based on drinking water quality analyses at public waterworks and private wells between 1978 and 2011. For the main analyses, 1.7 million individuals with the highest exposure assessment quality were included. Follow-up started at age 35. We identified 5,944 incident CRC cases during 23 million person-years at risk. We used Cox proportional hazards models to estimate hazard ratios (HRs) of nitrate exposure on the risk of CRC, colon and rectal cancer. Persons exposed to the highest level of drinking water nitrate had an HR of 1.16 (95% CI: 1.08-1.25) for CRC compared with persons exposed to the lowest level. We found statistically significant increased risks at drinking water levels above 3.87 mg/L, well below the current drinking water standard of 50 mg/L. Our results add to the existing evidence suggesting increased CRC risk at drinking water nitrate concentrations below the current drinking water standard. A discussion on the adequacy of the drinking water standard with regard to chronic effects is warranted. © 2018 UICC.
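A minimal sketch of a Cox proportional hazards fit of the kind described above, using the lifelines package; the tiny data frame, column names and exposure coding are hypothetical stand-ins, not the study's register data.

    # Hazard ratio for a binary exposure from a Cox proportional hazards model.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "followup_years": [10.2, 8.5, 23.0, 15.1, 4.7, 19.9],
        "crc_event":      [0,    1,   0,    1,    0,   1],   # 1 = incident cancer
        "high_nitrate":   [0,    1,   0,    1,    1,   0],   # hypothetical exposure
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="crc_event")
    cph.print_summary()   # the exp(coef) column is the hazard ratio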
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Cardiac arrest risk standardization using administrative data compared to registry data.
Grossestreuer, Anne V; Gaieski, David F; Donnino, Michael W; Nelson, Joshua I M; Mutter, Eric L; Carr, Brendan G; Abella, Benjamin S; Wiebe, Douglas J
2017-01-01
Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Two risk standardization logistic regression models were developed using 2453 patients treated from 2000-2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the "gold standard" with which to compare the administrative model, using metrics including comparison of areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876-0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895-0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799-0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788-0.831). All models were well-calibrated. There was no significant difference between the c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.
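A minimal sketch of comparing two models by their c-statistics, assuming simulated data in which the administrative predictors are noisier proxies of the registry predictors; the variable names, coefficients and noise model are illustrative assumptions, not the study's models.

    # Compare discrimination (c-statistic / AUC) of two logistic models.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 2000
    x_registry = rng.normal(size=(n, 3))                         # richer predictors
    x_admin = x_registry[:, :2] + rng.normal(0, 0.5, (n, 2))     # noisier proxies
    logit = x_registry @ np.array([1.0, 0.8, 0.5]) - 0.5
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    for name, x in [("registry", x_registry), ("administrative", x_admin)]:
        p = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]
        print(name, round(roc_auc_score(y, p), 3))               # c-statistic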
Tabrizi, Jafar Sadegh; Somi, Mohammad Hossein; Asghari, Sima; Asghari Jafarabadi, Mohammad; Gharibi, Farid; Alidoost, Saeideh
2015-01-01
Inflammatory bowel disease (IBD) is considered one of the chronic diseases requiring complicated treatment. This study aimed to assess the technical quality of care provided for patients with IBD. This cross-sectional study was conducted on 94 people with IBD using interviews and simple random sampling in the gastroenterology and endoscopy departments and clinic of Imam Reza Hospital and the Golgasht Clinic in Tabriz in 2012. The data collection tool was a researcher-designed questionnaire whose validity and reliability had been confirmed. To investigate the statistical relationship between the background variables and compliance with the standards, the Chi-square test was applied using SPSS 17 software. "Visit by the physician" and "diet advice by the dietitian" had the highest and the lowest levels of compliance with the standard, respectively, and "the care related to the disease exacerbation" and "the care provided by the other physicians" were not compatible with the standards in 80% of the cases. Data analyses also showed a significant relationship between participants' age, job, education and smoking status and compliance of some care with the relevant standards (P < 0.05). The results indicate a substantial gap between the care provided for people with IBD and the relevant standards. This highlights areas in need of improvement and requires the serious attention of the authorities.
1993-08-01
subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used
Guerrero, Erick G; Fenwick, Karissa; Kong, Yinfei
2017-11-14
Leadership style and specific organizational climates have emerged as critical mechanisms to implement targeted practices in organizations. Drawing from relevant theories, we propose that climate for implementation of cultural competence reflects how transformational leadership may enhance the organizational implementation of culturally responsive practices in health care organizations. Using multilevel data from 427 employees embedded in 112 addiction treatment programs collected in 2013, confirmatory factor analysis showed adequate fit statistics for our measure of climate for implementation of cultural competence (Cronbach's alpha = .88) and three outcomes: knowledge (Cronbach's alpha = .88), services (Cronbach's alpha = .86), and personnel (Cronbach's alpha = .86) practices. Results from multilevel path analyses indicate a positive relationship between employee perceptions of transformational leadership and climate for implementation of cultural competence (standardized indirect effect = .057, bootstrap p < .001). We also found a positive indirect effect between transformational leadership and each of the culturally competent practices: knowledge (standardized indirect effect = .006, bootstrap p = .004), services (standardized indirect effect = .019, bootstrap p < .001), and personnel (standardized indirect effect = .014, bootstrap p = .005). Findings contribute to implementation science. They build on leadership theory and offer evidence of the mediating role of climate in the implementation of cultural competence in addiction health service organizations.
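A minimal sketch of the Cronbach's alpha reliability coefficient reported above, computed from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the 427 x 5 item-response matrix below is simulated, not the study's survey data.

    # Cronbach's alpha for a multi-item scale.
    import numpy as np

    def cronbach_alpha(items):                     # items: respondents x k items
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(427, 1))
    responses = latent + rng.normal(0, 0.6, size=(427, 5))   # 5 correlated items
    print(round(cronbach_alpha(responses), 2))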
Reduction of Fasting Blood Glucose and Hemoglobin A1c Using Oral Aloe Vera: A Meta-Analysis.
Dick, William R; Fletcher, Emily A; Shah, Sachin A
2016-06-01
Diabetes mellitus is a global epidemic and one of the leading causes of morbidity and mortality. Additional medications that are novel, affordable, and efficacious are needed to treat this rampant disease. This meta-analysis was performed to ascertain the effectiveness of oral aloe vera consumption on the reduction of fasting blood glucose (FBG) and hemoglobin A1c (HbA1c). PubMed, CINAHL, Natural Medicines Comprehensive Database, and Natural Standard databases were searched. Studies of aloe vera's effect on FBG, HbA1c, homeostasis model assessment-estimated insulin resistance (HOMA-IR), fasting serum insulin, fructosamine, and oral glucose tolerance test (OGTT) in prediabetic and diabetic populations were examined. After data extraction, the parameters of FBG and HbA1c had appropriate data for meta-analyses. Extracted data were verified and then analyzed by StatsDirect Statistical Software. Reductions of FBG and HbA1c were reported as the weighted mean differences from baseline, calculated by a random-effects model with 95% confidence intervals. Subgroup analyses to determine clinical and statistical heterogeneity were also performed. Publication bias was assessed by using the Egger bias statistic. Nine studies were included in the FBG parameter (n = 283); 5 of these studies included HbA1c data (n = 89). Aloe vera decreased FBG by 46.6 mg/dL (p < 0.0001) and HbA1c by 1.05% (p = 0.004). Significant reductions of both endpoints were maintained in all subgroup analyses. Additionally, the data suggest that patients with an FBG ≥200 mg/dL may see a greater benefit. A mean FBG reduction of 109.9 mg/dL was observed in this population (p ≤ 0.0001). The Egger statistic showed publication bias with FBG but not with HbA1c (p = 0.010 and p = 0.602, respectively). These results support the use of oral aloe vera for significantly reducing FBG (46.6 mg/dL) and HbA1c (1.05%). Further clinical studies that are more robust and better controlled are warranted to further explore these findings.
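A minimal sketch of a DerSimonian-Laird random-effects pooled estimate, the model class used for the weighted mean differences above; the per-study mean differences and standard errors below are made up for illustration, not the meta-analysis data.

    # DerSimonian-Laird random-effects pooling of per-study mean differences.
    import numpy as np

    y = np.array([-55.0, -30.2, -62.1, -41.5, -48.0])   # hypothetical differences, mg/dL
    se = np.array([12.0, 9.5, 20.0, 11.0, 15.5])        # hypothetical standard errors

    w = 1 / se**2                                        # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)     # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_re = 1 / (se**2 + tau2)                            # random-effects weights
    pooled = np.sum(w_re * y) / w_re.sum()
    half = 1.96 / np.sqrt(w_re.sum())
    print(f"pooled = {pooled:.1f} (95% CI {pooled - half:.1f} to {pooled + half:.1f}), tau^2 = {tau2:.1f}")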
Jenkins, Martin
2016-01-01
Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084
Gee, Julianne; Naleway, Allison; Shui, Irene; Baggs, James; Yin, Ruihua; Li, Rong; Kulldorff, Martin; Lewis, Edwin; Fireman, Bruce; Daley, Matthew F; Klein, Nicola P; Weintraub, Eric S
2011-10-26
In 7 large managed care organizations (MCOs), we performed a post-licensure safety assessment of quadrivalent human papillomavirus vaccine (HPV4) among 9-26 year-old female vaccine recipients between August 2006 and October 2009. Sequential analyses were conducted weekly to detect associations between HPV4 exposure and pre-specified outcomes. The pre-specified outcomes identified by ICD-9 codes using computerized data at the participating MCOs included: Guillain-Barré syndrome (GBS), stroke, venous thromboembolism (VTE), appendicitis, seizures, syncope, allergic reactions, and anaphylaxis. For rare outcomes, historical background rates were used as the comparison group. For more common outcomes, a concurrent unexposed comparison group was utilized. A standardized review of medical records was conducted for all cases of GBS, VTE, and anaphylaxis. A total of 600,558 HPV4 doses were administered during the study period. We found no statistically significant increased risk for the outcomes studied. However, a non-statistically significant relative risk (RR) of 1.98 for VTE ICD-9 codes following HPV4 vaccination was detected among females aged 9-17 years. Medical record review of all 8 vaccinated potential VTE cases in this age group revealed that 5 met the standard case definition for VTE. All 5 confirmed cases had known risk factors for VTE (oral contraceptive use, coagulation disorders, smoking, obesity or prolonged hospitalization). In a study of over 600,000 HPV4 vaccine doses administered, no statistically significant increased risk for any of the pre-specified adverse events after vaccination was detected. Further study of a possible association with VTE following HPV4 vaccination is warranted. Published by Elsevier Ltd.
Lukoschek, V; Waycott, M; Keogh, J S
2008-07-01
Polymorphic microsatellites are widely considered more powerful for resolving population structure than mitochondrial DNA (mtDNA) markers, particularly for recently diverged lineages or geographically proximate populations. Weaker population subdivision for biparentally inherited nuclear markers than for maternally inherited mtDNA may signal male-biased dispersal but can also be attributed to marker-specific evolutionary characteristics and sampling properties. We discriminated between these competing explanations with a population genetic study on olive sea snakes, Aipysurus laevis. A previous mtDNA study revealed strong regional population structure for A. laevis around northern Australia, where Pleistocene sea-level fluctuations have influenced the genetic signatures of shallow-water marine species. Divergences among phylogroups dated to the Late Pleistocene, suggesting recent range expansions by previously isolated matrilines. Fine-scale population structure within regions was, however, poorly resolved for mtDNA. In order to improve estimates of fine-scale genetic divergence and to compare population structure between nuclear markers and mtDNA, 354 olive sea snakes (previously sequenced for mtDNA) were genotyped for five microsatellite loci. F statistics and Bayesian multilocus genotype clustering analyses recovered regional population structure similar to that found for mtDNA and, after standardizing microsatellite F statistics for high heterozygosities, regional divergence estimates were quantitatively congruent between marker classes. Over small spatial scales, however, microsatellites recovered almost no genetic structure and standardized F statistics were orders of magnitude smaller than for mtDNA. Three tests for male-biased dispersal were not significant, suggesting that recent demographic expansions to the typically large population sizes of A. laevis have prevented microsatellites from reaching mutation-drift equilibrium and local populations may still be diverging.
Part-time versus full-time occlusion therapy for treatment of amblyopia: A meta-analysis.
Yazdani, Negareh; Sadeghi, Ramin; Momeni-Moghaddam, Hamed; Zarifmahmoudi, Leili; Ehsaei, Asieh; Barrett, Brendan T
2017-06-01
To compare full-time occlusion (FTO) and part-time occlusion (PTO) therapy in the treatment of amblyopia, with the secondary aim of evaluating the minimum number of hours of part-time patching required for maximal effect from occlusion. A literature search was performed in PubMed, Scopus, Science Direct, Ovid, Web of Science and the Cochrane library. Methodological quality of the literature was evaluated according to the Oxford Centre for Evidence-Based Medicine and the modified Newcastle-Ottawa scale. Statistical analyses were performed using Comprehensive Meta-Analysis (version 2, Biostat Inc., USA). The present meta-analysis included six studies [three randomized controlled trials (RCTs) and three non-RCTs]. The pooled standardized difference in mean changes in visual acuity was 0.337 [lower and upper limits: -0.009, 0.683] higher in the FTO compared with the PTO group; however, this difference was not statistically significant (P = 0.056; Cochran's Q = 20.4, P = 0.001; I² = 75.49%). Egger's regression intercept was 5.46 (P = 0.04). The pooled standardized difference in means of visual acuity changes was 1.097 [lower and upper limits: 0.68, 1.513] higher in the FTO arm (P < 0.001), and 0.7 [lower and upper limits: 0.315, 1.085] higher in the PTO arm (P < 0.001), compared with PTO of less than two hours per day. This meta-analysis shows no statistically significant difference between PTO and FTO in the treatment of amblyopia. However, our results suggest that the minimum effective PTO duration to observe maximal improvement in visual acuity is six hours per day.
Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas
2015-12-01
The study evaluated whether the rate of renal function decline per year with age in adults varies based on two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3,946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected over up to 2,364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for the CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates differ based on the statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting information on the rate of renal function decline with aging because its estimation is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
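A minimal sketch contrasting the two analyses described above: a cross-sectional slope fitted to one record per subject versus a longitudinal random coefficient model (random intercept and slope per subject) fitted to all records; the simulated data, column names and effect sizes are illustrative assumptions, not the study's data set.

    # Cross-sectional OLS slope vs longitudinal random coefficient model.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    rows = []
    for subj in range(300):
        base, slope = rng.normal(100, 15), rng.normal(-0.95, 0.3)
        age0 = rng.uniform(30, 80)
        for visit in range(4):                           # repeated measures
            age = age0 + 2 * visit
            rows.append({"subject": subj, "age": age,
                         "crcl": base + slope * (age - 30) + rng.normal(0, 5)})
    df = pd.DataFrame(rows)

    cs = smf.ols("crcl ~ age", data=df.groupby("subject").first()).fit()
    lt = sm.MixedLM.from_formula("crcl ~ age", groups="subject",
                                 re_formula="~age", data=df).fit()
    print(cs.params["age"], lt.params["age"])            # CS vs LT decline per year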
Meyers, Alysha R; Pinkerton, Lynne E; Hein, Misty J
2013-09-01
To further evaluate the association between formaldehyde and leukemia, we extended follow-up through 2008 for a cohort mortality study of 11,043 US formaldehyde-exposed garment workers. We computed standardized mortality ratios and standardized rate ratios stratified by year of first exposure, exposure duration, and time since first exposure. Associations between exposure duration and rates of leukemia and myeloid leukemia were further examined using Poisson regression models. Compared to the US population, myeloid leukemia mortality was elevated but overall leukemia mortality was not. In internal analyses, overall leukemia mortality increased with increasing exposure duration and this trend was statistically significant. We continue to see limited evidence of an association between formaldehyde and leukemia. However, the extended follow-up did not strengthen previously observed associations. In addition to continued epidemiologic research, we recommend further research to evaluate the biological plausibility of a causal relation between formaldehyde and leukemia. Copyright © 2013 Wiley Periodicals, Inc.
Mobile phone-based clinical guidance for rural health providers in India.
Gautham, Meenakshi; Iyengar, M Sriram; Johnson, Craig W
2015-12-01
There are few tried and tested mobile technology applications to enhance and standardize the quality of health care by frontline rural health providers in low-resource settings. We developed a media-rich, mobile phone-based clinical guidance system for management of fevers, diarrhoeas and respiratory problems by rural health providers. Using a randomized control design, we field tested this application with 16 rural health providers and 128 patients at two rural/tribal sites in Tamil Nadu, Southern India. Protocol compliance for both groups, phone usability, acceptability and patient feedback for the experimental group were evaluated. Linear mixed-model analyses showed statistically significant improvements in protocol compliance in the experimental group. Usability and acceptability among patients and rural health providers were very high. Our results indicate that mobile phone-based, media-rich procedural guidance applications have significant potential for achieving consistently standardized quality of care by diverse frontline rural health providers, with patient acceptance. © The Author(s) 2014.
A global fit of the MSSM with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of direct and indirect dark matter searches, making prospects for discovery in the near future rather good.
Inherited defects in pedigree dogs. Part 1: disorders related to breed standards.
Asher, Lucy; Diesel, Gillian; Summers, Jennifer F; McGreevy, Paul D; Collins, Lisa M
2009-12-01
The United Kingdom pedigree-dog industry has faced criticism because certain aspects of dog conformation stipulated in the UK Kennel Club breed standards have a detrimental impact on dog welfare. A review of conformation-related disorders was carried out in the top 50 UK Kennel Club registered breeds using systematic searches of existing information. A novel index to score severity of disorders along a single scale was also developed and used to conduct statistical analyses to determine the factors affecting reported breed predisposition to defects. According to the literature searched, each of the top 50 breeds was found to have at least one aspect of its conformation predisposing it to a disorder; and 84 disorders were either directly or indirectly associated with conformation. The Miniature poodle, Bulldog, Pug and Basset hound had most associations with conformation-related disorders. Further research on prevalence and severity is required to assess the impact of different disorders on the welfare of affected breeds.
INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT
SAS is an EPA standard data and statistical analysis software package, while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability.
Power considerations for λ inflation factor in meta-analyses of genome-wide association studies.
Georgiopoulos, Georgios; Evangelou, Evangelos
2016-05-19
The genomic control (GC) approach is extensively used to effectively control false positive signals due to population stratification in genome-wide association studies (GWAS). However, GC affects the statistical power of GWAS. The loss of power depends on the magnitude of the inflation factor (λ) that is used for GC. We simulated meta-analyses of different GWAS. Minor allele frequency (MAF) ranged from 0·001 to 0·5 and λ was sampled from two scenarios: (i) a random scenario (empirically-derived distribution of real λ values) and (ii) a selected scenario based on modification of the simulation parameters. Adjustment for λ was considered under single correction (within-study corrected standard errors) and double correction (additionally λ-corrected summary estimate). MAF was a pivotal determinant of observed power. In the random λ scenario, double correction induced a symmetric power reduction in comparison to single correction. For MAF 1·2 and MAF >5%. Our results provide a quick but detailed index for power considerations of future meta-analyses of GWAS that enables a more flexible design from early steps based on the number of studies accumulated in different groups and the λ values observed in the single studies.
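A minimal sketch of genomic control as commonly implemented: λ is estimated as the median observed chi-square divided by the median of the null chi-square distribution with 1 df (about 0.4549), and single correction then divides the test statistics (equivalently, inflates the standard errors); the simulated z-scores are illustrative.

    # Genomic control: estimate lambda and apply single correction.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    z = rng.normal(0, 1.05, 100_000)                  # slightly inflated z-scores
    lam = np.median(z**2) / chi2.ppf(0.5, df=1)       # chi2.ppf(0.5, 1) ~ 0.4549
    lam = max(lam, 1.0)                               # never deflate
    z_corrected = z / np.sqrt(lam)                    # same as multiplying SEs by sqrt(lam)
    print(round(lam, 3))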
Ntolka, Eleni; Papadatou-Pastou, Marietta
2018-01-01
The relationship between intelligence and handedness remains a matter of debate. The present study is a systematic review of 36 studies (totaling 66,108 individuals), which have measured full IQ scores in different handedness groups. Eighteen of those studies were further included in three sets of meta-analyses (totaling 20,442 individuals), which investigated differences in standardized mean IQ scores in (i) left-handers, (ii) non-right-handers, and (iii) mixed-handers compared to right-handers. The bulk of the studies included in the systematic review reported no differences in IQ scores between left- and right-handers. In the meta-analyses, statistically significant differences in mean IQ scores were detected between right-handers and left-handers, but were marginal in magnitude (d=-0.07); the data sets were found to be homogeneous. Significance was lost when the largest study was excluded. No differences in mean IQ scores were found between right-handers and non-right-handers as well as between right-handers and mixed-handers. No sex differences were found. Overall, the intelligence differences between handedness groups in the general population are negligible. Copyright © 2017 Elsevier Ltd. All rights reserved.
Graf, Joachim; Smolka, Robert; Simoes, Elisabeth; Zipfel, Stephan; Junne, Florian; Holderried, Friederike; Wosnik, Annette; Doherty, Anne M; Menzel, Karina; Herrmann-Werner, Anne
2017-05-02
Communication skills are essential in a patient-centred health service and therefore in medical teaching. Although significant differences in the communication behaviour of male and female students are known, gender differences in the performance of students are still under-reported. The aim of this study was to analyse gender differences in the communication skills of medical students in the context of an OSCE exam (OSCE = Objective Structured Clinical Examination). In a longitudinal trend study based on seven semester cohorts, we analysed whether there are gender differences in medical students' communication skills. The students (self-perception) and standardized patients (SP) (external perception) were asked to rate the communication skills using uniform questionnaires. Statistical analysis was performed using frequency analyses and t-tests in SPSS 21. Across all ratings in both the self- and the external perception, there was a significant gender difference in favour of female students, who performed better in the dimensions of empathy, structure, verbal expression and non-verbal expression. The results of male students deteriorated across all dimensions in the external perception between 2011 and 2014. It is important to consider whether gender-specific teaching should be developed in light of the reported differences between female and male students.
Vachha, B; Adams, R
2009-09-01
This study examines the effect of family environment on language performance in children with myelomeningocele compared with age- and education-matched controls selected from the same geographic region. Seventy-five monolingual (English) speaking children with myelomeningocele [males: 30; ages: 7-16 years; mean age: 10 years 1 month, standard deviation (SD) 2 years 7 months] and 35 typically developing children (males: 16; ages 7-16 years; mean age: 10 years 9 months, SD 2 years 6 months) participated in the study. The Comprehensive Assessment of Spoken Language (CASL) and the Wechsler tests of intelligence were administered individually to all participants. The CASL measures four subsystems: lexical, syntactic, supralinguistic and pragmatic. Parents completed the Family Environment Scale (FES) questionnaire and provided background demographic information. Standard independent sample t-tests, chi-squared and Fisher's exact tests were used to make simple comparisons between groups for age, socio-economic status, gender and ethnicity. Spearman correlation coefficients were used to detect associations between language and FES data. Group differences for the language and FES scores were analysed with a multivariate analysis of variance at a P-value of 0.05. For the myelomeningocele group, both Spearman correlation and partial correlation analyses revealed statistically significant positive relationships for the FES 'intellectual-cultural orientation' (ICO) variable and language performance in all subsystems (P < 0.01). For controls, positive associations were seen between: (1) ICO and lexical/semantic and syntactic subsystems; and (2) FES 'independence' and lexical/semantic and supralinguistic tasks. The relationship between language performance and family environment appears statistically and intuitively sound. As in our previous study, the positive link between family focus on intellectually and culturally enhancing activities and language performance among children with myelomeningocele and shunted hydrocephalus remains robust. Knowledge of this relationship should assist parents and professionals in supporting language development through activities within the natural learning environment.
Injury Severity Score coding: Data analyst v. emerging m-health technology.
Spence, R T; Zargaran, E; Hameed, M; Fong, D; Shangguan, E; Martinez, R; Navsaria, P; Nicol, A
2016-09-08
The cost of Abbreviated Injury Scale (AIS) coding has limited its utility in areas of the world with the highest incidence of trauma. We hypothesised that emerging mobile health (m-health) technology could offer a cost-effective alternative to the current gold-standard AIS mechanism in a high-volume trauma centre in South Africa. A prospectively collected sample of consecutive patients admitted following a traumatic injury that required an operation during a 1-month period was selected for the study. AISs and Injury Severity Scores (ISSs) were generated by clinician-entered data using an m-health application (ISS eTHR) as well as by a team of AIS coders at Vancouver General Hospital, Canada (ISS VGH). Rater agreements for ISSs were analysed using Bland-Altman plots with 95% limits of agreement (LoA) and kappa statistics of the ISSs grouped into ordinal categories. Reliability was analysed using a two-way mixed-model intraclass correlation coefficient (ICC). Calibration and discrimination of univariate logistic regression models built to predict in-hospital complications using ISSs coded by the two methods were also compared. Fifty-seven patients were managed operatively during the study period. The mean age of the cohort was 27.2 years (range 14-62), and 96.3% were male. The mechanism of injury was penetrating in 93.4% of cases, of which 52.8% were gunshot injuries. The LoA fell within -8.6 to 9.4. The mean ISS difference was 0.4 (95% CI -0.8 to 1.6). The kappa statistic was 0.53. The ICC of the individual ISS was 0.88 (95% CI 0.81-0.93) and of the categorical ISS was 0.81 (95% CI 0.68-0.87). Model performance to predict in-hospital complications using either the ISS eTHR or the ISS VGH was equivalent. ISSs calculated by the eTHR and gold-standard coding were comparable. Emerging m-health technology provides a cost-effective alternative for injury severity scoring.
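A minimal sketch of a kappa statistic for ISSs grouped into ordinal categories, as used above; the category labels and the paired codings below are hypothetical, not the study's ratings.

    # Agreement between two coders on categorical injury severity.
    from sklearn.metrics import cohen_kappa_score

    iss_ethr = ["minor", "moderate", "severe", "severe", "moderate", "minor"]
    iss_vgh  = ["minor", "moderate", "moderate", "severe", "moderate", "minor"]
    print(cohen_kappa_score(iss_ethr, iss_vgh))   # unweighted kappa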
Gallistel, C. R.; Balci, Fuat; Freestone, David; Kheifets, Aaron; King, Adam
2014-01-01
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer. PMID:24637442
Missing Data in the Field of Otorhinolaryngology and Head & Neck Surgery: Need for Improvement.
Netten, Anouk P; Dekker, Friedo W; Rieffe, Carolien; Soede, Wim; Briaire, Jeroen J; Frijns, Johan H M
Clinical studies often face missing data. Data can be missing for various reasons: for example, patients moved, certain measurements are only administered in high-risk groups, or patients are unable to attend clinic because of their health status. There are various ways to handle these missing data (e.g., complete case analysis, mean substitution). Each of these techniques potentially influences both the analyses and the results of a study. The first aim of this structured review was to analyze how often researchers in the field of otorhinolaryngology/head & neck surgery report missing data. The second aim was to systematically describe how researchers handle missing data in their analyses. The third aim was to provide a solution on how to deal with missing data by means of the multiple imputation technique. With this review, we aim to contribute to a higher quality of reporting in otorhinolaryngology research. Clinical studies among the 398 most recently published research articles in three major journals in the field of otorhinolaryngology/head & neck surgery were analyzed based on how researchers reported and handled missing data. Of the 316 clinical studies, 85 reported some form of missing data. Of those 85, only a small number (12 studies, 3.8%) actively handled the missingness in their data. The majority of researchers exclude incomplete cases, which results in biased outcomes and a drop in statistical power. Within otorhinolaryngology research, missing data are largely ignored and underreported, and consequently handled inadequately. This has a major impact on the results and conclusions drawn from this research. Based on the outcomes of this review, we provide solutions on how to deal with missing data. To illustrate, we clarify the use of multiple imputation techniques, which recently became widely available in standard statistical programs.
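A minimal sketch of multiple imputation using scikit-learn's chained-equations imputer, one of the techniques the review advocates; the data, the missingness mechanism and the choice of five imputations are illustrative assumptions, and pooling of the per-imputation analyses via Rubin's rules is left out for brevity.

    # Multiple imputation: several completed data sets from one incomplete matrix.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(6)
    x = rng.normal(size=(200, 3))
    x[:, 2] += x[:, 0]                        # correlated third variable
    mask = rng.random(x.shape) < 0.15         # 15% values missing at random
    x_obs = np.where(mask, np.nan, x)

    imputations = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(x_obs)
        for m in range(5)                     # m = 5 imputed data sets
    ]
    print(len(imputations), imputations[0].shape)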
Augustin, J; Kis, A; Sorbe, C; Schäfer, I; Augustin, M
2018-04-06
Skin cancer, the most common cancer in Germany, has shown increasing incidence in the past decade. Since it is mostly caused by excessive UV exposure, skin cancer is largely related to behaviour. So far, the impact of regional and sociodemographic factors on the development of skin cancer in Germany is unclear. The current study aimed to investigate the association of potential predictive factors with the prevalence of skin cancers in Germany. Nationwide ambulatory care claims data from persons insured in statutory health insurances (SHI) with malignant melanoma (MM, ICD-10 C43) and non-melanoma skin cancer (NMSC, ICD-10 C44) in the years 2009-2015 were analysed. In addition, sociodemographic population data and satellite-based UV and solar radiation data were associated. Descriptive as well as multivariate (spatial) statistical analyses (for example, Bayes smoothing) were conducted at county level. Data from 70.1 million insured persons were analysed. Age-standardized prevalences per 100,000 SHI-insured persons for MM and NMSC were 284.7 and 1126.9 in 2009 and 378.5 and 1708.2 in 2015. Marked regional variations were observed, with prevalences between 32.9% and 51.6%. Multivariate analyses show statistically significant positive correlations between higher income and education and MM/NMSC prevalence. The prevalence of MM and NMSC in Germany shows spatio-temporal dynamics. Our results show that regional UV radiation, sunshine hours and sociodemographic factors have a significant impact on skin cancer prevalence in Germany. Individual behaviour obviously is a major determinant, which should be the subject of preventive interventions. This article is protected by copyright. All rights reserved.
Inferential Statistics in "Language Teaching Research": A Review and Ways Forward
ERIC Educational Resources Information Center
Lindstromberg, Seth
2016-01-01
This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The soundness of the framework is verified by establishing a standard reference scale for a texture attribute (hardness) using well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
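A minimal sketch of the selection step described above, assuming reference candidates are shortlisted by clustering PCA scores of instrumental texture profiles and taking the sample nearest each cluster centre; the feature matrix, its dimensions and the choice of nine clusters are illustrative assumptions, not the paper's procedure in detail.

    # Shortlist reference samples: PCA, then k-means, then nearest-to-centre picks.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    profiles = rng.normal(size=(100, 6))        # 100 foods x 6 TPA features (simulated)
    scores = PCA(n_components=2).fit_transform(profiles)

    km = KMeans(n_clusters=9, n_init=10, random_state=0).fit(scores)
    nearest = [
        int(np.argmin(np.linalg.norm(scores - c, axis=1)))
        for c in km.cluster_centers_
    ]
    print(sorted(nearest))                      # indices of 9 candidate reference foods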
10 CFR Appendix II to Part 504 - Fuel Price Computation
Code of Federal Regulations, 2010 CFR
2010-01-01
... 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters... inflation indices must follow standard statistical procedures and must be fully documented within the... the weighted average fuel price must follow standard statistical procedures and be fully documented...
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
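A minimal sketch checking the equal-variance binormal relationship by simulation: if X ~ N(0, s²) in controls and X ~ N(b·s², s²) in cases (so the log-odds ratio per unit of X is b), the c-statistic is Phi(b·s/sqrt(2)), the product of SD and log-odds ratio described above; the values of s and b are illustrative.

    # Closed-form binormal c-statistic vs empirical AUC.
    import numpy as np
    from scipy.stats import norm
    from sklearn.metrics import roc_auc_score

    s, b = 1.5, 0.8                                   # SD and log-odds ratio (assumed)
    rng = np.random.default_rng(7)
    controls = rng.normal(0.0, s, 50_000)
    cases = rng.normal(b * s**2, s, 50_000)           # mean shift = b * s^2

    auc_emp = roc_auc_score(
        np.r_[np.zeros(50_000), np.ones(50_000)], np.r_[controls, cases])
    auc_theory = norm.cdf(b * s / np.sqrt(2))
    print(round(auc_emp, 4), round(auc_theory, 4))    # should nearly coincide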
NASA Astrophysics Data System (ADS)
García-Resúa, Carlos; Pena-Verdeal, Hugo; Miñones, Mercedes; Gilino, Jorge; Giraldez, Maria J.; Yebra-Pimentel, Eva
2013-11-01
High tear fluid osmolarity is a feature common to all types of dry eye. This study was designed to establish the accuracy of two osmometers, a freezing point depression osmometer (Fiske 110) and an electrical impedance osmometer (TearLab™), using standard samples. To assess the accuracy of the measurements provided by the two instruments, we used 5 solutions of known osmolarity/osmolality: 50, 290 and 850 mOsm/kg, and 292 and 338 mOsm/L. The Fiske 110 is designed for samples of 20 μl, so measurements were made on 1:9, 1:4, 1:1 and 1:0 dilutions of the standards. The TearLab is intended for use on the tear film and requires a sample of only 0.05 μl, so no dilutions were employed. Due to the smaller measurement range of the TearLab, the 50 and 850 mOsm/kg standards were not included. Twenty measurements per standard sample were taken, and differences from the reference value were analysed by one-sample t-test. The Fiske 110 osmolarity measurements differed statistically from standard values except those recorded for the 290 mOsm/kg standard diluted 1:1 (p = 0.309), the 292 mOsm/L H2O sample (1:1) and the 338 mOsm/L H2O standard (1:4). The more diluted the sample, the higher the error rate. For the TearLab measurements, a one-sample t-test indicated that all determinations differed from the theoretical values (p = 0.001), though the differences were always small. For undiluted solutions, the Fiske 110 shows performance similar to the TearLab. However, for the diluted standards, the Fiske 110 worsens.
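A minimal sketch of the one-sample t-test used above to compare repeated instrument readings against a known standard; the readings below are hypothetical, not the study's measurements.

    # One-sample t-test of repeated readings against a known reference value.
    import numpy as np
    from scipy.stats import ttest_1samp

    readings = np.array([292.8, 294.1, 293.5, 295.0, 292.2, 294.6,
                         293.9, 294.4, 292.7, 293.8])   # mOsm/L (hypothetical)
    t, p = ttest_1samp(readings, popmean=292.0)
    print(f"t = {t:.2f}, p = {p:.4f}")   # small p: readings differ from the standard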
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
Ferro, Ana; Morais, Samantha; Rota, Matteo; Pelucchi, Claudio; Bertuccio, Paola; Bonzi, Rossella; Galeone, Carlotta; Zhang, Zuo-Feng; Matsuo, Keitaro; Ito, Hidemi; Hu, Jinfu; Johnson, Kenneth C; Yu, Guo-Pei; Palli, Domenico; Ferraroni, Monica; Muscat, Joshua; Malekzadeh, Reza; Ye, Weimin; Song, Huan; Zaridze, David; Maximovitch, Dmitry; Fernández de Larrea, Nerea; Kogevinas, Manolis; Vioque, Jesus; Navarrete-Muñoz, Eva M; Pakseresht, Mohammadreza; Pourfarzi, Farhad; Wolk, Alicja; Orsini, Nicola; Bellavia, Andrea; Håkansson, Niclas; Mu, Lina; Pastorino, Roberta; Kurtz, Robert C; Derakhshan, Mohammad H; Lagiou, Areti; Lagiou, Pagona; Boffetta, Paolo; Boccia, Stefania; Negri, Eva; La Vecchia, Carlo; Peleteiro, Bárbara; Lunet, Nuno
2018-05-01
Individual participant data pooled analyses allow access to non-published data and statistical reanalyses based on more homogeneous criteria than meta-analyses based on systematic reviews. We quantified the impact of publication-related biases and of heterogeneity in data analysis and presentation on summary estimates of the association between alcohol drinking and gastric cancer. We compared estimates obtained from conventional meta-analyses, using only data available in published reports from studies that take part in the Stomach Cancer Pooling (StoP) Project, with individual participant data pooled analyses including the same studies. A total of 22 studies from the StoP Project assessed the relation between alcohol intake and gastric cancer; 19 had specific data for levels of consumption and 18 according to cancer location; published reports addressing these associations were available from 18, 5 and 5 studies, respectively. The summary odds ratio (OR [95% CI]) estimate obtained with published data for drinkers vs. non-drinkers was 10% higher than the one obtained with individual StoP data [18 vs. 22 studies: 1.21 (1.07-1.36) vs. 1.10 (0.99-1.23)] and more heterogeneous (I2: 63.6% vs. 54.4%). In general, published data yielded less precise summary estimates (standard errors up to 2.6 times higher). Funnel plot analysis suggested publication bias. Meta-analyses of the association between alcohol drinking and gastric cancer tended to overestimate the magnitude of the effects, possibly due to publication bias. Additionally, individual participant data pooled analyses yielded more precise estimates for different levels of exposure and cancer subtypes. Copyright © 2018 Elsevier Ltd. All rights reserved.
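The heterogeneity comparison above rests on Cochran's Q and the derived I2 statistic. A minimal, generic computation (with hypothetical study estimates, not the StoP data) looks as follows:

```python
# A generic fixed-effect meta-analysis sketch showing how Cochran's Q and I2
# quantify heterogeneity; the study log-ORs and standard errors are invented.
import numpy as np

log_or = np.array([0.15, 0.25, 0.05, 0.40, 0.10])   # hypothetical study log-ORs
se = np.array([0.10, 0.12, 0.08, 0.15, 0.09])       # their standard errors

w = 1.0 / se**2                                     # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)             # fixed-effect estimate
q = np.sum(w * (log_or - pooled) ** 2)              # Cochran's Q
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100.0                 # I2 as a percentage

print(f"pooled log-OR = {pooled:.3f}, Q = {q:.2f}, I2 = {i2:.1f}%")
```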
NASA Astrophysics Data System (ADS)
Sadowski, T.; Kneć, M.
2016-04-01
Fatigue tests have been conducted for more than two hundred years. Despite this long history, because fatigue phenomena are very complex, assessing the fatigue response of standard materials or composites still requires a long time. A quite precise way to estimate fatigue parameters is to test at least 30 standardized specimens of the analysed material, followed by statistical post-processing. In the case of structural elements such as hybrid joints (Figure 1), the situation is much more complex, as more factors influence the fatigue load capacity owing to the much more complicated structure of the joint in comparison with a standard material specimen, i.e. the occurrence of welded hot spots or rivets, adhesive layers, local notches creating stress concentrations, etc. In order to shorten testing time, some rapid methods are known: Locati's method [1] - step-by-step load increments up to failure; Prot's method [2] - constant increase of the load amplitude up to failure; Lehr's method [2] - seeking the point during regular fatigue loading at which the increase of temperature or strain becomes non-linear. The present article proposes a new method of fatigue response assessment - a combination of Locati's and Lehr's methods.
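As a rough illustration of the Locati ingredient of such a rapid method, the sketch below applies Miner's damage rule to a step-loading programme and scans candidate fatigue limits for a damage sum of one. All numbers and the Basquin-curve parameters are hypothetical, and the paper's combined Locati-Lehr procedure is considerably richer.

```python
# An illustrative Miner-rule evaluation of a Locati step-loading test.
import numpy as np

# Step-loading programme: (stress amplitude in MPa, cycles endured at that step)
steps = [(180, 1e5), (200, 1e5), (220, 1e5), (240, 4e4)]   # failure in last step

def damage(fatigue_limit, k=5.0, c=1e17):
    """Miner damage sum for a candidate Basquin curve N = c * S**(-k),
    counting only steps above the assumed fatigue limit."""
    d = 0.0
    for s, n in steps:
        if s > fatigue_limit:
            d += n / (c * s ** (-k))
        # cycles below the candidate limit are assumed non-damaging
    return d

# Scan candidate fatigue limits; the estimate is where damage crosses 1.
limits = np.linspace(150, 240, 181)
sums = np.array([damage(fl) for fl in limits])
estimate = limits[np.argmin(np.abs(sums - 1.0))]
print(f"estimated fatigue limit ~ {estimate:.0f} MPa")
```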
Integrating Dynamic Data and Sensors with Semantic 3D City Models in the Context of Smart Cities
NASA Astrophysics Data System (ADS)
Chaturvedi, K.; Kolbe, T. H.
2016-10-01
Smart cities provide effective integration of human, physical and digital systems operating in the built environment. Advancements in city and landscape models, sensor web technologies, and simulation methods play a significant role in city analyses, improving the quality of life of citizens and the governance of cities. Semantic 3D city models can provide substantial benefits and can become a central information backbone for smart city infrastructures. However, current-generation semantic 3D city models are static in nature and do not support dynamic properties and sensor observations. In this paper, we propose a new concept called Dynamizer that allows highly dynamic data to be represented and provides a method for injecting dynamic variations of city object properties into the static representation. The approach also provides the direct capability to model complex patterns based on statistics and general rules, as well as real-time sensor observations. The concept is implemented as an Application Domain Extension for the CityGML standard. However, it could also be applied to other GML-based application schemas, including the European INSPIRE data themes and national standards for topography and cadasters such as the British Ordnance Survey Mastermap or the German cadaster standard ALKIS.
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. In particular, there is increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting attention to the case of two diagnostic tests, the parameters of interest in these meta-analyses are the differences in sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests, while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model with an example in which two screening methods for the diagnosis of type 2 diabetes are compared.
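One plausible formalisation of such a quadrivariate generalized linear mixed model (notation assumed here, not taken verbatim from the paper) is:

```latex
% For study i and diagnostic test t = 1, 2 (amsmath assumed):
\begin{align*}
  \operatorname{logit}\bigl(\mathrm{Se}_{it}\bigr) &= \mu^{\mathrm{Se}}_{t} + u^{\mathrm{Se}}_{it},
  &
  \operatorname{logit}\bigl(\mathrm{Sp}_{it}\bigr) &= \mu^{\mathrm{Sp}}_{t} + u^{\mathrm{Sp}}_{it},
  \\
  \bigl(u^{\mathrm{Se}}_{i1},\, u^{\mathrm{Sp}}_{i1},\, u^{\mathrm{Se}}_{i2},\, u^{\mathrm{Sp}}_{i2}\bigr)^{\top}
  &\sim N_{4}(\mathbf{0}, \Sigma),
\end{align*}
% where the 4x4 covariance matrix \Sigma carries the within-study correlations
% across tests, and the targets are the differences
% \operatorname{expit}(\mu^{\mathrm{Se}}_{2}) - \operatorname{expit}(\mu^{\mathrm{Se}}_{1})
% and the analogous quantity for specificity.
```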
Assessing the spatial pattern of iron in well water from a small central Florida community.
Hudgins, Jason; Lambert, Nicholas; Duranceau, Steven; Russell Butler, J
2018-02-01
Iron is one of the most common elements in the Earth's crust, and it is accordingly a common constituent of drinking water supplies. Residents of Bithlo, an unincorporated community in east-central Florida, have observed that their drinking water tastes like metal and stains clothing and teeth. An evaluation of water samples collected from over 200 private drinking water wells revealed iron concentrations that exceeded the US Environmental Protection Agency's (EPA's) secondary standard of 0.3 mg/L. Households with and without point-of-entry treatment were found to have over three times (0.92 mg/L) and over ten times (3.86 mg/L) more iron than the EPA's secondary standard, respectively. The human health-based threshold of 4.2 mg/L established by the Centers for Disease Control and Prevention was exceeded in 38.6% of untreated residences. Community-wide statistical and spatial water-quality trends were developed by combining the collected well water quality data with historically available water quality reports. Spatial analyses revealed that greater than 99% of the Bithlo community's private household supplies would exceed the EPA's secondary drinking water standard.
Langtimm, Catherine A.; Kendall, William L.; Beck, Cathy A.; Kochman, Howard I.; Teague, Amy L.; Meigs-Friend, Gaia; Peñaloza, Claudia L.
2016-11-30
This report provides supporting details and evidence for the rationale, validity and efficacy of a new mark-recapture model, the Barker Robust Design, to estimate regional manatee survival rates used to parameterize several components of the 2012 version of the Manatee Core Biological Model (CBM) and Threats Analysis (TA). The CBM and TA provide scientific analyses of the population viability of the Florida manatee subspecies (Trichechus manatus latirostris) for the U.S. Fish and Wildlife Service's 5-year reviews of the status of the species as listed under the Endangered Species Act. The model evaluation is presented in a standardized reporting framework, modified from the TRACE (TRAnsparent and Comprehensive model Evaluation) protocol first introduced for environmental threat analyses. We identify this new protocol as TRACE-MANATEE SURVIVAL and this model evaluation specifically as TRACE-MANATEE SURVIVAL, Barker RD version 1. The longer-term objectives of the manatee standard reporting format are to (1) communicate to resource managers consistent evaluation information over sequential modeling efforts; (2) build understanding and expertise on the structure and function of the models; (3) document changes in model structures and applications in response to evolving management objectives, new biological and ecological knowledge, and new statistical advances; and (4) provide greater transparency for management and research review.
Pappas, Derek J; Marin, Wesley; Hollenbach, Jill A; Mack, Steven J
2016-03-01
Bridging ImmunoGenomic Data-Analysis Workflow Gaps (BIGDAWG) is an integrated data-analysis pipeline designed for the standardized analysis of highly-polymorphic genetic data, specifically for the HLA and KIR genetic systems. Most modern genetic analysis programs are designed for the analysis of single nucleotide polymorphisms, but the highly polymorphic nature of HLA and KIR data require specialized methods of data analysis. BIGDAWG performs case-control data analyses of highly polymorphic genotype data characteristic of the HLA and KIR loci. BIGDAWG performs tests for Hardy-Weinberg equilibrium, calculates allele frequencies and bins low-frequency alleles for k×2 and 2×2 chi-squared tests, and calculates odds ratios, confidence intervals and p-values for each allele. When multi-locus genotype data are available, BIGDAWG estimates user-specified haplotypes and performs the same binning and statistical calculations for each haplotype. For the HLA loci, BIGDAWG performs the same analyses at the individual amino-acid level. Finally, BIGDAWG generates figures and tables for each of these comparisons. BIGDAWG obviates the error-prone reformatting needed to traffic data between multiple programs, and streamlines and standardizes the data-analysis process for case-control studies of highly polymorphic data. BIGDAWG has been implemented as the bigdawg R package and as a free web application at bigdawg.immunogenomics.org. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
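As a simplified illustration of the kind of test BIGDAWG automates, the sketch below checks Hardy-Weinberg equilibrium at a single biallelic locus; the genotype counts are hypothetical, and real HLA/KIR analyses involve many more alleles per locus.

```python
# A generic Hardy-Weinberg equilibrium check for one biallelic locus.
import numpy as np
from scipy.stats import chi2

obs = {"AA": 298, "Aa": 489, "aa": 213}             # observed genotype counts
n = sum(obs.values())
p = (2 * obs["AA"] + obs["Aa"]) / (2 * n)           # allele frequency of A

exp = {"AA": n * p**2, "Aa": 2 * n * p * (1 - p), "aa": n * (1 - p) ** 2}
stat = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
p_value = chi2.sf(stat, df=1)                       # 1 df for a biallelic locus
print(f"chi-squared = {stat:.2f}, p = {p_value:.3f}")
```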
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
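The "standard procedure" mentioned above, extracting network measures from a statistical interrelationship structure, can be illustrated in a few lines. The toy sketch below thresholds a correlation matrix of synthetic grid-point time series and computes node degrees; real climate networks are built from observational or simulated fields.

```python
# A toy climate-network construction: threshold a correlation matrix and
# extract a basic network measure (all data synthetic).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
series = rng.standard_normal((50, 500))            # 50 grid points, 500 steps
corr = np.corrcoef(series)                         # statistical interrelationship

adj = (np.abs(corr) > 0.1) & ~np.eye(50, dtype=bool)  # threshold, drop self-links
g = nx.from_numpy_array(adj.astype(int))

degree = dict(g.degree())                          # a basic network measure
print("mean degree:", np.mean(list(degree.values())))
```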
Fosbury, DeEtta; Walker, Mark; Stillings, Lisa L.
2008-01-01
This report presents the chemical analyses of ground-water samples collected in 2005 from domestic wells located in the Stillwater area of the Carson Desert (fig. 1). These data were evaluated for evidence of mixing with nearby geothermal waters (Fosbury, 2007). That study used several methods to identify mixing zones of ground and geothermal waters, drawing on trace elements, chemical equilibria, water temperature, geothermometer estimates, and statistical techniques. In some regions, geothermal sources influence the chemical quality of ground water used for drinking water supplies. Typical geothermal contaminants include arsenic, mercury, antimony, selenium, thallium, boron, lithium, and fluoride (Webster and Nordstrom, 2003). The Environmental Protection Agency has established primary drinking water standards for these, with the exception of boron and lithium. Concentrations of some trace metals in geothermal water may exceed drinking water standards by several orders of magnitude. Geothermal influences on water quality are likely to be localized, depending on the directions of ground-water flow, the relative volumes of geothermal sources and ground water originating from other sources, and the depth below the surface from which water is withdrawn. It is therefore important to understand the areal extent of shallow mixing of geothermal water, because it may have adverse chemical and aesthetic effects on domestic drinking water.
[Quality assurance program for pain management after obstetrical perineal injury].
Urion, L; Bayoumeu, F; Jandard, C; Fontaine, B; Bouaziz, H
2004-11-01
A quality assurance program was set up to improve the relief of pain in patients with perineal injury after childbirth. The program was developed according to the French accreditation standards. After elaboration of a reference framework, a first study (103 patients) allowed evaluation of current practices and assessment of pain intensities. After analysis of the results, an action strategy was elaborated, with a new therapeutic standard and a pain-monitoring program for nurses. Six months later, a second study (105 patients) measured the efficacy of the actions taken. The statistical analysis used chi-squared and Kruskal-Wallis tests and a multivariate analysis (p < 0.05). Several indicators pointed to the success of the program: analgesics prescribed systematically and earlier, better compliance, wider use of NSAIDs, a decrease in requests for analgesics, and improved satisfaction with pain relief. The multivariate analysis showed that the risk of a VAS score above four at the 36th hour was halved in the second study (p = 0.03). The application of this quality assurance program improved analgesia after obstetric perineal injuries. A few adaptations are needed, as well as further training of the medical and paramedical staff. The durability of the actions taken will need to be evaluated in the future.
Wohlsen, T D
2011-08-01
The equivalence of Oxoid (CM 1046) Brilliance™ E. coli/coliform selective agar to mFC agar, as used in the Australian/New Zealand Standard Method to detect thermotolerant coliforms and Escherichia coli in water samples, was assessed. A total of 244 water samples were analysed in parallel over a 5-month period: sewage effluent samples (n = 131, sites = 43), freshwater samples (n = 62, sites = 18) and marine/brackish water samples (n = 51, sites = 23). The Wilcoxon matched-pairs signed-ranks test showed a varying degree of statistical difference between the two methods. All matrices had a higher recovery with the trial method. Enterococcus faecalis, Aeromonas spp. and Vibrio spp. did not grow on the CM 1046 agar, and Pseudomonas aeruginosa and Enterobacter aerogenes were inhibited. The use of CM 1046 for the detection and enumeration of E. coli and thermotolerant coliforms in water samples is a suitable alternative to the AS/NZS Standard Method. The use of CM 1046 agar was less labour-intensive and time-consuming, as no secondary confirmation steps were required. Confirmed results could be reported within 24 h of sample analysis, compared with 48 h for the reference method. Public health concerns can thus be addressed in a more efficient manner. © 2011 Unitywater. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.
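For illustration, the paired comparison described above can be reproduced on hypothetical colony counts with SciPy's Wilcoxon matched-pairs signed-ranks test:

```python
# Illustrative paired counts (not the study's data) comparing recoveries
# from the two media on the same samples.
import numpy as np
from scipy import stats

cfu_mfc = np.array([120, 45, 310, 15, 88, 230, 19, 64, 140, 52])     # mFC agar
cfu_cm1046 = np.array([134, 50, 335, 18, 92, 251, 22, 70, 151, 60])  # CM 1046

stat, p_value = stats.wilcoxon(cfu_cm1046, cfu_mfc)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```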
White, J M L; McFadden, J P; White, I R
2008-03-01
Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. We performed a retrospective analysis of the results of 241 subjects who were patch tested twice at a single centre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended Baseline Series of contact allergens (European) was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significantly increased rate of positivity on the second reading compared with the first (P = 0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances over a wide interval between the two tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. Similar published analysis from another large group using standard methodology supports our data.
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σR, where σR is the reference product variability estimated by the sample standard deviation SR from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄R ± K × σR, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of the test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation SR underestimates the true reference product variability σR. As a result, substituting SR for σR in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on the application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with the demonstration of analytical similarity and provides alternative approaches to potentially mitigate these problems. © PDA, Inc. 2016.
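A hand-rolled sketch of the Tier 1 equivalence test (a TOST with margin 1.5σR, using SR as the plug-in the paper cautions about) might look as follows; the lot data are simulated, and the pooled-variance t statistic is only one of several reasonable choices.

```python
# TOST sketch for equivalence of lot means with margin 1.5 * S_R.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ref = rng.normal(100.0, 4.0, size=10)      # reference product lots
test = rng.normal(101.0, 4.0, size=8)      # proposed biosimilar lots

margin = 1.5 * ref.std(ddof=1)             # equivalence acceptance criterion
diff = test.mean() - ref.mean()
se = np.sqrt(test.var(ddof=1) / len(test) + ref.var(ddof=1) / len(ref))
df = len(test) + len(ref) - 2              # simple df; Welch df also common

t_lower = (diff + margin) / se             # H0: diff <= -margin
t_upper = (diff - margin) / se             # H0: diff >= +margin
p = max(stats.t.sf(t_lower, df), stats.t.cdf(t_upper, df))
print(f"TOST p = {p:.4f} -> {'equivalent' if p < 0.05 else 'not shown'}")
```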
Muysoms, F E; Deerenberg, E B; Peeters, E; Agresta, F; Berrevoet, F; Campanelli, G; Ceelen, W; Champault, G G; Corcione, F; Cuccurullo, D; DeBeaux, A C; Dietz, U A; Fitzgibbons, R J; Gillion, J F; Hilgers, R-D; Jeekel, J; Kyle-Leinhase, I; Köckerling, F; Mandala, V; Montgomery, A; Morales-Conde, S; Simmermacher, R K J; Schumpelick, V; Smietański, M; Walgenbach, M; Miserez, M
2013-08-01
The literature dealing with abdominal wall surgery is often flawed due to a lack of adherence to accepted reporting standards and statistical methodology. The EuraHS Working Group (European Registry of Abdominal Wall Hernias) organised a consensus meeting of surgical experts and researchers with an interest in abdominal wall surgery, including a statistician, the editors of the journal Hernia and scientists experienced in meta-analysis. Detailed discussions took place to identify the basic ground rules necessary to improve the quality of research reports related to abdominal wall reconstruction. A list of recommendations was formulated, including more general issues of scientific methodology and statistical approach. Standards and statements are available, each depending on the type of study being reported: the CONSORT statement for randomised controlled trials, the TREND statement for non-randomised interventional studies, the STROBE statement for observational studies, the STARLITE statement for literature searches, the MOOSE statement for meta-analyses of observational studies and the PRISMA statement for systematic reviews and meta-analyses. A number of recommendations were made, including the use of previously published standard definitions and classifications relating to hernia variables and treatment; the use of the validated Clavien-Dindo classification to report complications in hernia surgery; and the use of "time-to-event analysis" to report data on "freedom of recurrence" rather than recurrence rates, because it is more sensitive and, compared with other reporting methods, accounts for patients lost to follow-up. A set of recommendations for reporting the outcome results of abdominal wall surgery was formulated as guidance for researchers. It is anticipated that the use of these recommendations will increase the quality and meaning of abdominal wall surgery research.
The influence of anthropometrics on physical employment standard performance.
Reilly, T; Spivock, M; Prayal-Brown, A; Stockbrugger, B; Blacklock, R
2016-10-01
The Canadian Armed Forces (CAF) recently implemented the Fitness for Operational Requirements of CAF Employment (FORCE), a new physical employment standard (PES). Data collection throughout development included anthropometric profiles of the CAF. To determine whether anthropometric measurements and demographic information would predict performance outcomes on the FORCE and/or the Common Military Task Fitness Evaluation (CMTFE), we conducted a secondary analysis of data from FORCE research. We obtained bioelectrical impedance and segmental analysis. Statistical analysis included correlation and linear regression analyses. Among the 668 study subjects, as predicted, performance on any task requiring lifting, pulling or moving an object was significantly and positively correlated (r > 0.67) with lean body mass (LBM) measurements. LBM correlated with the stretcher carry (r = 0.78) and with lifting actions such as the sand bag drag (r = 0.77), vehicle extrication (r = 0.71), sand bag fortification (r = 0.68) and sand bag lift time (r = -0.67). The difference between the correlation of dead mass (DM) with task performance and that of LBM was not statistically significant. DM and LBM can be used in a PES to predict success on military tasks such as casualty evacuation and manual material handling. However, there is no minimum LBM required to perform these tasks successfully. These data direct future research on how research participants should be diversified by anthropometrics, in addition to the traditional demographic variables of gender and age, to highlight potentially important adverse impacts in PES design. In addition, the results can be used to develop better training regimens to facilitate passing a PES. © All rights reserved. 'The Influence of Anthropometrics on Physical Employment Standard Performance' has been reproduced with the permission of DND, 2016.
Gao, Hongying; Deng, Shibing; Obach, R Scott
2015-12-01
An unbiased scanning methodology using ultra-high-performance liquid chromatography coupled with high-resolution mass spectrometry was used to bank data and plasma samples so that data generated on different dates could be compared. This method was applied to bank data generated earlier from animal samples and then to compare the exposure to metabolites in animals versus humans for safety assessment. With neither authentic standards nor prior knowledge of the identities and structures of the metabolites, full scans for precursor ions and all-ion fragments (AIF) were employed with a generic gradient LC method to analyze plasma samples at positive and negative polarity, respectively. Of 22 tested drugs and metabolites, 21 analytes were detected using this unbiased scanning method; naproxen was not detected owing to low sensitivity at negative polarity and interference at positive polarity, and 4'- and 5-hydroxy diclofenac were not separated by the generic UPLC method. Statistical analysis of the peak area ratios of the analytes versus the internal standard in five repetitive analyses over approximately 1 year demonstrated that analytical variation was significantly different from sample instability. The confidence limits for comparing exposure, using peak area ratios of metabolites in animal plasma versus human plasma measured approximately 1 year apart, were comparable to those from analyses undertaken side by side on the same days. These statistical results showed that it is feasible to compare data generated on different dates with neither authentic standards nor prior knowledge of the analytes.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
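A condensed sketch of the bootstrap variance estimator favoured by these simulations is given below. It estimates propensity scores with logistic regression, forms ATE-type IPTW weights, fits a weighted Cox model, and bootstraps the whole pipeline; the column names and cohort are hypothetical, and it assumes lifelines' CoxPHFitter accepts a weights_col argument (as in recent versions).

```python
# IPTW-weighted Cox model with a bootstrap standard error (illustrative only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def iptw_cox_loghr(df):
    """Fit propensity model, weight, and return the treated log-hazard ratio."""
    ps = LogisticRegression().fit(df[["x1", "x2"]], df["treated"]).predict_proba(
        df[["x1", "x2"]])[:, 1]
    df = df.assign(w=np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps)))  # ATE weights
    cph = CoxPHFitter().fit(df[["time", "event", "treated", "w"]],
                            duration_col="time", event_col="event",
                            weights_col="w", robust=True)
    return cph.params_["treated"]

def bootstrap_se(df, b=200, seed=0):
    """Resample subjects with replacement and re-run the whole pipeline."""
    rng = np.random.default_rng(seed)
    est = [iptw_cox_loghr(df.sample(len(df), replace=True,
                                    random_state=int(rng.integers(1 << 31)))
                          .reset_index(drop=True))
           for _ in range(b)]
    return np.std(est, ddof=1)             # bootstrap standard error

# usage: se = bootstrap_se(my_cohort_df)   # my_cohort_df is hypothetical
```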
Stamate, Mirela Cristina; Todor, Nicolae; Cosgarea, Marcel
2015-01-01
Background and aim The clinical utility of otoacoustic emissions as a noninvasive objective test of cochlear function has been long studied. Both transient otoacoustic emissions and distortion products can be used to identify hearing loss, but to what extent they can be used as predictors of hearing loss is still debated. Most studies agree that multivariate analyses have better test performance than univariate analyses. The aim of the study was to determine the performance of transient otoacoustic emissions and distortion products in identifying normal and impaired hearing, using the pure-tone audiogram as the gold standard procedure and different multivariate statistical approaches. Methods The study included 105 adult subjects with normal hearing and hearing loss who underwent the same test battery: pure-tone audiometry, tympanometry and otoacoustic emission tests. We chose to use logistic regression as the multivariate statistical technique. Three logistic regression models were developed to characterize the relations between different risk factors (age, sex, tinnitus, demographic features, cochlear status defined by otoacoustic emissions) and hearing status defined by pure-tone audiometry. The multivariate analyses allow the calculation of the logistic score, which is a combination of the inputs, weighted by coefficients calculated within the analyses. The accuracy of each model was assessed using receiver operating characteristic curve analysis. We used the logistic score to generate receiver operating characteristic curves and to estimate the areas under the curves in order to compare the different multivariate analyses. Results We compared the performance of each otoacoustic emission test (transient, distortion product) using three different multivariate analyses for each ear, when multi-frequency gold standards were used. We demonstrated that all multivariate analyses provided high values of the area under the curve, proving the performance of the otoacoustic emissions. Each otoacoustic emission test presented high values of the area under the curve, suggesting that implementing a multivariate approach to evaluate the performance of each otoacoustic emission test would increase the accuracy in identifying normal and impaired ears. We encountered the highest area under the curve value for the combined multivariate analysis, suggesting that both otoacoustic emission tests should be used in assessing hearing status. Our multivariate analyses revealed that age is a constant predictor of auditory status for both ears, but the presence of tinnitus was the most important predictor of hearing level for the left ear only. Age presented similar coefficients, but tinnitus coefficients, by their high value, produced the highest variations of the logistic scores, only for the left ear group, thus increasing the risk of hearing loss. We did not find gender differences between ears for any otoacoustic emission test, but studies still debate this question, as the results are contradictory. Neither gender nor environmental origin had any predictive value for hearing status, according to the results of our study. Conclusion Like any other audiological test, using otoacoustic emissions to identify hearing loss is not without error. Even when applying multivariate analysis, perfect test performance is never achieved.
Although most studies demonstrated the benefit of using multivariate analysis, it has not been incorporated into clinical decisions, perhaps because of the idiosyncratic nature of multivariate solutions or because of the lack of validation studies. PMID:26733749
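As a generic illustration of the multivariate approach described above (not the study's code), a logistic score can be formed and assessed via the area under the ROC curve as follows; all variables are simulated stand-ins for the study's predictors.

```python
# Logistic score plus ROC AUC on synthetic data resembling the study's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 500
X = np.column_stack([rng.normal(50, 15, n),        # age
                     rng.binomial(1, 0.3, n),      # tinnitus present
                     rng.normal(0, 1, n)])         # an OAE-based measure
y = rng.binomial(1, 1 / (1 + np.exp(-(0.04 * (X[:, 0] - 50) + 1.2 * X[:, 1]
                                      - 0.8 * X[:, 2] - 0.5))))  # hearing loss

model = LogisticRegression().fit(X, y)
score = model.decision_function(X)                 # the logistic score
print(f"AUC = {roc_auc_score(y, score):.3f}")
```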