Sample records for measured statistical analyses

  1. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi-simultaneously applied [1]. The three used...2.2. Vibration Diagnostics (VD) Parallel to the NDT measurements, Statistical Energy Analysis (SEA) was applied as a vibration-diagnostic tool...noises were analysed with a dual-channel real-time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  2. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  3. Perceived Effectiveness among College Students of Selected Statistical Measures in Motivating Exercise Behavior

    ERIC Educational Resources Information Center

    Merrill, Ray M.; Chatterley, Amanda; Shields, Eric C.

    2005-01-01

    This study explored the effectiveness of selected statistical measures at motivating or maintaining regular exercise among college students. The study also considered whether ease in understanding these statistical measures was associated with perceived effectiveness at motivating or maintaining regular exercise. Analyses were based on a…

  4. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, to control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data, in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
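
    A minimal sketch of the mixed-effects approach the authors advocate for repeated parasitaemia measures, on simulated data (the host/day/parasitaemia names and values are hypothetical, not from the paper):

    ```python
    # Hypothetical repeated-measures data: 20 hosts sampled over 10 days.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_hosts, n_days = 20, 10
    df = pd.DataFrame({
        "host": np.repeat(np.arange(n_hosts), n_days),
        "day": np.tile(np.arange(n_days), n_hosts),
    })
    host_effect = rng.normal(0, 1.0, n_hosts)        # host-to-host variation
    df["parasitaemia"] = (2.0 + 0.3 * df["day"]
                          + host_effect[df["host"]]
                          + rng.normal(0, 0.5, len(df)))

    # Naive OLS treats all 200 rows as independent, so its residuals are
    # correlated within hosts -- exactly the violation the paper warns about.
    ols = smf.ols("parasitaemia ~ day", data=df).fit()

    # A random intercept per host absorbs that within-host correlation.
    mm = smf.mixedlm("parasitaemia ~ day", data=df, groups=df["host"]).fit()
    print(ols.bse["day"], mm.bse["day"])  # standard errors differ once the
                                          # grouping structure is modelled
    ```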

  5. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  6. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  7. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
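
    The bias-corrected d of Shadish, Hedges and Pustejovsky involves more machinery than fits here, but a naive sketch conveys the core idea the abstract describes: pool within-case variability so the effect lands on the scale of a between-groups d (all data below are invented):

    ```python
    # Naive single-case standardised mean difference -- NOT the authors'
    # bias-corrected d, only an illustration of the pooling idea.
    import numpy as np

    def naive_single_case_d(cases):
        """cases: list of (baseline, treatment) arrays, one pair per case."""
        diffs = [t.mean() - b.mean() for b, t in cases]
        # pooled within-case standard deviation
        ss = sum(((b - b.mean()) ** 2).sum() + ((t - t.mean()) ** 2).sum()
                 for b, t in cases)
        dof = sum(len(b) + len(t) - 2 for b, t in cases)
        return np.mean(diffs) / np.sqrt(ss / dof)

    cases = [(np.array([3.0, 4, 3, 5]), np.array([6.0, 7, 6, 8])),
             (np.array([2.0, 3, 2, 2]), np.array([5.0, 4, 6, 5]))]
    print(naive_single_case_d(cases))   # effect averaged over cases
    ```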

  8. Statistical approaches to assessing single and multiple outcome measures in dry eye therapy and diagnosis.

    PubMed

    Tomlinson, Alan; Hair, Mario; McFadyen, Angus

    2013-10-01

    Dry eye is a multifactorial disease which would require a broad spectrum of test measures in the monitoring of its treatment and diagnosis. However, studies have typically reported improvements in individual measures with treatment. Alternative approaches involve multiple, combined outcomes being assessed by different statistical analyses. In order to assess the effect of various statistical approaches to the use of single and combined test measures in dry eye, this review reanalyzed measures from two previous studies (osmolarity, evaporation, tear turnover rate, and lipid film quality). These analyses assessed the measures as single variables within groups, pre- and post-intervention with a lubricant supplement, by creating combinations of these variables and by validating these combinations with the combined sample of data from all groups of dry eye subjects. The effectiveness of single measures and combinations in diagnosis of dry eye was also considered. Copyright © 2013. Published by Elsevier Inc.

  9. Measuring the Impacts of ICT Using Official Statistics. OECD Digital Economy Papers, No. 136

    ERIC Educational Resources Information Center

    Roberts, Sheridan

    2008-01-01

    This paper describes the findings of an OECD project examining ICT impact measurement and analyses based on official statistics. Both economic and social impacts are covered and some results are presented. It attempts to place ICT impacts measurement into an Information Society conceptual framework, provides some suggestions for standardising…

  10. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  11. The SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Gelman, Mel; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2003-01-01

    Our current confidence in 'observed' climatological winds and temperatures in the middle atmosphere (over altitudes approx. 10-80 km) is assessed by detailed intercomparisons of contemporary and historic data sets. These data sets include global meteorological analyses and assimilations, climatologies derived from research satellite measurements, and historical reference atmosphere circulation statistics. We also include comparisons with historical rocketsonde wind and temperature data, and with more recent lidar temperature measurements. The comparisons focus on a few basic circulation statistics, such as temperature, zonal wind, and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. Assimilated data sets provide the most realistic tropical variability, but substantial differences exist among current schemes.

  12. Performance of Between-Study Heterogeneity Measures in the Cochrane Library.

    PubMed

    Ma, Xiaoyue; Lin, Lifeng; Qu, Zhiyong; Zhu, Motao; Chu, Haitao

    2018-05-29

    The growth in comparative effectiveness research and evidence-based medicine has increased attention to systematic reviews and meta-analyses. Meta-analysis synthesizes and contrasts evidence from multiple independent studies to improve statistical efficiency and reduce bias. Assessing heterogeneity is critical for performing a meta-analysis and interpreting results. As a widely used heterogeneity measure, the I² statistic quantifies the proportion of total variation across studies that is due to real differences in effect size. The presence of outlying studies can seriously exaggerate the I² statistic. Two alternative heterogeneity measures, I²r and I²m, have recently been proposed to reduce the impact of outlying studies. To evaluate these measures' performance empirically, we applied them to 20,599 meta-analyses in the Cochrane Library. We found that I²r and I²m have strong agreement with I², while they are more robust than I² when outlying studies appear.
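
    A short sketch of the I² statistic the paper examines, computed from Cochran's Q; the study effects below are invented, with one outlier added to echo the paper's point about exaggeration:

    ```python
    import numpy as np

    def i_squared(effects, variances):
        """I^2 (%) from per-study effect estimates and sampling variances."""
        effects = np.asarray(effects)
        w = 1.0 / np.asarray(variances)            # inverse-variance weights
        theta = np.sum(w * effects) / np.sum(w)    # fixed-effect mean
        q = np.sum(w * (effects - theta) ** 2)     # Cochran's Q
        k = len(effects)
        return 0.0 if q == 0 else max(0.0, (q - (k - 1)) / q) * 100

    print(i_squared([0.30, 0.25, 0.40, 0.35], [0.02, 0.03, 0.02, 0.02]))  # ~0
    print(i_squared([0.30, 0.25, 0.40, 1.50], [0.02, 0.03, 0.02, 0.02]))  # one
    # outlying study drives I^2 above 90%, which is what the robust
    # alternatives are designed to resist.
    ```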

  13. The disagreeable behaviour of the kappa statistic.

    PubMed

    Flight, Laura; Julious, Steven A

    2015-01-01

    It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed. Copyright © 2014 John Wiley & Sons, Ltd.
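
    A small sketch of the behaviour described: two rater pairs with identical raw agreement can yield very different kappas once the marginal totals are skewed (the 2x2 tables are invented):

    ```python
    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa from a square inter-rater agreement table."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        p_o = np.trace(table) / n                                   # observed
        p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2  # chance
        return (p_o - p_e) / (1 - p_e)

    balanced = [[45, 5], [5, 45]]   # 90% raw agreement, balanced marginals
    skewed   = [[85, 5], [5,  5]]   # 90% raw agreement, skewed marginals
    print(cohens_kappa(balanced))   # 0.80
    print(cohens_kappa(skewed))     # ~0.44 -- same agreement, very different kappa
    ```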

  14. Validating Future Force Performance Measures (Army Class): Concluding Analyses

    DTIC Science & Technology

    2016-06-01

    Table 3.10. Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores...Table 4.7. Descriptive Statistics for Analysis Criteria...Soldier attrition and performance: Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness

  15. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. © 2009 John Wiley & Sons, Ltd.
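
    A simulation sketch of Fisher's pitfall as described above: when the y-x relation has a non-zero intercept, the ratio y/x differs between groups that differ only on the denominator, while ANCOVA on y adjusting for x finds nothing (all values invented, not the authors' data):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 200
    group = np.repeat([0, 1], n // 2)
    x = rng.normal(10 + 2 * group, 2, n)   # groups differ on the denominator
    y = 5 + 0.8 * x + rng.normal(0, 1, n)  # same y-x relation in both groups
    df = pd.DataFrame({"g": group, "x": x, "y": y, "ratio": y / x})

    # The non-zero intercept (5) makes y/x depend on x, so the ratio
    # "finds" a group difference that the covariate-adjusted model does not.
    print(smf.ols("ratio ~ g", df).fit().pvalues["g"])   # typically tiny
    print(smf.ols("y ~ x + g", df).fit().pvalues["g"])   # typically large
    ```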

  16. Dissecting the genetics of complex traits using summary association statistics.

    PubMed

    Pasaniuc, Bogdan; Price, Alkes L

    2017-02-01

    During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.

  17. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943

  18. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
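
    A minimal simulation of the direct-truncation scenario described in the two records above: selecting on the pretest attenuates the pretest-posttest correlation, shrinking the variance the covariate can explain (numbers invented; not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, rho = 100_000, 0.7
    pre = rng.normal(size=n)
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=n)

    # Direct range restriction: select the bottom quartile on the pretest.
    sel = pre < np.quantile(pre, 0.25)
    print(np.corrcoef(pre, post)[0, 1])            # ~0.70 unrestricted
    print(np.corrcoef(pre[sel], post[sel])[0, 1])  # attenuated after selection

    # The covariate's explained variance falls with r**2, so residual
    # variance -- and the sample size needed for equal power -- grows.
    ```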

  19. Effects of Exercise in the Treatment of Overweight and Obese Children and Adolescents: A Systematic Review of Meta-Analyses

    PubMed Central

    Kelley, George A.; Kelley, Kristi S.

    2013-01-01

    Purpose. Conduct a systematic review of previous meta-analyses addressing the effects of exercise in the treatment of overweight and obese children and adolescents. Methods. Previous meta-analyses of randomized controlled exercise trials that assessed adiposity in overweight and obese children and adolescents were included by searching nine electronic databases and cross-referencing from retrieved studies. Methodological quality was assessed using the Assessment of Multiple Systematic Reviews (AMSTAR) Instrument. The alpha level for statistical significance was set at P ≤ 0.05. Results. Of the 308 studies reviewed, two aggregate data meta-analyses, representing 14 and 17 studies and 481 and 701 boys and girls, met all eligibility criteria. Methodological quality was 64% and 73%. For both studies, statistically significant reductions in percent body fat were observed (P = 0.006 and P < 0.00001). The number-needed-to-treat (NNT) was 4 and 3, with an estimated 24.5 and 31.5 million overweight and obese children in the world potentially benefitting (2.8 and 3.6 million in the US). No other measures of adiposity (BMI-related measures, body weight, and central obesity) were statistically significant. Conclusions. Exercise is efficacious for reducing percent body fat in overweight and obese children and adolescents. Insufficient evidence exists to suggest that exercise reduces other measures of adiposity. PMID:24455215

  20. Cross-population validation of statistical distance as a measure of physiological dysregulation during aging.

    PubMed

    Cohen, Alan A; Milot, Emmanuel; Li, Qing; Legault, Véronique; Fried, Linda P; Ferrucci, Luigi

    2014-09-01

    Measuring physiological dysregulation during aging could be a key tool both to understand underlying aging mechanisms and to predict clinical outcomes in patients. However, most existing indices are either circular or hard to interpret biologically. Recently, we showed that statistical distance of 14 common blood biomarkers (a measure of how strange an individual's biomarker profile is) was associated with age and mortality in the WHAS II data set, validating its use as a measure of physiological dysregulation. Here, we extend the analyses to other data sets (WHAS I and InCHIANTI) to assess the stability of the measure across populations. We found that the statistical criteria used to determine the original 14 biomarkers produced diverging results across populations; in other words, had we started with a different data set, we would have chosen a different set of markers. Nonetheless, the same 14 markers (or the subset of 12 available for InCHIANTI) produced highly similar predictions of age and mortality. We include analyses of all combinatorial subsets of the markers and show that results do not depend much on biomarker choice or data set, but that more markers produce a stronger signal. We conclude that statistical distance as a measure of physiological dysregulation is stable across populations in Europe and North America. Copyright © 2014 Elsevier Inc. All rights reserved.
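
    A sketch of statistical distance in the sense used here, assuming it is computed as the Mahalanobis distance of an individual's biomarker profile from a reference population (data simulated; only the 14-marker count follows the abstract):

    ```python
    import numpy as np

    def mahalanobis(profile, reference):
        """profile: 1-D biomarker vector; reference: n x p population matrix."""
        mu = reference.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
        d = profile - mu
        return float(np.sqrt(d @ cov_inv @ d))

    rng = np.random.default_rng(0)
    population = rng.normal(size=(500, 14))  # 14 biomarkers, as in the paper
    typical = population.mean(axis=0)
    strange = typical + 3.0                  # every marker 3 SD off-centre
    print(mahalanobis(typical, population))  # ~0: unremarkable profile
    print(mahalanobis(strange, population))  # large: dysregulated profile
    ```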

  1. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular finding per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve over the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
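
    One standard way to use both eyes without ignoring the intereye correlation — not necessarily the authors' recommendation — is a GEE that clusters the two eyes within each patient. A sketch with simulated IOP data (all names and values hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_pat = 150
    patient = np.repeat(np.arange(n_pat), 2)    # two eyes per patient
    pat_eff = rng.normal(0, 1, n_pat)[patient]  # shared within-patient effect
    treated = rng.integers(0, 2, n_pat)[patient]
    iop = 16 - 2.0 * treated + pat_eff + rng.normal(0, 1, 2 * n_pat)
    df = pd.DataFrame({"patient": patient, "treated": treated, "iop": iop})

    # Exchangeable working correlation: the two eyes of a patient are
    # treated as correlated rather than as independent observations.
    gee = smf.gee("iop ~ treated", groups="patient", data=df,
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.summary())
    ```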

  2. Studies in interactive communication. I - The effects of four communication modes on the behavior of teams during cooperative problem-solving.

    NASA Technical Reports Server (NTRS)

    Chapanis, A.; Ochsman, R. B.; Parrish, R. N.; Weeks, G. D.

    1972-01-01

    Two-man teams solved credible, 'real-world' problems for which computer assistance has been or could be useful. Conversations were carried on in one of four modes of communication: (1) typewriting, (2) handwriting, (3) voice, and (4) natural, unrestricted communication. Two groups of subjects (experienced and inexperienced typists) were tested in the typewriting mode. Performance was assessed on three classes of dependent measures: time to solution, behavioral measures of activity, and linguistic measures. Significant and meaningful differences among the communication modes were found in each of the three classes of dependent variable. This paper is concerned mainly with the results of the activity analyses. Behavior was recorded in 15 different categories. The analyses of variance yielded 34 statistically significant terms of which 27 were judged to be practically significant as well. When the data were transformed to eliminate heterogeneity, the analyses of variance yielded 35 statistically significant terms of which 26 were judged to be practically significant.

  3. Use of the Global Test Statistic as a Performance Measurement in a Reanalysis of Environmental Health Data

    PubMed Central

    Dymova, Natalya; Hanumara, R. Choudary; Gagnon, Ronald N.

    2009-01-01

    Performance measurement is increasingly viewed as an essential component of environmental and public health protection programs. In characterizing program performance over time, investigators often observe multiple changes resulting from a single intervention across a range of categories. Although a variety of statistical tools allow evaluation of data one variable at a time, the global test statistic is uniquely suited for analyses of categories or groups of interrelated variables. Here we demonstrate how the global test statistic can be applied to environmental and occupational health data for the purpose of making overall statements on the success of targeted intervention strategies. PMID:19696393

  4. Use of the global test statistic as a performance measurement in a reanalysis of environmental health data.

    PubMed

    Dymova, Natalya; Hanumara, R Choudary; Enander, Richard T; Gagnon, Ronald N

    2009-10-01

    Performance measurement is increasingly viewed as an essential component of environmental and public health protection programs. In characterizing program performance over time, investigators often observe multiple changes resulting from a single intervention across a range of categories. Although a variety of statistical tools allow evaluation of data one variable at a time, the global test statistic is uniquely suited for analyses of categories or groups of interrelated variables. Here we demonstrate how the global test statistic can be applied to environmental and occupational health data for the purpose of making overall statements on the success of targeted intervention strategies.

  5. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  6. Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.

    2018-04-01

    Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) of the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z ∈ [2.20, 2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.

  7. Learning from Friends: Measuring Influence in a Dyadic Computer Instructional Setting

    ERIC Educational Resources Information Center

    DeLay, Dawn; Hartl, Amy C.; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2014-01-01

    Data collected from partners in a dyadic instructional setting are, by definition, not statistically independent. As a consequence, conventional parametric statistical analyses of change and influence carry considerable risk of bias. In this article, we illustrate a strategy to overcome this obstacle: the longitudinal actor-partner interdependence…

  8. The Surprisingly Modest Relationship between SES and Educational Achievement

    ERIC Educational Resources Information Center

    Harwell, Michael; Maeda, Yukiko; Bishop, Kyoungwon; Xie, Aolin

    2017-01-01

    Measures of socioeconomic status (SES) are routinely used in analyses of achievement data to increase statistical power, statistically control for the effects of SES, and enhance causality arguments under the premise that the SES-achievement relationship is moderate to strong. Empirical evidence characterizing the strength of the SES-achievement…

  9. Using Artificial Neural Networks in Educational Research: Some Comparisons with Linear Statistical Models.

    ERIC Educational Resources Information Center

    Everson, Howard T.; And Others

    This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANNs and AIMS methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…

  10. Statistical Treatment of Looking-Time Data

    ERIC Educational Resources Information Center

    Csibra, Gergely; Hernik, Mikolaj; Mascaro, Olivier; Tatone, Denis; Lengyel, Máté

    2016-01-01

    Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is…

  11. Statistical process control: A feasibility study of the application of time-series measurement in early neurorehabilitation after acquired brain injury.

    PubMed

    Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias

    2017-01-31

    Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, and thus the evaluation of rehabilitation is complex. The use of time-series measurements is susceptible to statistical change due to process variation. To evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
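
    A sketch of the statistical-process-control idea on a time series of test scores: individuals (XmR) control limits computed from a baseline run flag when later performance exceeds common-cause variation. The baseline length, scores and signal rule are illustrative assumptions, not the study's protocol:

    ```python
    import numpy as np

    def xmr_limits(baseline):
        """Individuals-chart limits from the average moving range."""
        baseline = np.asarray(baseline, dtype=float)
        centre = baseline.mean()
        mr_bar = np.abs(np.diff(baseline)).mean()
        sigma = mr_bar / 1.128          # d2 constant for moving ranges of 2
        return centre - 3 * sigma, centre, centre + 3 * sigma

    scores = [31, 33, 30, 34, 32, 35, 38, 41, 44, 47]  # hypothetical PASAT run
    lo, mid, hi = xmr_limits(scores[:5])   # first 5 points form the baseline
    print([(s, s > hi) for s in scores[5:]])  # True = signal: real change,
                                              # a "rapid performer" pattern
    ```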

  12. Predicting Subsequent Myopia in Initially Pilot-Qualified USAFA Cadets.

    DTIC Science & Technology

    1985-12-27

    Refraction Measurement...4.0 RESULTS...4.1 Descriptive Statistics...4.2 Predictive Statistics...mentioned), and three were missing a status. The data of the subject who was commissionable were dropped from the statistical analyses. Of the 91...relatively equal numbers of participants from all classes will become obvious within the results. 4.1 Descriptive Statistics In the original plan

  13. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
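
    A sketch of the type I error adjustment the review finds missing, applying a Holm correction across hypothetical sub-dimension-by-time-point p-values (the values are invented):

    ```python
    from statsmodels.stats.multitest import multipletests

    # Hypothetical raw p-values: 5 HRQoL sub-dimensions x 3 assessment times.
    raw_p = [0.004, 0.012, 0.030, 0.041, 0.049,
             0.060, 0.110, 0.150, 0.220, 0.300,
             0.350, 0.480, 0.610, 0.770, 0.940]
    reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
    # Several "significant" raw p-values no longer survive adjustment,
    # illustrating the false-positive risk the review describes.
    print(list(zip(raw_p, p_adj.round(3), reject)))
    ```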

  14. Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test

    NASA Technical Reports Server (NTRS)

    Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph

    1998-01-01

    The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.

  15. Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test

    NASA Technical Reports Server (NTRS)

    Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph

    1999-01-01

    The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.

  16. Statistical Evaluation of Molecular Contamination During Spacecraft Thermal Vacuum Test

    NASA Technical Reports Server (NTRS)

    Chen, Philip; Hedgeland, Randy; Montoya, Alex; Roman-Velazquez, Juan; Dunn, Jamie; Colony, Joe; Petitto, Joseph

    1997-01-01

    The purpose of this paper is to evaluate the statistical molecular contamination data with a goal to improve spacecraft contamination control. The statistical data was generated in typical thermal vacuum tests at the National Aeronautics and Space Administration, Goddard Space Flight Center (GSFC). The magnitude of material outgassing was measured using a Quartz Crystal Microbalance (QCM) device during the test. A solvent rinse sample was taken at the conclusion of each test. Then detailed qualitative and quantitative measurements were obtained through chemical analyses. All data used in this study encompassed numerous spacecraft tests in recent years.

  17. Statistical Analysis of a Round-Robin Measurement Survey of Two Candidate Materials for a Seebeck Coefficient Standard Reference Material

    PubMed Central

    Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.

    2009-01-01

    In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212

  18. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  19. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3,946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
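
    A sketch of the paper's contrast on simulated creatinine-clearance data: a cross-sectional fit uses one record per subject, while a random coefficient (mixed) model uses all repeated measures (variable names and values hypothetical, not the study's data):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    n_subj, n_obs = 300, 4
    subj = np.repeat(np.arange(n_subj), n_obs)
    age0 = rng.uniform(30, 80, n_subj)
    age = age0[subj] + np.tile(np.arange(n_obs), n_subj) * 2  # biennial visits
    crcl = (120 - 0.9 * age + rng.normal(0, 10, n_subj)[subj]
            + rng.normal(0, 5, len(subj)))
    df = pd.DataFrame({"subj": subj, "age": age, "crcl": crcl})

    # CS: one observation per subject, simple linear regression.
    cs = df.groupby("subj").first()
    print(smf.ols("crcl ~ age", cs).fit().params["age"])

    # LT: random coefficient model (random intercept + slope per subject),
    # which also yields the individual estimates the paper recommends.
    lt = smf.mixedlm("crcl ~ age", df, groups=df["subj"],
                     re_formula="~age").fit()
    print(lt.params["age"])
    ```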

  1. Measuring Effectiveness in a Virtual Library

    ERIC Educational Resources Information Center

    Finch, Jannette L.

    2010-01-01

    Measuring quality of service in academic libraries traditionally includes quantifiable data such as collection size, staff counts, circulation numbers, reference service statistics, qualitative analyses of customer satisfaction, shelving accuracy, and building comfort. In the libraries of the third millennium, virtual worlds, Web content and…

  2. Synthetic Indicators of Quality of Life in Europe

    ERIC Educational Resources Information Center

    Somarriba, Noelia; Pena, Bernardo

    2009-01-01

    For more than three decades now, sociologists, politicians and economists have used a wide range of statistical and econometric techniques to analyse and measure the quality of life of individuals with the aim of obtaining useful instruments for social, political and economic decision making. The aim of this paper is to analyse the advantages and…

  3. Statistical equivalence and test-retest reliability of delay and probability discounting using real and hypothetical rewards.

    PubMed

    Matusiewicz, Alexis K; Carter, Anne E; Landes, Reid D; Yi, Richard

    2013-11-01

    Delay discounting (DD) and probability discounting (PD) refer to the reduction in the subjective value of outcomes as a function of delay and uncertainty, respectively. Elevated measures of discounting are associated with a variety of maladaptive behaviors, and confidence in the validity of these measures is imperative. The present research examined (1) the statistical equivalence of discounting measures when rewards were hypothetical or real, and (2) their 1-week reliability. While previous research has partially explored these issues using the low threshold of nonsignificant difference, the present study fully addressed this issue using the more-compelling threshold of statistical equivalence. DD and PD measures were collected from 28 healthy adults using real and hypothetical $50 rewards during each of two experimental sessions, one week apart. Analyses using area-under-the-curve measures revealed a general pattern of statistical equivalence, indicating equivalence of real/hypothetical conditions as well as 1-week reliability. Exceptions are identified and discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
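
    A sketch of equivalence testing in the spirit described — two one-sided tests (TOST) on paired area-under-the-curve discounting measures. The ±0.1 equivalence margin and the data are illustrative assumptions, not the authors' values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    auc_real = np.clip(rng.normal(0.45, 0.15, 28), 0, 1)   # real $50 rewards
    auc_hypo = np.clip(auc_real + rng.normal(0, 0.05, 28), 0, 1)  # hypothetical

    diff = auc_real - auc_hypo
    low, high = -0.1, 0.1   # assumed equivalence margin on the AUC scale
    p_lower = stats.ttest_1samp(diff, low, alternative="greater").pvalue
    p_upper = stats.ttest_1samp(diff, high, alternative="less").pvalue
    # TOST: both one-sided tests must reject, so report the larger p-value.
    print(max(p_lower, p_upper))   # < 0.05 -> statistically equivalent
    ```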

  4. Psychometric properties of the Danish student well-being questionnaire assessed in >250,000 student responders.

    PubMed

    Niclasen, Janni; Keilow, Maria; Obel, Carsten

    2018-05-01

    Well-being is considered a prerequisite for learning. The Danish Ministry of Education initiated the development of a new 40-item student well-being questionnaire in 2014 to monitor well-being among all Danish public school students on a yearly basis. The aim of this study was to investigate the basic psychometric properties of this questionnaire. We used the data from the 2015 Danish student well-being survey for 268,357 students in grades 4-9 (about 85% of the study population). Descriptive statistics, exploratory factor analyses, confirmatory factor analyses and Cronbach's α reliability measures were used in the analyses. The factor analyses did not unambiguously support one particular factor structure. However, based on the basic descriptive statistics, exploratory factor analyses, confirmatory factor analyses, the semantics of the individual items and Cronbach's α, we propose a four-factor structure including 27 of the 40 items originally proposed. The four scales measure school connectedness, learning self-efficacy, learning environment and classroom management. Two bullying items and two psychosomatic items should be considered separately, leaving 31 items in the questionnaire. The proposed four-factor structure addresses central aspects of well-being, which, if used constructively, may support public schools' work to increase levels of student well-being.
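
    A minimal sketch of the Cronbach's α computation used to support the proposed scales, run on simulated item responses (the 7-item scale below is illustrative, not one of the four reported scales):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: n_respondents x k_items array of scored responses."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(4)
    latent = rng.normal(size=1000)                 # one underlying construct
    items = np.column_stack([latent + rng.normal(0, 1, 1000) for _ in range(7)])
    print(cronbach_alpha(items))   # moderately high for 7 parallel items
    ```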

  5. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
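
    A sketch of the metrological consistency check implied here: an inverse-variance weighted mean of published R0 values and a chi-square test of internal consistency (the values are invented, not the 53 actual measurements):

    ```python
    import numpy as np
    from scipy import stats

    r0 = np.array([8.0, 8.4, 7.9, 8.3, 8.2])   # kpc, illustrative estimates
    err = np.array([0.4, 0.6, 0.3, 0.4, 0.5])  # quoted 1-sigma errors

    w = 1 / err**2
    mean = np.sum(w * r0) / np.sum(w)
    chi2 = np.sum(w * (r0 - mean) ** 2)   # ~chi2 with n-1 dof if consistent
    p = stats.chi2.sf(chi2, len(r0) - 1)
    print(mean, np.sqrt(1 / np.sum(w)), p)  # p >> 0.05 -> internally consistent
    ```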

  6. Parametric analyses of summative scores may lead to conflicting inferences when comparing groups: A simulation study.

    PubMed

    Khan, Asaduzzaman; Chien, Chi-Wen; Bagraith, Karl S

    2015-04-01

    To investigate whether using a parametric statistic in comparing groups leads to different conclusions when using summative scores from rating scales compared with using their corresponding Rasch-based measures. A Monte Carlo simulation study was designed to examine between-group differences in the change scores derived from summative scores from rating scales, and those derived from their corresponding Rasch-based measures, using 1-way analysis of variance. The degree of inconsistency between the 2 scoring approaches (i.e. summative and Rasch-based) was examined, using varying sample sizes, scale difficulties and person ability conditions. This simulation study revealed scaling artefacts that could arise from using summative scores rather than Rasch-based measures for determining the changes between groups. The group differences in the change scores were statistically significant for summative scores under all test conditions and sample size scenarios. However, none of the group differences in the change scores were significant when using the corresponding Rasch-based measures. This study raises questions about the validity of the inference on group differences of summative score changes in parametric analyses. Moreover, it provides a rationale for the use of Rasch-based measures, which can allow valid parametric analyses of rating scale data.

  7. Instrument Development Procedures for Rapid Reading Rate Measures. Technical Report # 08-05

    ERIC Educational Resources Information Center

    Liu, Kimy; Carling, Kristy; Geller, Leanne Ketterlin; Tindal, Gerald

    2008-01-01

    In this study, we describe the development of rapid reading measures, sentences presented to students in a nearly subliminal manner, with a literal comprehension question asked following their removal. After administering alternate forms of these measures to students, we present the results from three statistical analyses to ascertain their…

  8. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and make a significant contribution and decision for the society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypotheses which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the difficult concepts for students to grasp. The objectives of this study are to explore students’ perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students’ perceived and actual ability were measured based on instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, due to reasons that include their lack of understanding of confidence intervals and probability values.

  9. Statistical study of air pollutant concentrations via generalized gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marani, A.; Lavagnini, I.; Buttazzoni, C.

    1986-11-01

    This paper deals with modeling observed frequency distributions of air quality data measured in the area of Venice, Italy. The paper discusses the application of the generalized gamma distribution (ggd) which has not been commonly applied to air quality data notwithstanding the fact that it embodies most distribution models used for air quality analyses. The approach yields important simplifications for statistical analyses. A comparison among the ggd and other relevant models (standard gamma, Weibull, lognormal), carried out on daily sulfur dioxide concentrations in the area of Venice underlines the efficiency of ggd models in portraying experimental data.
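
    A sketch of fitting the generalized gamma alongside the simpler candidates named in the abstract, using scipy; the SO2 concentration series is simulated stand-in data, not the Venice measurements:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    so2 = stats.gamma.rvs(a=2.0, scale=15.0, size=365, random_state=rng)

    # gengamma nests the standard gamma and the Weibull, which is the
    # simplification the paper exploits; compare fits by log-likelihood.
    for dist in (stats.gengamma, stats.gamma, stats.weibull_min, stats.lognorm):
        params = dist.fit(so2, floc=0)           # location fixed at zero
        ll = dist.logpdf(so2, *params).sum()
        print(dist.name, round(ll, 1))
    ```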

  10. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring, in real time, airborne particle concentrations (according to different metrics). Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the field is still looking for an appropriate and robust method. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data have been used from a workplace study that investigated the potential for inhalation exposure during cleanout operations, by sandpapering, of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through the near-field and far-field approaches, and several size integrated and time resolved devices were used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters: one measuring at the source of the released particles, the other measuring in parallel far-field. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time resolved data obtained at the source can be compared with those issuing from the time resolved data obtained far-field, leading to a quantitative estimation of the airborne particles released at the source while the task is performed. Beyond the results obtained, this exploratory study indicates that the analysis of such results requires specific experience in statistics.
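
    A sketch of the Bayesian idea in miniature, assuming particle counts are modelled as Poisson rates with conjugate Gamma posteriors; the near-field/far-field counts are invented, and this is far simpler than the paper's actual model:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    near_counts = rng.poisson(52, size=60)  # hypothetical 1-s CPC counts at source
    far_counts = rng.poisson(40, size=60)   # simultaneous far-field background

    # Gamma(1, 1) prior on a Poisson rate gives a Gamma(1 + sum(x), 1 + n)
    # posterior; draw posterior samples for each location.
    post_near = rng.gamma(1 + near_counts.sum(), 1 / (1 + len(near_counts)), 10_000)
    post_far = rng.gamma(1 + far_counts.sum(), 1 / (1 + len(far_counts)), 10_000)

    # Particles attributable to the task = difference of the two rates.
    released = post_near - post_far
    print(np.percentile(released, [2.5, 50, 97.5]))  # 95% credible interval
    ```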

  11. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose: To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design: Cross-sectional study. Methods: All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures: Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results: Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions: Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  12. Teaching statistics in biology: using inquiry-based learning to strengthen understanding of statistical analysis in biology laboratory courses.

    PubMed

    Metz, Anneke M

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9%, improvement p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.

  13. An evaluation of various methods of treatment for Legg-Calvé-Perthes disease.

    PubMed

    Wang, L; Bowen, J R; Puniak, M A; Guille, J T; Glutting, J

    1995-05-01

    An analysis of 5 methods of treatment for Legg-Calvé-Perthes disease was performed on 124 patients with 141 affected hips. Before treatment, all groups were statistically similar with respect to initial Mose measurement, age at onset of the disease, gender, and Catterall class. Treatments included the Scottish Rite orthosis (41 hips), non-weight bearing and exercises (41 hips), Petrie cast (29 hips), femoral varus osteotomy (15 hips), or Salter osteotomy (15 hips). Hips treated with the Scottish Rite orthosis showed a significantly worse Mose measurement-by-time interaction (repeated-measures analysis of variance with post hoc analyses, p < 0.05). For the other 4 treatment methods, there was no statistically significant change. At follow-up, the Mose measurements for hips treated with the Scottish Rite orthosis were significantly worse than those for hips treated by non-weight bearing and exercises, Petrie cast, varus osteotomy, or Salter osteotomy (repeated-measures analysis of variance with post hoc analyses, p < 0.05). There was, however, no significant difference in the distribution of hips according to the Stulberg et al. classification at the last follow-up.

  14. Median statistics estimates of Hubble and Newton's constants

    NASA Astrophysics Data System (ADS)

    Bethapudi, Suryarao; Desai, Shantanu

    2017-02-01

    The robustness of any statistic depends upon the number of assumptions it makes about the measured data. We point out the advantages of median statistics using toy numerical experiments and demonstrate its robustness when the number of assumptions we can make about the data is limited. We then apply the median statistics technique to obtain estimates of two constants of nature, the Hubble constant (H0) and Newton's gravitational constant (G), both of which show significant differences between different measurements. For H0, we update the analyses done by Chen and Ratra (2011) and Gott et al. (2001) using 576 measurements. After grouping the different results according to their primary type of measurement, we find median estimates of H0 = 72.5 (+2.5/-8) km/s/Mpc, with errors corresponding to 95% c.l. (2σ), and G = 6.674702 (+0.0014/-0.0009) × 10^-11 N m^2 kg^-2, corresponding to 68% c.l. (1σ).
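
    A worked sketch of the median statistics construction: assuming only independence and the absence of systematic errors, the number of measurements below the true median is Binomial(N, 1/2), so confidence limits follow directly from order statistics. The H0 values below are illustrative, not the paper's 576-measurement compilation.

    ```python
    import numpy as np
    from scipy.stats import binom

    # Illustrative H0 values in km/s/Mpc (not the paper's compilation).
    h0 = np.sort(np.array([67.4, 69.8, 70.0, 71.9, 72.5, 73.0,
                           73.2, 74.0, 75.1, 76.8]))
    n = len(h0)

    # With only independence and no systematic errors assumed, the count
    # of measurements below the true median is Binomial(n, 1/2), so
    # cum[k] = P(true median < h0[k]), up to discreteness at the edges.
    cum = np.cumsum(binom.pmf(np.arange(n + 1), n, 0.5))

    lo = h0[np.searchsorted(cum, 0.025)]
    hi = h0[min(np.searchsorted(cum, 0.975), n - 1)]
    print(f"median H0 = {np.median(h0):.1f} km/s/Mpc, "
          f"95% range approximately [{lo}, {hi}]")
    ```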

  15. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.

  16. Data Interpretation: Using Probability

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2011-01-01

    Experimental data are analysed statistically to allow researchers to draw conclusions from a limited set of measurements. The hard fact is that researchers can never be certain that measurements from a sample will exactly reflect the properties of the entire group of possible candidates available to be studied (although using a sample is often the…

  17. A Measurement of Alienation in College Student Marihuana Users and Non-Users.

    ERIC Educational Resources Information Center

    Harris, Eileen M.

    A three part questionnaire was administered to 1380 Southern Illinois University students to: (1) elicit demographic data; (2) determine the extent of experience with marihuana; and (3) measure alienation utilizing Dean's scale. In addition, the Minnesota Multiphasic Personality Lie Inventory was given. Statistical analyses were performed to…

  18. Impact of ontology evolution on functional analyses.

    PubMed

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.

  19. Improving qPCR telomere length assays: Controlling for well position effects increases statistical power.

    PubMed

    Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey

    2015-01-01

    Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, with age, and between mothers and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. Unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error compared with the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of those results. © 2015 Wiley Periodicals, Inc.
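
    The paper's actual correction model is not reproduced in the abstract; the sketch below illustrates the simplest version of the idea, assuming an additive per-well artefact estimated as each well's mean across plates.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical qPCR T/S ratios: 10 plates x 96 wells, with a smooth
    # well-position artefact added to the true values.
    n_plates, n_wells = 10, 96
    well_effect = 0.15 * np.sin(np.arange(n_wells) / 8.0)
    ts = rng.normal(1.0, 0.15, size=(n_plates, n_wells)) + well_effect

    # Estimate a per-well fixed effect as each well's mean across plates
    # (centred), then subtract it from every measurement.
    well_means = ts.mean(axis=0)
    ts_corrected = ts - (well_means - ts.mean())

    print(f"SD of raw measurements:     {ts.std(ddof=1):.3f}")
    print(f"SD after well-position fix: {ts_corrected.std(ddof=1):.3f}")
    ```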

  20. [In-house team seminars: working together as a team--from data and statistics to quality development].

    PubMed

    Berlage, Silvia; Wenzlaff, Paul; Damm, Gabriele; Sens, Brigitte

    2010-01-01

    The concept of the "ZQ In-house Seminars" provided by external trainers/experts pursues the specific aim of enabling all healthcare staff members of hospital departments to analyse statistical data--especially from external quality measurements--and to initiate in-hospital quality improvement measures based on structured teamwork. The results of an evaluation in Lower Saxony for the period between 2004 and 2008 demonstrate a sustainable increase in the outcome quality of care and a strengthening of team and process orientation in clinical care.

  1. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
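
    As a concrete illustration of the pooling-model choice discussed above, the sketch below contrasts fixed-effect (inverse-variance) pooling with DerSimonian-Laird random-effects pooling on hypothetical study estimates; it is not part of the AHA statement, which deliberately omits statistical detail.

    ```python
    import numpy as np

    # Hypothetical log relative risks and standard errors from five studies.
    yi = np.array([-0.30, -0.10, -0.25, 0.05, -0.40])
    se = np.array([0.12, 0.20, 0.15, 0.25, 0.18])

    # Fixed-effect model: weights are inverse variances.
    w = 1 / se**2
    fixed = np.sum(w * yi) / np.sum(w)

    # DerSimonian-Laird random-effects model: estimate the between-study
    # variance tau^2 from Cochran's Q, then re-weight.
    q = np.sum(w * (yi - fixed) ** 2)
    df = len(yi) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)
    random_eff = np.sum(w_re * yi) / np.sum(w_re)

    print(f"fixed-effect estimate:   {fixed:+.3f} (SE {np.sum(w) ** -0.5:.3f})")
    print(f"random-effects estimate: {random_eff:+.3f} "
          f"(SE {np.sum(w_re) ** -0.5:.3f}, tau^2 = {tau2:.4f})")
    ```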

  2. Electric Field Magnitude and Radar Reflectivity as a Function of Distance from Cloud Edge

    NASA Technical Reports Server (NTRS)

    Ward, Jennifer G.; Merceret, Francis J.

    2004-01-01

    The results of analyses of data collected during a field investigation of thunderstorm anvil and debris clouds are reported. Statistics of the magnitude of the electric field are determined as a function of distance from cloud edge. Statistics of radar reflectivity near cloud edge are also determined. Both analyses use in-situ airborne field mill and cloud physics data coupled with ground-based radar measurements obtained in east-central Florida during the summer convective season. Electric fields outside of anvil and debris clouds averaged less than 3 kV/m. The average radar reflectivity at the cloud edge ranged between 0 and 5 dBZ.

  3. The extent and consequences of p-hacking in science.

    PubMed

    Head, Megan L; Holman, Luke; Lanfear, Rob; Kahn, Andrew T; Jennions, Michael D

    2015-03-01

    A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as "p-hacking," occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.
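
    A sketch of one simple binomial test often used in p-curve analyses of p-hacking follows; the authors' text-mining pipeline is far more involved, and the bin boundaries and data here are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import binomtest

    rng = np.random.default_rng(2)

    # Hypothetical p-values (< 0.05) harvested from a literature corpus.
    pvals = rng.uniform(0.0, 0.05, size=500)

    # p-hacking leaves a tell-tale right skew just below 0.05, so compare
    # the counts in two adjacent bins under the significance threshold.
    upper = int(np.sum((pvals > 0.045) & (pvals < 0.05)))
    lower = int(np.sum((pvals > 0.04) & (pvals <= 0.045)))

    # With no p-hacking the two narrow bins should be roughly equally
    # populated; an excess in the upper bin suggests p-hacking.
    result = binomtest(upper, upper + lower, p=0.5, alternative="greater")
    print(f"{upper} vs {lower} p-values; one-sided binomial p = {result.pvalue:.3f}")
    ```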

  4. Transfusion Indication Threshold Reduction (TITRe2) randomized controlled trial in cardiac surgery: statistical analysis plan.

    PubMed

    Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A

    2015-02-22

    The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. Trial registration: ISRCTN70923932.

  5. Mindful attention and awareness: relationships with psychopathology and emotion regulation.

    PubMed

    Gregório, Sónia; Pinto-Gouveia, José

    2013-01-01

    The growing interest in mindfulness from the scientific community has given rise to several self-report measures of this psychological construct. The Mindful Attention and Awareness Scale (MAAS) is a self-report measure of mindfulness at the trait level. This paper aims to explore the psychometric characteristics of the MAAS and to validate it for the Portuguese population. The first two studies replicate some of the original authors' statistical procedures, in particular confirmatory factor analyses, in two different samples from the Portuguese general community population. Results from both analyses confirmed the scale's single-factor structure and indicated very good reliability. Moreover, cross-validation statistics showed that this single-factor structure is valid for different respondents from the general community population. In the third study, the Portuguese version of the MAAS was found to have good convergent and discriminant validity. Overall the findings support the psychometric validity of the Portuguese version of the MAAS and suggest it is a reliable self-report measure of trait mindfulness, a central construct in clinical psychology research and intervention.

  6. Statistical Representations of Track Geometry : Volume I, Text.

    DOT National Transportation Integrated Search

    1980-03-31

    Mathematical representations of railroad track geometry variations are derived from time series analyses of track measurements. Since the majority of track is free of anomalies (turnouts, crossings, bridges, etc.), representation of anomaly-free trac...

  7. Systems and methods for detection of blowout precursors in combustors

    DOEpatents

    Lieuwen, Tim C.; Nair, Suraj

    2006-08-15

    The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device, performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on received pressure data, and determining the existence of a blowout precursor based on such analyses. The spectral analysis, statistical analysis, and wavelet analysis further comprise their respective sub-methods to determine the existence of blowout precursors.
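
    The patented sub-methods are not detailed in this record; the hedged sketch below shows the generic flavour of such precursor indicators, computing a spectral peak-concentration measure and a statistical (kurtosis) measure from a synthetic pressure record.

    ```python
    import numpy as np
    from scipy import signal, stats

    fs = 10_000                        # sample rate in Hz (hypothetical)
    t = np.arange(0, 1.0, 1 / fs)
    rng = np.random.default_rng(3)

    # Synthetic combustor pressure: a 210 Hz acoustic mode plus noise.
    # Approaching blowout, the coherent mode typically weakens and the
    # signal becomes burstier.
    pressure = 0.5 * np.sin(2 * np.pi * 210 * t) + rng.normal(0, 1, t.size)

    # Spectral indicator: how concentrated the power is at the dominant mode.
    f, pxx = signal.welch(pressure, fs=fs, nperseg=2048)
    peak_ratio = pxx.max() / pxx.mean()

    # Statistical indicator: kurtosis rises as the signal becomes bursty.
    kurt = stats.kurtosis(pressure)

    print(f"spectral peak ratio: {peak_ratio:.1f}, kurtosis: {kurt:.2f}")
    # A detection unit would track such indicators against thresholds
    # learned during stable operation and flag a precursor when crossed.
    ```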

  9. Measurement issues in research on social support and health.

    PubMed Central

    Dean, K; Holst, E; Kreiner, S; Schoenborn, C; Wilson, R

    1994-01-01

    STUDY OBJECTIVE--The aims were: (1) to identify methodological problems that may explain the inconsistencies and contradictions in the research evidence on social support and health, and (2) to validate a frequently used measure of social support in order to determine whether or not it could be used in multivariate analyses of population data in research on social support and health. DESIGN AND METHODS--Secondary analysis of data collected in a cross sectional survey of a multistage cluster sample of the population of the United States, designed to study relationships in behavioural, social support and health variables. Statistical models based on item response theory and graph theory were used to validate the measure of social support to be used in subsequent analyses. PARTICIPANTS--Data on 1755 men and women aged 20 to 64 years were available for the scale validation. RESULTS--Massive evidence of item bias was found for all items of a group membership subscale. The most serious problems were found in relationship to an item measuring membership in work related groups. Using that item in the social network scale in multivariate analyses would distort findings on the statistical effects of education, employment status, and household income. Evidence of item bias was also found for a sociability subscale. When marital status was included to create what is called an intimate contacts subscale, the confounding grew worse. CONCLUSIONS--The composite measure of social network is not valid and would seriously distort the findings of analyses attempting to study relationships between the index and other variables. The findings show that valid measurement is a methodological issue that must be addressed in scientific research on population health. PMID:8189179

  10. Challenge in Enhancing the Teaching and Learning of Variable Measurements in Quantitative Research

    ERIC Educational Resources Information Center

    Kee, Chang Peng; Osman, Kamisah; Ahmad, Fauziah

    2013-01-01

    Statistical analysis is one component that cannot be avoided in a quantitative research. Initial observations noted that students in higher education institution faced difficulty analysing quantitative data which were attributed to the confusions of various variable measurements. This paper aims to compare the outcomes of two approaches applied in…

  11. Counterbalancing and Other Uses of Repeated-Measures Latin-Square Designs: Analyses and Interpretations.

    ERIC Educational Resources Information Center

    Reese, Hayne W.

    1997-01-01

    Recommends that when repeated-measures Latin-square designs are used to counterbalance treatments across a procedural variable or to reduce the number of treatment combinations given to each participant, effects be analyzed statistically, and that in all uses, researchers consider alternative interpretations of the variance associated with the…

  12. The Psychometric Toolbox: An Excel Package for Use in Measurement and Psychometrics Courses

    ERIC Educational Resources Information Center

    Ferrando, Pere J.; Masip-Cabrera, Antoni; Navarro-González, David; Lorenzo-Seva, Urbano

    2017-01-01

    The Psychometric Toolbox (PT) is a user-friendly, non-commercial package mainly intended to be used for instructional purposes in introductory courses of educational and psychological measurement, psychometrics and statistics. The PT package is organized in six separate modules or sub-programs: Data preprocessor (descriptive analyses and data…

  13. Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.

    PubMed

    Schmitt, M; Grub, J; Heib, F

    2015-06-01

    Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed three approaches (sigmoid fitting, independent statistical analysis, and dependent statistical analysis) which are practicable for determining specific angles/slopes when the sample surface is inclined. These approaches lead to contact angle data that are independent of the operator's skill and subjectivity, which is also urgently needed for evaluating dynamic contact angle measurements. We show in this contribution that slightly modified procedures are also applicable for finding specific angles in experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are measured dynamically by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and receding of the drop, obtained by high-precision drop shape analysis, are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis owing to the small distance covered and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during withdrawal of the liquid, are identifiable, which confirms the flatness and chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Accelerated testing of space batteries

    NASA Technical Reports Server (NTRS)

    Mccallum, J.; Thomas, R. E.; Waite, J. H.

    1973-01-01

    An accelerated life test program for space batteries is presented that fully satisfies empirical, statistical, and physical criteria for validity. The program includes thermal and other nonmechanical stress analyses as well as mechanical stress, strain, and rate of strain measurements.

  15. Morphology of glochidia of Lampsilis higginsi (Bivalvia: Unionidae) compared with three related species

    USGS Publications Warehouse

    Waller, D.L.; Holland Bartels, L. E.; Mitchell, L.G.

    1988-01-01

    Glochidia of the endangered unionid mussel Lampsilis higginsi (Lea) are morphologically similar to those of several other species in the upper Mississippi River. Life history details, such as the timing of reproduction and identity of host fish, can be readily studied if the glochidia of L. higginsi can be distinguished from those of related species. Authors used light and scanning electron microscopy and statistical analyses of three shell measurements, shell length, shell height, and hinge length, to compare the glochidia of L. higginsi with those of L. radiata siliquoidea (Barnes), L. ventricosa (Barnes), and Ligumia recta (Lamarck). Glochidia of L. higginsi were differentiated by scanning electron microscopy on the basis of a combined examination of the position of the hinge ligament and the width of dorsal ridges, but were indistinguishable by light microscope examination or by statistical analyses of measurements.

  16. Evaluation of General Classes of Reliability Estimators Often Used in Statistical Analyses of Quasi-Experimental Designs

    NASA Astrophysics Data System (ADS)

    Saini, K. K.; Sehgal, R. K.; Sethi, B. L.

    2008-10-01

    In this paper, major reliability estimators are analyzed and their comparative results are discussed; their strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when the measure is an observation; however, it requires multiple raters or observers. As an alternative, one can examine the correlation of ratings by the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel-forms and internal-consistency ones, because they involve measuring at different times or with different raters. Reliability estimates of all these kinds are often used in statistical analyses of quasi-experimental designs.
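
    As a concrete example of one internal-consistency estimator from the family reviewed here, the sketch below computes Cronbach's alpha on hypothetical item scores (the paper itself gives no formulas).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical 5-item scale scored by 6 respondents.
    scores = np.array([[4, 5, 4, 4, 5],
                       [2, 3, 2, 3, 2],
                       [5, 5, 4, 5, 5],
                       [3, 3, 3, 2, 3],
                       [4, 4, 5, 4, 4],
                       [1, 2, 1, 2, 2]])
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
    ```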

  17. Investigating output and energy variations and their relationship to delivery QA results using Statistical Process Control for helical tomotherapy.

    PubMed

    Binny, Diana; Mezzenga, Emilio; Lancaster, Craig M; Trapp, Jamie V; Kairn, Tanya; Crowe, Scott B

    2017-06-01

    The aims of this study were to investigate machine beam parameters using the TomoTherapy quality assurance (TQA) tool, to establish a correlation with patient delivery quality assurance results, and to evaluate the relationship between energy variations detected using different TQA modules. TQA daily measurement results from two treatment machines for periods of up to 4 years were acquired. Analyses of beam quality and of helical and static output variations were made. Variations from planned dose were also analysed using the Statistical Process Control (SPC) technique, and their relationship to output trends was studied. Energy variations appeared to be one of the contributing factors to the delivered output dose variations seen in the analysis. Ion chamber measurements were reliable indicators of energy and output variations and were linear with patient dose verifications. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
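
    The abstract does not specify which SPC chart was applied; the sketch below uses a standard individuals (X-mR) chart on hypothetical daily output data to illustrate the general technique.

    ```python
    import numpy as np

    # Hypothetical daily output measurements (% deviation from baseline).
    output = np.array([0.2, -0.1, 0.4, 0.0, -0.3, 0.5, 0.1, 1.9,
                       0.2, -0.2, 0.3, 0.0, -0.4, 0.1])

    center = output.mean()
    mr_bar = np.abs(np.diff(output)).mean()   # mean moving range

    # Standard individuals-chart limits: centre +/- 2.66 * mean moving range.
    ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
    out_of_control = np.where((output > ucl) | (output < lcl))[0]
    print(f"centre {center:.2f}, limits [{lcl:.2f}, {ucl:.2f}], "
          f"out-of-control days: {out_of_control.tolist()}")
    ```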

  18. Cocaine profiling for strategic intelligence, a cross-border project between France and Switzerland: part II. Validation of the statistical methodology for the profiling of cocaine.

    PubMed

    Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P

    2008-05-20

    Harmonisation and optimisation of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence on cross-border cocaine seizures. Part I dealt with the optimisation of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set), followed by the Cosine or Pearson correlation coefficient, was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralisation of the analyses in a single laboratory is therefore no longer a required condition for comparing samples seized in different countries. This permits collaboration, but also jurisdictional control over data.
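
    A minimal sketch of the winning combination reported above, the N+S pre-treatment followed by a cosine comparison, applied to hypothetical GC-FID peak areas (the real target-compound list and seizure data are not reproduced):

    ```python
    import numpy as np

    # Hypothetical GC-FID peak areas: rows are cocaine seizures, columns
    # are target alkaloid peaks.
    profiles = np.array([[120., 30., 55., 10.],
                         [118., 29., 57., 11.],
                         [ 60., 80., 12., 40.]])

    # "N+S" pre-treatment: divide each peak by its standard deviation
    # computed over the whole data set.
    pretreated = profiles / profiles.std(axis=0, ddof=1)

    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    # Linked seizures (rows 0 and 1) should score near 1; unlinked lower.
    print(f"linked pair:   {cosine(pretreated[0], pretreated[1]):.3f}")
    print(f"unlinked pair: {cosine(pretreated[0], pretreated[2]):.3f}")
    ```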

  19. A new method to reduce the statistical and systematic uncertainty of chance coincidence backgrounds measured with waveform digitizers

    DOE PAGES

    O'Donnell, John M.

    2015-06-30

    We present a new method for measuring chance-coincidence backgrounds during the collection of coincidence data. The method relies on acquiring data with near-zero dead time, which is now realistic due to the increasing deployment of flash electronic-digitizer (waveform digitizer) techniques. An experiment designed to use this new method is capable of acquiring more coincidence data, with a much reduced statistical fluctuation of the measured background. A statistical analysis is presented and used to derive a figure of merit for the new method; factors of four improvement over other analyses are realistic. The technique is illustrated with preliminary data taken as part of a program to make new measurements of the prompt fission neutron spectra at the Los Alamos Neutron Science Center. It is expected that these measurements will occur in a regime where the maximum figure of merit will be exploited.

  20. Patient experience and process measures of quality of care at home health agencies: Factors associated with high performance.

    PubMed

    Smith, Laura M; Anderson, Wayne L; Lines, Lisa M; Pronier, Cristalle; Thornburg, Vanessa; Butler, Janelle P; Teichman, Lori; Dean-Whittaker, Debra; Goldstein, Elizabeth

    2017-01-01

    We examined the effects of provider characteristics on home health agency performance on patient experience of care (Home Health CAHPS) and process (OASIS) measures. Descriptive, multivariate, and factor analyses were used. While agencies score high on both domains, factor analyses showed that the underlying items represent separate constructs. Freestanding and Visiting Nurse Association agencies, higher number of home health aides per 100 episodes, and urban location were statistically significant predictors of lower performance. Lack of variation in composite measures potentially led to counterintuitive results for effects of organizational characteristics. This exploratory study showed the value of having separate quality domains.

  1. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  2. Reduction of Complications of Local Anaesthesia in Dental Healthcare Setups by Application of the Six Sigma Methodology: A Statistical Quality Improvement Technique.

    PubMed

    Akifuddin, Syed; Khatoon, Farheen

    2015-12-01

    Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures and to reduce their incidence by introducing the Six Sigma methodology. The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was used, together with failure mode and effect analysis, to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures. Pareto analysis was used to identify the most recurrent complications. The data obtained were statistically analysed using a paired z-test in Minitab and Fisher's exact test; a p-value <0.05 was considered significant. In total, 54 systemic and 62 local complications occurred during the three months of the analyse and measure phases. Syncope, failure of anaesthesia, trismus, auto mordeduras (self-inflicted bite injuries) and pain at the injection site were found to be the most recurrent complications. The cumulative defective percentage was 7.99 for the pre-improvement data and decreased to 4.58 in the control phase. The estimate for the difference was 0.0341228, with a 95% lower bound for the difference of 0.0193966; the p-value was highly significant (p = 0.000). The application of the Six Sigma improvement methodology in healthcare tends to deliver consistently better results to patients as well as hospitals, and results in better patient compliance and satisfaction.
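
    A minimal sketch of the Pareto step of the DMAIC cycle, using hypothetical complication counts (the study's raw counts are not given in the abstract):

    ```python
    # Hypothetical complication counts from the measure phase.
    complications = {"syncope": 32, "failure of anaesthesia": 27,
                     "trismus": 18, "auto mordeduras": 14,
                     "pain at injection site": 11, "other": 14}

    counts = sorted(complications.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in counts)
    cum = 0
    for name, c in counts:
        cum += c
        print(f"{name:24s} {c:3d}  cumulative {100 * cum / total:5.1f}%")
    # The "vital few" categories crossing ~80% cumulative share are the
    # priority targets for the improve phase.
    ```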

  4. Preliminary Evidence on the Effectiveness of Psychological Treatments Delivered at a University Counseling Center

    ERIC Educational Resources Information Center

    Minami, Takuya; Davies, D. Robert; Tierney, Sandra Callen; Bettmann, Joanna E.; McAward, Scott M.; Averill, Lynnette A.; Huebner, Lois A.; Weitzman, Lauren M.; Benbrook, Amy R.; Serlin, Ronald C.; Wampold, Bruce E.

    2009-01-01

    Treatment data from a university counseling center (UCC) that utilized the Outcome Questionnaire-45.2 (OQ-45; M. J. Lambert et al., 2004), a self-report general clinical symptom measure, was compared against treatment efficacy benchmarks from clinical trials of adult major depression that utilized similar measures. Statistical analyses suggested…

  5. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  6. SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2002-01-01

    This atlas presents detailed intercomparisons of several climatological wind and temperature data sets covering the middle atmosphere (altitudes of approximately 10-80 km). A number of middle atmosphere climatologies have been developed in the research community based on a variety of meteorological analyses and satellite data sets. Here we present comparisons between these climatological data sets for a number of basic circulation statistics, such as zonal mean temperature, winds and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. We also include comparisons between the global climatologies and historical rocketsonde wind and temperature measurements, as well as more recent lidar temperature data. These comparisons highlight differences and uncertainties in contemporary middle atmosphere data sets and allow biases in particular analyses to be isolated. In addition, a brief atlas of zonal mean temperature and wind statistics is provided to highlight data availability and to serve as a quick-look reference. This technical report is intended as a companion to the climatological data sets held in archive at the SPARC Data Center (http://www.sparc.sunysb.edu).

  7. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauged sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analysed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities in the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis valuing the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites, and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to generate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. The two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, the results show that valuing information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.

  8. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    PubMed

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability ( r  = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.

  10. Considerations in the statistical analysis of clinical trials in periodontitis.

    PubMed

    Imrey, P B

    1986-05-01

    Adult periodontitis has been described as a chronic infectious process exhibiting sporadic, acute exacerbations which cause quantal, localized losses of dental attachment. Many analytic problems of periodontal trials are similar to those of other chronic diseases. However, the episodic, localized, infrequent, and relatively unpredictable behavior of exacerbations, coupled with measurement error difficulties, causes some specific problems. Considerable controversy exists as to the proper selection and treatment of multiple-site data from the same patient in group comparisons for epidemiologic or therapeutic evaluative purposes. This paper comments, with varying degrees of emphasis, on several issues pertinent to the analysis of periodontal trials. Considerable attention is given to the ways in which measurement variability may distort analytic results. Statistical treatments of multiple-site data for descriptive summaries are distinguished from treatments for formal statistical inference to validate therapeutic effects. Evidence suggesting that sites behave independently is contested. For inferential analyses directed at therapeutic or preventive effects, analytic models based on site independence are deemed unsatisfactory. Methods of summarization that may yield more powerful analyses than all-site mean scores, while retaining appropriate treatment of inter-site associations, are suggested. Brief comments and opinions on an assortment of other issues in clinical trial analysis are offered.

  11. Size and shape measurement in contemporary cephalometrics.

    PubMed

    McIntyre, Grant T; Mossey, Peter A

    2003-06-01

    The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse the craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.

  12. Models of dyadic social interaction.

    PubMed Central

    Griffin, Dale; Gonzalez, Richard

    2003-01-01

    We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analysis' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382

  13. Dissecting the genetics of complex traits using summary association statistics

    PubMed Central

    Pasaniuc, Bogdan; Price, Alkes L.

    2017-01-01

    During the past decade, genome-wide association studies (GWAS) have successfully identified tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyze summary association statistics. Here we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases. PMID:27840428

  14. Fundamentals of Petroleum.

    ERIC Educational Resources Information Center

    Bureau of Naval Personnel, Washington, DC.

    Basic information on petroleum is presented in this book prepared for naval logistics officers. Petroleum in national defense is discussed in connection with consumption statistics, productive capacity, world's resources, and steps in logistics. Chemical and geological analyses are made in efforts to familiarize methods of refining, measuring,…

  15. Health And Safety In Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Ungureanu, Nicolae Stelian; Daraba, Dinu; Moraru, Roland Iosif

    2015-07-01

    The paper examines some aspects of health and safety at work in maintenance activities. The occurrence of accidents in maintenance work was analysed statistically. A number of causes of accidents were identified, and measures to reduce them are proposed.

  16. Statistical Model of Dynamic Markers of the Alzheimer's Pathological Cascade.

    PubMed

    Balsis, Steve; Geraci, Lisa; Benge, Jared; Lowe, Deborah A; Choudhury, Tabina K; Tirso, Robert; Doody, Rachelle S

    2018-05-05

    Alzheimer's disease (AD) is a progressive disease reflected in markers across assessment modalities, including neuroimaging, cognitive testing, and evaluation of adaptive function. Identifying a single continuum of decline across assessment modalities in a single sample is statistically challenging because of the multivariate nature of the data. To address this challenge, we implemented advanced statistical analyses designed specifically to model complex data across a single continuum. We analyzed data from the Alzheimer's Disease Neuroimaging Initiative (ADNI; N = 1,056), focusing on indicators from the assessments of magnetic resonance imaging (MRI) volume, fluorodeoxyglucose positron emission tomography (FDG-PET) metabolic activity, cognitive performance, and adaptive function. Item response theory was used to identify the continuum of decline. Then, through a process of statistical scaling, indicators across all modalities were linked to that continuum and analyzed. Findings revealed that measures of MRI volume, FDG-PET metabolic activity, and adaptive function added measurement precision beyond that provided by cognitive measures, particularly in the relatively mild range of disease severity. More specifically, MRI volume and FDG-PET metabolic activity become compromised in the very mild range of severity, followed by cognitive performance and, finally, adaptive function. Our statistically derived models of the AD pathological cascade are consistent with existing theoretical models.

  17. A reply to Zigler and Seitz.

    PubMed

    Neman, R

    1975-03-01

    The Zigler and Seitz (1975) critique was carefully examined with respect to the conclusions of the Neman et al. (1975) study. Particular attention was given to the following questions: (a) did experimenter bias or commitment account for the results, (b) were unreliable and invalid psychometric instruments used, (c) were the statistical analyses insufficient or incorrect, (d) did the results reflect no more than the operation of chance, and (e) were the results biased by artifactually inflated profile scores. Experimenter bias and commitment were shown to be insufficient to account for the results; a further review of Buros (1972) showed that there was no need for apprehension about the testing instruments; the statistical analyses were shown to exceed prevailing standards for research reporting; the results were shown to reflect valid findings at the .05 probability level; and the Neman et al. (1975) results for the profile measure were equally significant using either "raw" neurological scores or "scaled" neurological age scores. Zigler, Seitz, and I agreed on the need for (a) using multivariate analyses, where applicable, in studies having more than one dependent variable; (b) defining the population for which sensorimotor training procedures may be appropriately prescribed; and (c) validating the profile measure as a tool to assess neurological disorganization.

  18. [Correlation of dental age and anthropometric parametres of the overall growth and development in children].

    PubMed

    Triković-Janjić, Olivera; Apostolović, Mirjana; Janosević, Mirjana; Filipović, Gordana

    2008-02-01

    Anthropometric methods of measuring the whole body and body parts are the most commonly applied methods of analysing the growth and development of children. Anthropometric measures are interconnected, so that with growth and development the change of one parameter causes changes in the others. The aim of the paper was to analyse whether dental development follows overall growth and development and what the strength of this interdependence is. The research involved a sample of 134 participants, aged between 6 and 8 years. Dental age was determined as the average of the sum of existing permanent teeth in the participants aged 6, 7 and 8. To analyse physical growth and development, commonly accepted anthropometric indexes were applied: height, weight, and circumference of the head, the chest cavity at its widest point, the upper arm, the abdomen and the thigh, as well as thickness of the epidermis. The dimensions were measured according to the methodology of the International Biological Programme. The influence of the predictor variables on the analysed variable was determined by multivariable regression. The mean values of all the anthropometric parameters, except for the thickness of the epidermis, were slightly higher in male participants, and the circumference of the chest cavity was significantly larger (p < 0.05). The results of anthropometric measurement showed in general a distinct homogeneity of the sample, both overall and within gender, in relation to all dimensions except the thickness of the epidermis. The average dental age of the participants was 10.36 (10.42 and 10.31 for females and males, respectively). A considerable correlation (R = 0.59) with high statistical significance (p < 0.001) was determined between dental age and the set of anthropometric parameters of general growth and development. This considerable positive correlation confirms that dental development follows the overall growth and development of children aged between 6 and 8 years.

  19. Predictors of persistent pain after total knee arthroplasty: a systematic review and meta-analysis.

    PubMed

    Lewis, G N; Rice, D A; McNair, P J; Kluger, M

    2015-04-01

    Several studies have identified clinical, psychosocial, patient-characteristic, and perioperative variables that are associated with persistent postsurgical pain; however, the relative effect of these variables has yet to be quantified. The aim of the study was to provide a systematic review and meta-analysis of predictor variables associated with persistent pain after total knee arthroplasty (TKA). Included studies were required to measure predictor variables prior to or at the time of surgery, include a pain outcome measure at least 3 months post-TKA, and include a statistical analysis of the effect of the predictor variable(s) on the outcome measure. Counts were undertaken of the number of times each predictor was analysed and the number of times it was found to have a significant relationship with persistent pain. Separate meta-analyses were performed to determine the effect size of each predictor on persistent pain. Outcomes from studies implementing uni- and multivariable statistical models were analysed separately. Thirty-two studies involving almost 30,000 patients were included in the review. Preoperative pain was the predictor that most commonly demonstrated a significant relationship with persistent pain across uni- and multivariable analyses. In the meta-analyses of data from univariate models, the largest effect sizes were found for other pain sites, catastrophizing, and depression. For data from multivariate models, significant effects were evident for catastrophizing, preoperative pain, mental health, and comorbidities. Catastrophizing, mental health, preoperative knee pain, and pain at other sites are the strongest independent predictors of persistent pain after TKA. © The Author 2014. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Reduction of Complications of Local Anaesthesia in Dental Healthcare Setups by Application of the Six Sigma Methodology: A Statistical Quality Improvement Technique

    PubMed Central

    Khatoon, Farheen

    2015-01-01

    Background Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. Aim The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures, and to reduce the incidence of these complications by introduction of the Six Sigma methodology. Materials and Methods The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was used to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures, using failure mode and effect analysis. Pareto analysis was used to identify the most frequently recurring complications. A paired z-test performed in Minitab statistical software and Fisher's exact test were used to statistically analyse the obtained data; p < 0.05 was considered significant. Results In total, 54 systemic and 62 local complications occurred during the three months of the Measure and Analyse phases. Syncope, failure of anaesthesia, trismus, auto mordeduras (self-inflicted bites) and pain at the injection site were the most frequently recurring complications. The cumulative defective percentage was 7.99 for the pre-improvement data and decreased to 4.58 in the Control phase. The estimate for the difference was 0.0341228 and the 95% lower bound for the difference was 0.0193966. The difference was highly significant (p < 0.001). Conclusion The application of the Six Sigma improvement methodology in healthcare tends to deliver consistently better results to patients as well as hospitals, and results in better patient compliance and satisfaction. PMID:26816989
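
    The Pareto step described above amounts to ranking complication counts and accumulating their percentages to isolate the 'vital few' causes. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the calculation):

    ```python
    import pandas as pd

    # Hypothetical complication counts gathered during the Measure phase
    counts = pd.Series({
        "syncope": 28, "failure of anaesthesia": 24, "trismus": 14,
        "auto mordeduras": 12, "pain at injection site": 10, "other": 8,
    }).sort_values(ascending=False)

    # Cumulative percentage identifies the few causes driving most complications
    cum_pct = 100 * counts.cumsum() / counts.sum()
    print(pd.DataFrame({"count": counts, "cumulative %": cum_pct.round(1)}))
    ```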

  1. Using Network Analysis to Characterize Biogeographic Data in a Community Archive

    NASA Astrophysics Data System (ADS)

    Wellman, T. P.; Bristol, S.

    2017-12-01

    Informative measures are needed to evaluate and compare data from multiple providers in a community-driven data archive. This study explores insights from network theory and other descriptive and inferential statistics to examine data content and application across an assemblage of publicly available biogeographic data sets. The data are archived in ScienceBase, a collaborative catalog of scientific data supported by the U.S. Geological Survey to enhance scientific inquiry and acuity. Through this investigation and related scientific venues, our goal is to improve scientific insight and data use across a spectrum of scientific applications. Network analysis is a tool to reveal patterns of non-trivial topological features in data that exhibit neither complete regularity nor complete randomness. In this work, network analyses are used to explore shared events and dependencies between measures of data content and application derived from metadata and catalog information and measures relevant to biogeographic study. Descriptive statistical tools are used to explore relations between network analysis properties, while inferential statistics are used to evaluate the degree of confidence in these assessments. Network analyses have been used successfully in related fields to examine social awareness of scientific issues, taxonomic structures of biological organisms, and ecosystem resilience to environmental change. Use of network analysis also shows promising potential to identify relationships in biogeographic data that inform programmatic goals and scientific interests.

  2. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  3. ParallABEL: an R library for generalized parallelization of genome-wide association studies.

    PubMed

    Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S

    2010-04-29

    Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. Acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP) or trait, such as SNP characterization statistics or association test statistics; the input data of this group are the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example the summary statistics of genotype quality for each sample; the input data of this group are the individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group are pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation; the input data of this group are pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), comprising 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses; for example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.
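
    ParallABEL itself is an R library built on Rmpi, but the partitioning idea, splitting the work by input type and farming independent pieces out to workers, can be sketched in Python for illustration (the per-SNP statistic and all names below are hypothetical placeholders, not ParallABEL's code):

    ```python
    import numpy as np
    from multiprocessing import Pool

    def snp_stat(genotype_column):
        # Placeholder per-SNP statistic (group 1 above); a real GWA analysis
        # would fit an association model for this SNP instead.
        return float(np.nanmean(genotype_column))

    def run_parallel(genotypes, n_workers=4):
        # Partition the individuals-by-SNPs matrix into per-SNP tasks and
        # distribute them across worker processes.
        columns = [genotypes[:, j] for j in range(genotypes.shape[1])]
        with Pool(n_workers) as pool:
            return pool.map(snp_stat, columns)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        g = rng.integers(0, 3, size=(1000, 5000)).astype(float)  # individuals x SNPs
        print(len(run_parallel(g)))
    ```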

  4. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures of the liquid-gas phase transition in nuclear multifragmentation processes, namely the multiplicity derivatives, the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ), are examined within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All of these measures predict a critical signature at or near the critical point for both the primary and secondary fragments. Among them, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as the primary fragments. The present study provides a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
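
    For reference, the normalized variance of the maximum fragment charge number is conventionally defined as the variance of Zmax scaled by its mean; this is the convention assumed here, since the abstract does not spell it out:

    ```latex
    \mathrm{NVZ} = \frac{\sigma^{2}_{Z_{\max}}}{\langle Z_{\max} \rangle}
                 = \frac{\langle Z_{\max}^{2} \rangle - \langle Z_{\max} \rangle^{2}}{\langle Z_{\max} \rangle}
    ```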

  5. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
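
    The online stopping rule described above can be sketched generically: track a performance indicator per generation and stop once its recent variance falls below a threshold or its recent trend flattens. A toy sketch under those assumptions (window size and thresholds are hypothetical):

    ```python
    import numpy as np

    def should_stop(history, window=20, var_threshold=1e-6, slope_threshold=1e-8):
        """Toy online convergence check for a per-generation indicator trace."""
        if len(history) < window:
            return False
        recent = np.asarray(history[-window:])
        if recent.var() < var_threshold:       # indicator has settled
            return True
        slope = np.polyfit(np.arange(window), recent, 1)[0]
        return abs(slope) < slope_threshold    # overall trend has stagnated

    # Stand-in for a hypervolume trace from an MOEA run
    history = []
    for gen in range(1000):
        history.append(1.0 - np.exp(-gen / 50.0))
        if should_stop(history):
            print("stopping at generation", gen)
            break
    ```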

  6. Analysis of repeated measurement data in the clinical trials

    PubMed Central

    Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa

    2013-01-01

    Statistics is an integral part of clinical trials. Elements of statistics span clinical trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of clinical trials. In biomedical research, researchers frequently use the t-test and ANOVA to compare means between groups of interest irrespective of the nature of the data. In clinical trials, data are often recorded on the same patients at more than two time points. In such situations, standard ANOVA procedures are not appropriate because they do not account for dependencies between observations within subjects. To deal with such study data, repeated measures ANOVA should be used. In this article, the application of one-way repeated measures ANOVA is demonstrated using SPSS (Statistical Package for the Social Sciences) Version 15.0 on data collected at four time points (0 day, 15th day, 30th day, and 45th day) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
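
    A one-way repeated measures ANOVA of this shape can also be run outside SPSS, for instance with statsmodels in Python; the haemoglobin values below are hypothetical stand-ins, not the trial's data:

    ```python
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one reading per patient at each time point
    df = pd.DataFrame({
        "patient": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
        "day":     [0, 15, 30, 45] * 3,
        "hb":      [8.1, 8.9, 9.6, 10.2, 7.5, 8.0, 8.8, 9.5, 8.4, 9.0, 9.9, 10.6],
    })

    # 'day' is the within-subject factor, so the dependence among each
    # patient's repeated readings is respected, unlike a standard ANOVA.
    print(AnovaRM(data=df, depvar="hb", subject="patient", within=["day"]).fit())
    ```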

  7. RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.

    PubMed

    Glaab, Enrico; Schneider, Reinhard

    2015-07-01

    High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web-service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plot, heat map and principal component analysis visualizations to interpret omics data and derived statistics. Freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  8. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.

  9. 77 FR 33120 - Truth in Lending (Regulation Z)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-05

    ... FHFA's release of historical data on loan volumes and delinquency rates, including any tabulations or... with varying characteristics and to perform other statistical analyses that may assist the Bureau in... definitions of a ``qualified mortgage.'' For example, the Bureau is examining various measures of delinquency...

  10. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
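
    A minimal sketch of the constant-substitution approach described above, showing how the choice of constant shifts the summary statistics (the values and LOD are hypothetical):

    ```python
    import numpy as np

    lod = 0.5
    raw = np.array([0.8, np.nan, 1.2, np.nan, 2.0, 0.6, np.nan])  # NaN = non-detect

    # Common substitution constants; each biases the mean and SD differently
    for label, fill in [("LOD/2", lod / 2), ("LOD", lod), ("0", 0.0)]:
        filled = np.where(np.isnan(raw), fill, raw)
        print(label, "mean:", round(filled.mean(), 3), "sd:", round(filled.std(ddof=1), 3))
    ```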

  11. The effect of the involvement of the dominant or non-dominant hand on grip/pinch strengths and the Levine score in patients with carpal tunnel syndrome.

    PubMed

    Zyluk, A; Walaszek, I

    2012-06-01

    The Levine questionnaire is a disease-oriented instrument developed for outcome measurement in the management of carpal tunnel syndrome (CTS). The objective of this study was to compare Levine scores in patients with unilateral CTS, involving the dominant or non-dominant hand, before and after carpal tunnel release. Records of 144 patients treated operatively for unilateral CTS, 126 women (87%) and 18 men (13%) with a mean age of 58 years, were analysed. The dominant hand was involved in 100 patients (69%), the non-dominant in 44 (31%). The parameters were analysed pre-operatively, and at 1 and 6 months post-operatively. A comparison of Levine scores in patients with involvement of the dominant or non-dominant hand showed no statistically significant differences at baseline or at any of the follow-up measurements. Statistically significant differences were noted in total grip strength at the baseline and 6-month assessments, and in key-pinch strength at 1 and 6 months.

  12. Analysis of Cross-Sectional Univariate Measurements for Family Dyads Using Linear Mixed Modeling

    PubMed Central

    Knafl, George J.; Dixon, Jane K.; O'Malley, Jean P.; Grey, Margaret; Deatrick, Janet A.; Gallo, Agatha M.; Knafl, Kathleen A.

    2010-01-01

    Outcome measurements from members of the same family are likely correlated. Such intrafamilial correlation (IFC) is an important dimension of the family as a unit but is not always accounted for in analyses of family data. This article demonstrates the use of linear mixed modeling to account for IFC in the important special case of univariate measurements for family dyads collected at a single point in time. Example analyses are provided using data from partnered parents of a child with a chronic condition, addressing the child's adaptation to the condition and the family's general functioning and management of the condition. Analyses of this kind are reasonably straightforward to generate with popular statistical tools. Thus, it is recommended that IFC be reported as standard practice, reflecting the fact that a family dyad is more than just the aggregate of two individuals. Moreover, not accounting for IFC can affect the conclusions. PMID:19307316
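
    A random-intercept mixed model is one straightforward way to estimate such an IFC: the between-family variance component, relative to the total, gives an intraclass correlation. A minimal sketch in Python's statsmodels (not the authors' software; all scores are hypothetical):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dyadic data: one score per parent, two parents per family
    df = pd.DataFrame({
        "family": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
        "score":  [3.2, 3.5, 2.1, 2.4, 4.0, 3.8, 2.9, 3.1, 3.6, 3.3],
    })

    # Random intercept per family captures the intrafamilial correlation
    fit = smf.mixedlm("score ~ 1", df, groups=df["family"]).fit()
    var_family = fit.cov_re.iloc[0, 0]  # between-family variance
    var_resid = fit.scale               # within-family (residual) variance
    print("IFC estimate:", round(var_family / (var_family + var_resid), 3))
    ```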

  13. Gait patterns for crime fighting: statistical evaluation

    NASA Astrophysics Data System (ADS)

    Sulovská, Kateřina; Bělašková, Silvie; Adámek, Milan

    2013-10-01

    Criminality has been omnipresent throughout human history. Modern technology brings novel opportunities for the identification of a perpetrator. One of these opportunities is the analysis of video recordings, which may be taken during the crime itself or before/after it. Video analysis can be classed as identification analysis, i.e., identification of a person via external characteristics. The study of bipedal locomotion focuses on human movement on the basis of anatomical-physiological features. Nowadays, human gait is tested by many laboratories to learn whether identification via bipedal locomotion is possible. The aim of our study is to use 2D components out of 3D data from the VICON Mocap system for deep statistical analyses. This paper introduces recent results of a fundamental study focused on various gait patterns under different conditions. The study contains data from 12 participants. Curves obtained from these measurements were sorted, averaged and statistically tested to estimate the stability and distinctiveness of this biometric. Results show satisfactory distinctness of some chosen points, while others do not embody a significant difference. However, the results presented in this paper are from the initial phase of deeper and more exacting analyses of gait patterns under different conditions.

  14. Relationship between athletes' emotional intelligence and precompetitive anxiety.

    PubMed

    Lu, Frank J-H; Li, Gladys Shuk-fong; Hsu, Eva Ya-wen; Williams, Lavon

    2010-02-01

    This study examined the relationship between athletes' Emotional Intelligence (EI) and precompetitive anxiety. Taiwanese intercollegiate track and field athletes (N = 111; 64 men, 47 women) completed the Bar-On EQ-i 1 mo. before a national intercollegiate athletic meet, and the Competitive State Anxiety Inventory-2R 1 hr. before the competition. Analyses indicated that participants with the lowest EI scores reported greater intensity of precompetitive cognitive anxiety than those with the highest EI scores. No other statistically significant differences were found among the groups. Further, correlational analyses and multiple stepwise regression analyses revealed that EI components such as stress management, intrapersonal EI, and interpersonal EI were associated with precompetitive anxiety. Current EI measures provide limited understanding of precompetitive anxiety; a sport-specific EI measure is needed for future research.

  15. Statistical analysis of iron geochemical data suggests limited late Proterozoic oxygenation

    NASA Astrophysics Data System (ADS)

    Sperling, Erik A.; Wolock, Charles J.; Morgan, Alex S.; Gill, Benjamin C.; Kunzmann, Marcus; Halverson, Galen P.; MacDonald, Francis A.; Knoll, Andrew H.; Johnston, David T.

    2015-07-01

    Sedimentary rocks deposited across the Proterozoic-Phanerozoic transition record extreme climate fluctuations, a potential rise in atmospheric oxygen or re-organization of the seafloor redox landscape, and the initial diversification of animals. It is widely assumed that the inferred redox change facilitated the observed trends in biodiversity. Establishing this palaeoenvironmental context, however, requires that changes in marine redox structure be tracked by means of geochemical proxies and translated into estimates of atmospheric oxygen. Iron-based proxies are among the most effective tools for tracking the redox chemistry of ancient oceans. These proxies are inherently local, but have global implications when analysed collectively and statistically. Here we analyse about 4,700 iron-speciation measurements from shales 2,300 to 360 million years old. Our statistical analyses suggest that subsurface water masses in mid-Proterozoic oceans were predominantly anoxic and ferruginous (depleted in dissolved oxygen and iron-bearing), but with a tendency towards euxinia (sulfide-bearing) that is not observed in the Neoproterozoic era. Analyses further indicate that early animals did not experience appreciable benthic sulfide stress. Finally, unlike proxies based on redox-sensitive trace-metal abundances, iron geochemical data do not show a statistically significant change in oxygen content through the Ediacaran and Cambrian periods, sharply constraining the magnitude of the end-Proterozoic oxygen increase. Indeed, this re-analysis of trace-metal data is consistent with oxygenation continuing well into the Palaeozoic era. Therefore, if changing redox conditions facilitated animal diversification, it did so through a limited rise in oxygen past critical functional and ecological thresholds, as is seen in modern oxygen minimum zone benthic animal communities.

  16. Field and laboratory analyses of water from the Columbia aquifer in Eastern Maryland

    USGS Publications Warehouse

    Bachman, L.J.

    1984-01-01

    Field and laboratory analyses of pH, alkalinity, and specific conductance from water samples collected from the Columbia aquifer on the Delmarva Peninsula in eastern Maryland were compared to determine if laboratory analyses could be used for making regional water-quality interpretations. Kruskal-Wallis tests of field and laboratory data indicate that the difference between field and laboratory values is usually not enough to affect the outcome of the statistical tests. Thus, laboratory measurements of these constituents may be adequate for making certain regional water-quality interpretations, although they may result in errors if used for geochemical interpretations.

  17. Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion

    NASA Technical Reports Server (NTRS)

    Kojima, Jun J.; Fischer, David G.

    2012-01-01

    We present our strategic experiment and thermochemical analyses of combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise, single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.

  18. Literature review of some selected types of results and statistical analyses of total-ozone data. [for the ozonosphere

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1976-01-01

    The depletion of ozone in the stratosphere is examined, and causes for the depletion are cited. Ground station and satellite measurements of ozone, which are taken on a worldwide basis, are discussed. Instruments used in ozone measurement are described, such as the Dobson spectrophotometer, which is credited with providing the longest and most extensive series of ground-based observations of stratospheric ozone. Other ground-based instruments used to measure ozone are also discussed. The statistical differences between ground-based measurements of ozone from these different instruments are compared to each other, and to satellite measurements. Mathematical methods (i.e., trend analysis or linear regression analysis) of analyzing the variability of ozone concentration with respect to time and latitude are described. Various time series models which can be employed in accounting for ozone concentration variability are examined.

  19. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    PubMed

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.

  20. Bayesian analyses of seasonal runoff forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.

  1. Agriculture, population growth, and statistical analysis of the radiocarbon record.

    PubMed

    Zahid, H Jabran; Robinson, Erick; Kelly, Robert L

    2016-01-26

    The human population has grown significantly since the onset of the Holocene about 12,000 y ago. Despite decades of research, the factors determining prehistoric population growth remain uncertain. Here, we examine measurements of the rate of growth of the prehistoric human population based on statistical analysis of the radiocarbon record. We find that, during most of the Holocene, human populations worldwide grew at a long-term annual rate of 0.04%. Statistical analysis of the radiocarbon record shows that transitioning farming societies experienced the same rate of growth as contemporaneous foraging societies. The same rate of growth measured for populations dwelling in a range of environments and practicing a variety of subsistence strategies suggests that the global climate and/or endogenous biological factors, not adaptability to local environment or subsistence practices, regulated the long-term growth of the human population during most of the Holocene. Our results demonstrate that statistical analyses of large ensembles of radiocarbon dates are robust and valuable for quantitatively investigating the demography of prehistoric human populations worldwide.
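
    As a rough check on the magnitude of that rate, a constant 0.04% annual growth implies a population doubling time on the order of 1,700 years:

    ```latex
    N(t) = N_0 e^{rt}, \qquad r \approx 0.0004\ \mathrm{yr}^{-1}
    \quad\Rightarrow\quad t_{2} = \frac{\ln 2}{r} \approx \frac{0.693}{0.0004} \approx 1{,}700\ \mathrm{yr}
    ```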

  2. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of effect size are explained. Formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
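
    As one concrete instance of the formulae the review discusses, Cohen's d for two independent groups divides the difference in means by the pooled standard deviation. A minimal sketch with hypothetical scores:

    ```python
    import numpy as np

    def cohens_d(x, y):
        """Cohen's d for two independent groups, using the pooled SD."""
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

    # Hypothetical outcome scores for intervention and control groups
    intervention = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 2.7])
    control = np.array([4.0, 3.8, 4.4, 3.9, 4.2, 3.6])
    print(round(cohens_d(intervention, control), 2))  # negative: intervention lower
    ```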

  3. A methodology using in-chair movements as an objective measure of discomfort for the purpose of statistically distinguishing between similar seat surfaces.

    PubMed

    Cascioli, Vincenzo; Liu, Zhuofu; Heusch, Andrew; McCarthy, Peter W

    2016-05-01

    This study presents a method for objectively measuring in-chair movement (ICM) that shows correlation with subjective ratings of comfort and discomfort. Employing a cross-over, controlled, single-blind design, healthy young subjects (n = 21) sat for 18 min on each of the following surfaces: contoured foam, straight foam and wood. Force-sensitive resistors attached to the sitting interface measured the relative movements of the subjects during sitting. The purpose of this study was to determine whether ICM could statistically distinguish between each seat material, including two with subtle design differences. In addition, this study investigated methodological considerations, in particular appropriate threshold selection and sitting duration, when analysing objective movement data. ICM appears to be able to statistically distinguish between similar foam surfaces, as long as appropriate ICM thresholds and sufficient sitting durations are used. A relationship between greater ICM and increased discomfort, and lesser ICM and increased comfort, was also found. Copyright © 2016. Published by Elsevier Ltd.

  4. Fluctuating asymmetry in broiler chickens: a decision protocol for trait selection in seven measuring methods.

    PubMed

    Van Nuffel, A; Tuyttens, F A M; Van Dongen, S; Talloen, W; Van Poucke, E; Sonck, B; Lens, L

    2007-12-01

    Nonidentical development of bilateral traits due to disturbing genetic or developmental factors is called fluctuating asymmetry (FA) if such deviations are continuously distributed. Fluctuating asymmetry is believed to be a reliable indicator of the fitness and welfare of an animal. Despite an increasing body of research, the link between FA and animal performance or welfare is reported to be inconsistent, possibly, among other reasons, due to inaccurate measuring protocols or incorrect statistical analyses. This paper reviews problems of interpreting FA results in poultry and provides guidelines for the measurement and analysis of FA, applied to broilers. A wide range of morphological traits were measured by 7 different techniques (ranging from measurements on living broilers or intact carcasses to X-rays, bones, and digital images) and evaluated for their applicability to estimate FA. Following 4 selection criteria (significant FA, absence of directional asymmetry or antisymmetry, absence of between-trait correlation in signed FA values, and high signal-to-noise ratio), from 3 to 14 measurements per method were found suitable for estimating the degree of FA. The accuracy of FA estimates was positively related to the complexity and time investment of the measuring method. In addition, our study clearly shows the importance of securing adequate statistical power when designing FA studies. Repeatability analyses of FA estimates indicated the need for larger sample sizes, more repeated measurements, or both, than are commonly used in FA studies.

  5. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    PubMed

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were equivalent.

  6. Using complexity metrics with R-R intervals and BPM heart rate measures.

    PubMed

    Wallot, Sebastian; Fusaroli, Riccardo; Tylén, Kristian; Jegindø, Else-Marie

    2013-01-01

    Lately, growing attention in the health sciences has been paid to the dynamics of heart rate as an indicator of impending failures and for prognoses. Likewise, in the social and cognitive sciences, heart rate is increasingly employed as a measure of arousal and emotional engagement, and as a marker of interpersonal coordination. However, there is no consensus about which measurements and analytical tools are most appropriate in mapping the temporal dynamics of heart rate, and quite different metrics are reported in the literature. As complexity metrics of heart rate variability depend critically on the variability of the data, different choices regarding the kind of measures can have a substantial impact on the results. In this article we compare linear and non-linear statistics on two prominent types of heart beat data: beat-to-beat intervals (R-R intervals) and beats-per-minute (BPM). As a proof of concept, we employ a simple rest-exercise-rest task and show that non-linear statistics, namely detrended fluctuation analysis (DFA) and recurrence quantification analysis (RQA), reveal information about heart beat activity above and beyond the simple level of heart rate. Non-linear statistics unveil sustained post-exercise effects on heart rate dynamics, but their power to do so critically depends on the type of data that is employed: while R-R intervals are very amenable to non-linear analyses, the success of non-linear methods for BPM data critically depends on their construction. Generally, "oversampled" BPM time-series can be recommended as they retain most of the information about non-linear aspects of heart beat dynamics.
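
    The 'oversampled' BPM construction recommended above can be sketched directly: instantaneous heart rate is computed per beat and then interpolated onto an evenly spaced grid. A minimal sketch with a hypothetical R-R series:

    ```python
    import numpy as np

    def oversampled_bpm(rr_ms, fs=4.0):
        """Convert R-R intervals (ms) to an evenly resampled BPM series.

        Beat times are the cumulative sum of the intervals; the per-beat
        instantaneous rate (60000 / RR) is linearly interpolated onto a
        fixed-rate grid (fs samples per second).
        """
        rr_ms = np.asarray(rr_ms, dtype=float)
        beat_times = np.cumsum(rr_ms) / 1000.0  # seconds
        inst_bpm = 60000.0 / rr_ms              # one value per beat
        grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
        return grid, np.interp(grid, beat_times, inst_bpm)

    t, bpm = oversampled_bpm([800, 790, 810, 805, 795, 820, 815])
    print(bpm.round(1))
    ```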

  7. The effect of a senior jazz dance class on static balance in healthy women over 50 years of age: a pilot study.

    PubMed

    Wallmann, Harvey W; Gillis, Carrie B; Alpert, Patricia T; Miller, Sally K

    2009-01-01

    The purpose of this pilot study was to assess the impact of a senior jazz dance class on static balance for healthy women over 50 years of age using the NeuroCom Smart Balance Master System (Balance Master). A total of 12 healthy women aged 54-88 years completed a 15-week jazz dance class, which they attended once per week for 90 min per class. Balance data were collected using the Sensory Organization Test (SOT) at baseline (pre), at 7 weeks (mid), and after 15 weeks (post). An equilibrium score measuring postural sway was calculated for each of six different conditions. The composite equilibrium score (all six conditions integrated into one score) was used as an overall measure of balance. Repeated measures analyses of variance (ANOVAs) were used to compare the means of each participant's SOT composite equilibrium score, in addition to the equilibrium score for each individual condition (1-6), across the 3 time points (pre, mid, post). There was a statistically significant difference among the means, p < .0005. Pairwise (Bonferroni) post hoc analyses revealed the following statistically significant findings for SOT composite equilibrium scores for the pre (67.33 ± 10.43), mid (75.25 ± 6.97), and post (79.00 ± 4.97) measurements: pre-mid (p = .008); pre-post (p < .0005); mid-post (p = .033). In addition, correlational statistics were used to determine any relationship between SOT scores and age. Results indicated that administration of a 15-week jazz dance class once per week was beneficial in improving static balance as measured by the Balance Master SOT.

  8. Longitudinal measurements of oxygen consumption in growing infants during the first weeks after birth: old data revisited.

    PubMed

    Sinclair, J C; Thorlund, K; Walter, S D

    2013-01-01

    In a study conducted in 1966-1969, longitudinal measurements were made of the metabolic rate in growing infants. Statistical methods for analyzing longitudinal data were not readily accessible at that time. Our aims were to measure minimal rates of oxygen consumption (V·O2, ml/min) in growing infants during the first postnatal weeks and to determine the relationships between postnatal increases in V·O2, body size and postnatal age. We studied 61 infants of any birth weight or gestational age, including 19 of very low birth weight. The infants, nursed in incubators, were clinically well and without need of oxygen supplementation or respiratory assistance. Serial measures of V·O2 using a closed-circuit method were obtained at approximately weekly intervals. V·O2 was measured under thermoneutral conditions with the infant asleep or resting quietly. Data were analyzed using mixed-effects models. During early postnatal growth, V·O2 rises as surface area (m^2)^1.94 (standard error, SE 0.054) or body weight (kg)^1.24 (SE 0.033). Multivariate analyses show statistically significant effects of both size and age. Reference intervals (RIs) for V·O2 for fixed values of body weight and postnatal age are presented. As V·O2 rises with increasing size and age, there is an increase in the skin-operative environmental temperature gradient (T skin-op) required for heat loss. The required T skin-op can be predicted from surface area and heat loss (heat production minus heat storage). Generation of RIs for minimal rates of V·O2 in growing infants from the 1960s was enabled by the application of mixed-effects statistical models for the analysis of longitudinal data. Results apply to the pre-caffeine era of neonatal care. Copyright © 2013 S. Karger AG, Basel.
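
    The allometric relationship reported above is linear on the log scale, so a mixed-effects fit of log V·O2 on log weight, with a random intercept per infant to handle the repeated measurements, recovers the scaling exponent as a slope. A toy sketch in statsmodels (not the study's code; all values are hypothetical):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical weekly measurements: three infants, three visits each
    df = pd.DataFrame({
        "infant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "week":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
        "weight": [1.2, 1.4, 1.6, 2.8, 3.1, 3.4, 3.3, 3.6, 4.0],
        "vo2":    [8.0, 9.6, 11.1, 22.5, 25.8, 28.9, 27.0, 30.2, 34.5],
    })
    df["log_vo2"] = np.log(df["vo2"])
    df["log_weight"] = np.log(df["weight"])

    # VO2 = a * weight^b is linear in logs; the random intercept per infant
    # accounts for the longitudinal (repeated) structure of the data.
    fit = smf.mixedlm("log_vo2 ~ log_weight + week", df, groups=df["infant"]).fit()
    print("scaling exponent b:", round(fit.params["log_weight"], 2))
    ```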

  9. A randomized, placebo-controlled trial of patient education for acute low back pain (PREVENT Trial): statistical analysis plan.

    PubMed

    Traeger, Adrian C; Skinner, Ian W; Hübscher, Markus; Lee, Hopin; Moseley, G Lorimer; Nicholas, Michael K; Henschke, Nicholas; Refshauge, Kathryn M; Blyth, Fiona M; Main, Chris J; Hush, Julia M; Pearce, Garry; Lo, Serigne; McAuley, James H

    Statistical analysis plans increase the transparency of decisions made in the analysis of clinical trial results. The purpose of this paper is to detail the planned analyses for the PREVENT trial, a randomized, placebo-controlled trial of patient education for acute low back pain. We report the pre-specified principles, methods, and procedures to be adhered to in the main analysis of the PREVENT trial data. The primary outcome analysis will be based on Mixed Models for Repeated Measures (MMRM), which can test treatment effects at specific time points, and the assumptions of this analysis are outlined. We also outline the treatment of secondary outcomes and planned sensitivity analyses. We provide decisions regarding the treatment of missing data, handling of descriptive and process measure data, and blinded review procedures. Making public the pre-specified statistical analysis plan for the PREVENT trial minimizes the potential for bias in the analysis of trial data, and in the interpretation and reporting of trial results. ACTRN12612001180808 (https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12612001180808). Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Published by Elsevier Editora Ltda. All rights reserved.

  10. Applying a Mixed Methods Framework to Differential Item Function Analyses

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Johanson, George A.

    2015-01-01

    Understanding the reason(s) for Differential Item Functioning (DIF) in the context of measurement is difficult. Although identifying potential DIF items is typically a statistical endeavor, understanding the reasons for DIF (and item repair or replacement) might require investigations that can be informed by qualitative work. Such work is…

  11. Some Psychometric and Design Implications of Game-Based Learning Analytics

    ERIC Educational Resources Information Center

    Gibson, David; Clarke-Midura, Jody

    2013-01-01

    The rise of digital game and simulation-based learning applications has led to new approaches in educational measurement that take account of patterns in time, high resolution paths of action, and clusters of virtual performance artifacts. The new approaches, which depart from traditional statistical analyses, include data mining, machine…

  12. Independent review : statistical analyses of relationship between vehicle curb weight, track width, wheelbase and fatality rates.

    DOT National Transportation Integrated Search

    2011-03-01

    "NHTSA selected the vehicle footprint (the measure of a vehicles wheelbase multiplied by its average track width) as the attribute upon which to base the CAFE standards for model year 2012-2016 passenger cars and light trucks. These standards are ...

  13. Statistical Analysis of PDF's for Na Released by Photons from Solid Surfaces

    NASA Astrophysics Data System (ADS)

    Gamborino, D.; Wurz, P.

    2018-05-01

    We analyse the adequacy of three model speed PDF's previously used to describe the desorption of Na from a solid surface either by ESD or PSD. We found that the Maxwell PDF is too wide compared to measurements and non-thermal PDF's are better suited.

  14. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small values cannot be precisely measured. These values are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD,...

  15. School Libraries and Science Achievement: A View from Michigan's Middle Schools

    ERIC Educational Resources Information Center

    Mardis, Marcia

    2007-01-01

    If strong school library media centers (SLMCs) positively impact middle school student reading achievement, as measured on standardized tests, are they also beneficial for middle school science achievement? To answer this question, the researcher built upon the statistical analyses used in previous school library impact studies with qualitative…

  16. Ethnic Identity and Career Development among First-Year College Students

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Klingaman, Elizabeth A.

    2009-01-01

    The current study explored the relation of ethnic identity achievement and career development progress among a sample of 2,432 first-year college students who completed the Career Decision Profile and Phinney's Multigroup Ethnic Identity Measure. Among students of color, correlational analyses revealed a series of statistically significant, but…

  17. Leadership and Culture-Building in Schools: Quantitative and Qualitative Understandings.

    ERIC Educational Resources Information Center

    Sashkin, Marshall; Sashkin, Molly G.

    Understanding effective school leadership as a function of culture building through quantitative and qualitative analyses is the purpose of this paper. The two-part quantitative phase of the research focused on statistical measures of culture and leadership behavior directed toward culture building in the school. The first quantitative part…

  18. Measurement of Low-Energy Nuclear-Recoil Quenching Factors in CsI[Na] and Statistical Analysis of the First Observation of Coherent, Elastic Neutrino-Nucleus Scattering

    NASA Astrophysics Data System (ADS)

    Rich, Grayson Currie

    The COHERENT Collaboration has produced the first-ever observation, with a significance of 6.7σ, of a process consistent with coherent, elastic neutrino-nucleus scattering (CEνNS) as first predicted and described by D.Z. Freedman in 1974. The physics of the CEνNS process is presented along with its relationship to future measurements in the arenas of nuclear physics, fundamental particle physics, and astroparticle physics, where the newly observed interaction presents a viable tool for investigations into numerous outstanding questions about the nature of the universe. To enable the CEνNS observation with a 14.6-kg CsI[Na] detector, new measurements of the response of CsI[Na] to low-energy nuclear recoils, which are the only mechanism by which CEνNS is detectable, were carried out at Triangle Universities Nuclear Laboratory; these measurements are detailed, and an effective nuclear-recoil quenching factor of 8.78 ± 1.66% is established for CsI[Na] in the recoil-energy range of 5-30 keV, based on new and literature data. Following separate analyses of the CEνNS-search data by groups at the University of Chicago and the Moscow Engineering and Physics Institute, information from simulations, calculations, and ancillary measurements was used to inform statistical analyses of the collected data. Based on input from the Chicago analysis, the number of CEνNS events expected from the Standard Model is 173 ± 48; interpretation as a simple counting experiment finds 136 ± 31 CEνNS counts in the data, while a two-dimensional profile likelihood fit yields 134 ± 22 CEνNS counts. Details of the simulations, calculations, and supporting measurements are discussed, in addition to the statistical procedures. Finally, potential improvements to the CsI[Na]-based CEνNS measurement are presented along with future possibilities for the COHERENT Collaboration, including new CEνNS detectors and measurement of the neutrino-induced neutron spallation process.

  19. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    PubMed

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristic (ROC) curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, and straightforward, and it overcomes a number of difficulties encountered in the Crespo method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. © 2006 International Society for Analytical Cytology.
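
    The underlying comparison, testing whether the B-cell intensity distribution differs from the T-cell reference within the same sample, can be sketched with the two-sample KS test in scipy (synthetic intensities stand in for flow cytometry data):

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    # Hypothetical per-cell ZAP-70 fluorescence intensities from one sample
    t_cells = rng.lognormal(mean=2.0, sigma=0.4, size=500)  # reference population
    b_cells = rng.lognormal(mean=1.6, sigma=0.5, size=500)  # CLL B cells

    # The KS statistic D measures the maximum distance between the two
    # empirical cumulative distributions of intensity.
    res = ks_2samp(t_cells, b_cells)
    print("D =", round(res.statistic, 3), "p =", res.pvalue)
    ```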

  20. Visual field progression with frequency-doubling matrix perimetry and standard automated perimetry in patients with glaucoma and in healthy controls.

    PubMed

    Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C

    2013-12-01

    A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. The aim of this study was to test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression, which is individualized to each participant, in contrast to current analyses in which statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
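
    The sketch below illustrates the flavour of this analysis under simplifying assumptions: instead of combining pointwise significance values as the published method does, it sums per-location OLS slopes and permutes the visit order to obtain one field-wide P value. Data are simulated.

```python
# Simplified permutation analysis: per-location OLS slopes are summed into one
# statistic, and visit order is permuted to obtain a single field-wide P value.
# (The published method combines pointwise significance values instead.)
import numpy as np

def combined_slope_stat(series, times):
    t = times - times.mean()
    slopes = (t @ (series - series.mean(axis=0))) / (t @ t)  # OLS slope per location
    return slopes.sum()

def field_p_value(series, times, n_perm=2000, seed=1):
    rng = np.random.default_rng(seed)
    observed = combined_slope_stat(series, times)
    null = [combined_slope_stat(series[rng.permutation(len(times))], times)
            for _ in range(n_perm)]
    return np.mean([s <= observed for s in null])  # one-sided: deterioration

rng = np.random.default_rng(0)
times = np.arange(10) * 0.5                        # ten visits over 4.5 years
series = rng.normal(28, 1.5, (10, 52)) - 0.2 * times[:, None]  # dB, slow decline
print(field_p_value(series, times))
```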

  1. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N greater than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
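
    A hedged sketch of the algebraic sample-size adjustment idea: because a chi-square fit statistic grows roughly linearly with N for a fixed degree of misfit, it can be rescaled to a nominal sample size before computing the P value. The exact adjustment implemented in RUMM may differ; `n_nominal = 500` is an illustrative choice.

```python
# Illustrative algebraic sample-size adjustment for a chi-square fit statistic;
# the statistic is rescaled to a nominal N before the P value is computed.
# The exact adjustment used by RUMM may differ.
from scipy.stats import chi2

def adjusted_chi2_p(chi2_obs: float, df: int, n: int, n_nominal: int = 500) -> float:
    chi2_adj = chi2_obs * n_nominal / n   # downward adjustment when n > n_nominal
    return chi2.sf(chi2_adj, df)

print(adjusted_chi2_p(chi2_obs=38.0, df=8, n=2500))  # large-N inflation tamed
```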

  2. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
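
    A generic sketch of the kind of regression used for ungaged sites: a flow statistic is regressed on basin characteristics on log scales at gaged stations and then predicted elsewhere. Variable names and data are simulated stand-ins, not the report's equations.

```python
# Log-log regression of a low-flow statistic on basin characteristics at gaged
# stations, then prediction at an ungaged site. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
area = rng.uniform(5, 200, size=17)       # drainage area (simulated)
precip = rng.uniform(40, 55, size=17)     # mean annual precipitation (simulated)
q7_10 = 0.05 * area ** 1.1 * 10 ** rng.normal(0, 0.05, size=17)  # synthetic statistic

X = np.column_stack([np.ones(17), np.log10(area), np.log10(precip)])
fit = sm.OLS(np.log10(q7_10), X).fit()
print(fit.params.round(3), round(fit.rsquared, 3))

x_new = np.array([[1.0, np.log10(25.0), np.log10(48.0)]])  # an ungaged site
print(10 ** fit.predict(x_new))           # predicted statistic in flow units
```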

  3. How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels

    NASA Astrophysics Data System (ADS)

    Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.

    2016-12-01

    Knowledge of extreme coastal water levels is useful for coastal flooding studies and the design of coastal defences. When deriving such extremes with standard analyses of tide gauge measurements, one often must deal with a limited effective duration of observation, which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information on past events reported in archives can reduce statistical uncertainties and put such outlying observations in perspective. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier, and its annual exceedance probability could reasonably have been predicted beforehand (the predictive probability for 2010, based on data until the end of 2009, is of the same order of magnitude as the standard estimate using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method for integrating heterogeneous data in such analyses.
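
    Reduced to its simplest form, the use of historical information amounts to a censored-data likelihood: systematic annual maxima enter through the GEV density, while the historical period contributes exceedance/non-exceedance terms for a perception threshold. The sketch below is a maximum-likelihood simplification of the cited Bayesian approach, with simulated data and illustrative values for `u`, `h_years`, and `k`.

```python
# Censored-data ML fit of a GEV: systematic annual maxima enter via the density;
# the historical window contributes k exceedances of threshold u and
# (h_years - k) non-exceedances. Data and historical values are illustrative.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

ams = genextreme.rvs(0.1, loc=3.8, scale=0.25, size=30, random_state=0)  # stand-in
u, h_years, k = 4.6, 120, 8   # perception threshold (m), window length, exceedances

def neg_log_lik(theta):
    c, loc, scale = theta
    if scale <= 0:
        return np.inf
    ll = genextreme.logpdf(ams, c, loc=loc, scale=scale).sum()
    F_u = genextreme.cdf(u, c, loc=loc, scale=scale)
    ll += k * np.log(max(1.0 - F_u, 1e-300)) + (h_years - k) * np.log(max(F_u, 1e-300))
    return -ll

res = minimize(neg_log_lik, x0=[0.0, float(np.median(ams)), float(ams.std())],
               method="Nelder-Mead")
print(res.x)  # shape, location, scale informed by both data sources
```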

  4. Statistical analysis plan for the Laser-1st versus Drops-1st for Glaucoma and Ocular Hypertension Trial (LiGHT): a multi-centre randomised controlled trial.

    PubMed

    Vickerstaff, Victoria; Ambler, Gareth; Bunce, Catey; Xing, Wen; Gazzard, Gus

    2015-11-11

    The LiGHT trial (Laser-1st versus Drops-1st for Glaucoma and Ocular Hypertension Trial) is a multicentre randomised controlled trial of two treatment pathways for patients who are newly diagnosed with open-angle glaucoma (OAG) and ocular hypertension (OHT). The main hypothesis for the trial is that lowering intraocular pressure (IOP) with selective laser trabeculoplasty (SLT) as the primary treatment ('Laser-1st') leads to better health-related quality of life than starting with IOP-lowering drops as the primary treatment ('Medicine-1st'), and that this is associated with reduced costs and improved tolerability of treatment. This paper describes the statistical analysis plan for the study. The LiGHT trial is an unmasked, multi-centre randomised controlled trial. A total of 718 patients (359 per arm) are being randomised to two groups: medicine-first or laser-first treatment. Outcomes are recorded at baseline and at 6-month intervals up to 36 months. The primary outcome measure is health-related quality of life (HRQL) at 36 months measured using the EQ-5D-5L. The main secondary outcome is the Glaucoma Utility Index. We plan to analyse the patient outcome data according to the group to which the patient was originally assigned. Methods of statistical analysis are described, including the handling of missing data, the covariates used in the adjusted analyses and the planned sensitivity analyses. The trial was registered with the ISRCTN register on 23/07/2012, number ISRCTN32038223.
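
    For illustration only, an ANCOVA-style model of the kind typically specified in such plans: the 36-month EQ-5D-5L score modelled on randomised arm with adjustment for the baseline score. Variable names and data are hypothetical; the trial's published plan governs the real analysis.

```python
# Hypothetical ANCOVA of the primary outcome: EQ-5D-5L at 36 months on
# randomised arm, adjusted for the baseline score. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 718
df = pd.DataFrame({
    "arm": rng.choice(["laser_first", "medicine_first"], size=n),
    "eq5d_baseline": rng.normal(0.80, 0.10, size=n),
})
df["eq5d_36m"] = (0.30 + 0.6 * df["eq5d_baseline"]
                  + 0.02 * (df["arm"] == "laser_first") + rng.normal(0, 0.08, n))

fit = smf.ols("eq5d_36m ~ arm + eq5d_baseline", data=df).fit()
print(fit.summary().tables[1])  # adjusted treatment effect with 95% CI
```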

  5. Comparative statistical component analysis of transgenic, cyanophycin-producing potatoes in greenhouse and field trials.

    PubMed

    Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge

    2017-08-01

    Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.
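
    A small sketch of the standardized-effect-size step, assuming starch content is compared between transgenic and control plants with Cohen's d; the variable names and simulated values are placeholders.

```python
# Cohen's d for a key component (e.g., starch content) between transgenic and
# control plants; values are simulated placeholders.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(0)
starch_gm = rng.normal(14.8, 1.2, 40)    # % fresh weight, simulated
starch_ctrl = rng.normal(15.0, 1.2, 40)
print(f"d = {cohens_d(starch_gm, starch_ctrl):.2f}")  # small |d| suggests equivalence
```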

  6. Interim analyses in 2 x 2 crossover trials.

    PubMed

    Cook, R J

    1995-09-01

    A method is presented for performing interim analyses in long term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
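
    The variance-based weighting idea can be made concrete with inverse-variance pooling: combine the one-period and two-period estimates of the direct treatment effect with weights proportional to 1/variance, which maximises the precision of the pooled linear statistic. Inputs below are illustrative.

```python
# Inverse-variance pooling of a one-period and a two-period estimate of the
# direct treatment effect; inputs are illustrative.
def pooled_estimate(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    return est, 1.0 / total            # pooled estimate and its variance

est, var = pooled_estimate([1.8, 2.3], [0.40, 0.15])  # one-period, two-period
print(f"pooled effect = {est:.2f}, SE = {var ** 0.5:.2f}")
```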

  7. Full in-vitro analyses of new-generation bulk fill dental composites cured by halogen light.

    PubMed

    Tekin, Tuçe Hazal; Kantürk Figen, Aysel; Yılmaz Atalı, Pınar; Coşkuner Filiz, Bilge; Pişkin, Mehmet Burçin

    2017-08-01

    The objective of this study was to perform full in-vitro analyses of new-generation bulk-fill dental composites cured by halogen light (HLG). Four composites of two types were studied: Surefill SDR (SDR) and Xtra Base (XB) as bulk-fill flowable materials; QuixFill (QF) and XtraFill (XF) as packable bulk-fill materials. Samples were prepared for each analysis and test by applying the same procedure, but with different diameters and thicknesses appropriate to the analysis and test requirements. Thermal properties were determined by thermogravimetric analysis (TG/DTG) and differential scanning calorimetry (DSC); the Vickers microhardness (VHN) was measured after 1, 7, 15 and 30 days of storage in water. The degree of conversion (DC, %) values for the materials were measured immediately using near-infrared spectroscopy (FT-IR). The surface morphology of the composites was investigated by scanning electron microscopy (SEM) and atomic-force microscopy (AFM) analyses. The sorption and solubility measurements were also performed after 1, 7, 15 and 30 days of storage in water. In addition, the data were statistically analyzed using one-way analysis of variance and both the Newman-Keuls and Tukey multiple comparison tests. The statistical significance level was established at p<0.05. According to the ISO 4049 standards, all the tested materials showed acceptable water sorption and solubility, and a halogen light source was an option to polymerize bulk-fill, resin-based dental composites. Copyright © 2017 Elsevier B.V. All rights reserved.
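
    A sketch of the statistical comparison described, assuming simulated VHN values: one-way ANOVA across the four composites followed by Tukey's test (the Newman-Keuls procedure is not available in statsmodels, so only Tukey is shown).

```python
# One-way ANOVA across the four composites followed by Tukey's HSD test,
# with simulated Vickers microhardness (VHN) values.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {"SDR": 35, "XB": 40, "QF": 55, "XF": 60}   # nominal mean VHN (made up)
data = {g: rng.normal(m, 4, size=10) for g, m in groups.items()}

F, p = f_oneway(*data.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```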

  8. ParallABEL: an R library for generalized parallelization of genome-wide association studies

    PubMed Central

    2010-01-01

    Background Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous to acquire the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Results Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation. The input data of this group includes pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), comprising 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Conclusions Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance, and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL. PMID:20429914
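
    ParallABEL itself is an R library built on Rmpi; purely as an analogy for the first group of computations (one statistic per SNP), here is a Python multiprocessing sketch that maps a simple association test over simulated SNP columns.

```python
# Python multiprocessing analogy for SNP-wise (group one) computations:
# map an association test over SNP columns. Simulated data; ParallABEL's
# actual implementation is R/Rmpi.
import numpy as np
from multiprocessing import Pool
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(2062, 1000))  # individuals x SNPs
phenotype = rng.normal(size=2062)

def snp_test(j):
    carriers = genotypes[:, j] > 0                 # crude dominant coding
    return ttest_ind(phenotype[carriers], phenotype[~carriers]).pvalue

if __name__ == "__main__":
    with Pool(8) as pool:                          # eight worker processes
        pvals = pool.map(snp_test, range(genotypes.shape[1]))
    print(min(pvals))
```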

  9. The validation and application of the Chinese version of perceived nursing work environment scale.

    PubMed

    Zhao, Peng; Chen, Fen Ju; Jia, Xiao Hui; Lv, Hui; Cheng, Piao Piao; Zhang, Li Ping

    2013-07-01

    To further develop the Chinese version of the Perceived Nursing Work Environment (C-PNWE) scale through examination and application, and to explore nurses' perceptions of their working environment in a hospital. The C-PNWE scale was translated and revised from the PNWE scale. A limitation is that the development of the C-PNWE overlooked that the psychometric properties of the PNWE instrument were established with critical care nurses, and that further application and testing of the PNWE in various patient care settings had been recommended. This is a cross-sectional design. Nurses from different departments of a hospital were sampled by convenience sampling and investigated by self-administered questionnaire. Data obtained through the questionnaires were analysed by descriptive statistical analyses and profile analyses using the Statistical Package for the Social Sciences (SPSS) Chinese version 17.0 software. The coincident and level profile analyses indicated that the groups could be merged into one, and that the measurement profile of the merged group was not flat. Among the six dimensions of the C-PNWE scale, Staffing and Resource Adequacy received the lowest average score. Among the 41 items, 'Opportunity for staff nurse to participate in policy decisions' received the lowest mean. The C-PNWE scale shows good psychometric properties and can be used to explore nurses' perspectives of the nursing practice environment in China. Nurses' perceived work environment in China needs further study. Shaping nursing practice environments to promote desired outcomes requires valid and reliable measures to assess practice environments prior to, during and following efforts to implement change. The C-PNWE scale can be a useful measurement tool for administrators seeking to improve the nursing work environment in China. © 2013 John Wiley & Sons Ltd.

  10. Four modes of optical parametric operation for squeezed state generation

    NASA Astrophysics Data System (ADS)

    Andersen, U. L.; Buchler, B. C.; Lam, P. K.; Wu, J. W.; Gao, J. R.; Bachor, H.-A.

    2003-11-01

    We report a versatile instrument, based on a monolithic optical parametric amplifier, which reliably generates four different types of squeezed light. We obtained vacuum squeezing, low power amplitude squeezing, phase squeezing and bright amplitude squeezing. We show a complete analysis of this light, including a full quantum state tomography. In addition we demonstrate the direct detection of the squeezed state statistics without the aid of a spectrum analyser. This technique makes the nonclassical properties directly visible and allows complete measurement of the statistical moments of the squeezed quadrature.

  11. The effect of noise-induced variance on parameter recovery from reaction times.

    PubMed

    Vadillo, Miguel A; Garaizar, Pablo

    2016-03-31

    Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fits to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
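
    A sketch of the ex-Gaussian fitting step using SciPy's exponnorm, which parameterises the distribution as K = tau/sigma, loc = mu, scale = sigma. Simulated reaction times stand in for real data, with added uniform noise mimicking technical error.

```python
# Fit an ex-Gaussian to simulated reaction times with SciPy's exponnorm
# (K = tau/sigma, loc = mu, scale = sigma); uniform noise mimics technical lag.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)
rts = exponnorm.rvs(2.0, loc=400, scale=50, size=2000, random_state=rng)
rts_noisy = rts + rng.uniform(0, 30, size=rts.size)   # e.g. display/polling lag

for label, x in [("clean", rts), ("noisy", rts_noisy)]:
    K, mu, sigma = exponnorm.fit(x)
    print(f"{label}: mu = {mu:.0f}, sigma = {sigma:.0f}, tau = {K * sigma:.0f}")
```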

  12. Statistical Approaches Used to Assess the Equity of Access to Food Outlets: A Systematic Review

    PubMed Central

    Lamb, Karen E.; Thornton, Lukar E.; Cerin, Ester; Ball, Kylie

    2015-01-01

    Background Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify if access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on neighbourhood and food outlet access measure definitions. However, no review has informatively discussed the suitability of the statistical methodologies employed; a key issue determining the validity of study findings. Our aim was to examine the suitability of statistical approaches adopted in these analyses. Methods Searches were conducted for articles published from 2000–2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status. Results Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess if these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation. Conclusions With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results. PMID:29546115
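
    One of the common models identified by the review, sketched with simulated data: Poisson regression of outlet counts per neighbourhood on an SES index, with population as an exposure offset. Checking residuals for spatial autocorrelation (e.g., with Moran's I) would still be needed, as the review cautions.

```python
# Poisson regression of neighbourhood outlet counts on an SES index with
# population as exposure; all data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
ses = rng.normal(0, 1, n)                   # neighbourhood SES index
pop = rng.uniform(2000, 12000, n)           # neighbourhood population
counts = rng.poisson(np.exp(-6.0 + 0.25 * ses) * pop)  # simulated association

fit = sm.GLM(counts, sm.add_constant(ses), family=sm.families.Poisson(),
             offset=np.log(pop)).fit()
print(fit.summary().tables[1])              # exp(SES coefficient) = rate ratio
```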

  13. A Comparison of Readability in Science-Based Texts: Implications for Elementary Teachers

    ERIC Educational Resources Information Center

    Gallagher, Tiffany; Fazio, Xavier; Ciampa, Katia

    2017-01-01

    Science curriculum standards were mapped onto various texts (literacy readers, trade books, online articles). Statistical analyses highlighted the inconsistencies among readability formulae for Grades 2-6 levels of the standards. There was a lack of correlation among the readability measures, and also when comparing different text sources. Online…

  14. Tests of Alignment among Assessment, Standards, and Instruction Using Generalized Linear Model Regression

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.; Polikoff, Morgan S.

    2014-01-01

    An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…

  15. Children's Health, Access to Services and Quality of Care. Revised Executive Summary.

    ERIC Educational Resources Information Center

    Dutton, Diana B.

    This research investigated factors affecting children's health, based on empirical analyses of data from Washington, D.C. and national data. By most measures, poor children experience disproportionate morbidity and mortality. Yet certain ear and vision problems exhibit a U-shaped relation to family income in both national statistics and the…

  16. Network Analysis with the Enron Email Corpus

    ERIC Educational Resources Information Center

    Hardin, J. S.; Sarkis, G.; URC, P.

    2015-01-01

    We use the Enron email corpus to study relationships in a network by applying six different measures of centrality. Our results came out of an in-semester undergraduate research seminar. The Enron corpus is well suited to statistical analyses at all levels of undergraduate education. Through this article's focus on centrality, students can explore…

  17. Getting the Measure of the VET Professional

    ERIC Educational Resources Information Center

    Mlotkowski, Peter; Guthrie, Hugh

    2010-01-01

    This report draws on analyses of Australian Bureau of Statistics (ABS) data from the Survey of Education and Training (SET) and the Census of Population and Housing to provide an updated demographic profile of vocational education and training (VET) professionals and VET practitioners. A number of caveats are attached to this analysis, all…

  18. Ontario Universities Statistical Compendium, 1970-90. Part A: Micro-Indicators.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto. Research Div.

    This publication provides macro-indicators and complementary analyses and supporting data for use by policy and decision makers concerned with Ontario universities. These analytical tools are meant to unambiguously measure what is taking place in Canadian postsecondary education, and therefore, assist in focusing on what decisions need to be made.…

  19. Are Public School Teacher Salaries Paid Compensating Wage Differentials for Student Racial and Ethnic Characteristics?

    ERIC Educational Resources Information Center

    Martin, Stephanie M.

    2010-01-01

    The present paper examines the relationship between public school teacher salaries and the racial concentration and segregation of students in the district. A particularly rich set of control variables is included to better measure the effect of racial characteristics. Additional analyses included Metropolitan Statistical Area fixed effects and…

  20. Psycho-Motor Needs Assessment of Virginia School Children.

    ERIC Educational Resources Information Center

    Glen Haven Achievement Center, Fort Collins, CO.

    An effort to assess psycho-motor (P-M) needs among Virginia children in K-4 and in special primary classes for the educable mentally retarded is presented. Included are methods for selecting, combining, and developing evaluation measures, which are verified statistically by analyses of data collected from a stratified sample of approximately 4,500…

  1. [Sonographic ovarian vascularization and volume in women with polycystic ovary syndrome treated with clomiphene citrate and metformin].

    PubMed

    de la Fuente-Valero, Jesús; Zapardiel-Gutiérrez, Ignacio; Orensanz-Fernández, Inmaculada; Alvarez-Alvarez, Pilar; Engels-Calvo, Virginia; Bajo-Arenas, José Manuel

    2010-01-01

    To measure ovarian vascularization and volume with three-dimensional sonography in patients diagnosed with polycystic ovary syndrome undergoing stimulated-ovulation treatment, and to analyse the differences between patients treated with clomiphene citrate alone versus clomiphene citrate plus metformin. Thirty patients were studied. Twenty ovulation cycles were obtained with clomiphene citrate and 17 with clomiphene citrate plus metformin (added in cases of obesity or hyperglycemia/hyperinsulinemia). Ovarian volumes and vascular indexes were studied with 3D sonography and results were analysed by treatment. There were no statistical differences in ovarian volume between treatments along the cycles, although larger volumes were found in ovulatory cycles compared to non-ovulatory ones (20.36 versus 13.89 ml, p = 0.026). Nor were statistical differences found concerning vascular indexes, either by treatment or by whether ovulation was obtained in the cycle. Ovarian volume and vascular indexes measured with three-dimensional sonography in patients diagnosed with polycystic ovary syndrome do not differ between patients treated with clomiphene citrate alone and those treated with clomiphene citrate plus metformin.

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
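
    A minimal "what if" sketch in the spirit described: hold the observed group means and SDs fixed and recompute the t-test P value for hypothetical sample sizes, making the dependence of significance on N visible. Summary statistics are illustrative.

```python
# Hold the observed effect fixed and recompute the t-test P value for
# hypothetical sample sizes; summary statistics are illustrative.
from scipy.stats import ttest_ind_from_stats

m1, s1, m2, s2 = 52.0, 10.0, 48.0, 10.0     # fixed group means and SDs
for n in (10, 25, 50, 100, 200):
    t, p = ttest_ind_from_stats(m1, s1, n, m2, s2, n)
    print(f"n per group = {n:>3}: p = {p:.4f}")
```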

  3. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
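
    The two statistics translate directly into code as defined above: SEM from the baseline SD and a test-retest reliability index (e.g., an ICC), and MDC at 95% confidence from SEM. Input values are illustrative.

```python
# SEM from the baseline SD and a reliability index; MDC at 95% confidence
# from SEM. Input values are illustrative.
import math

def sem(sd_baseline: float, reliability: float) -> float:
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc95(sd_baseline: float, reliability: float) -> float:
    # 1.96 for 95% confidence; sqrt(2) because a change involves two measurements
    return 1.96 * math.sqrt(2.0) * sem(sd_baseline, reliability)

print(sem(120.0, 0.90), mdc95(120.0, 0.90))  # e.g. movement times in ms
```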

  4. Therapeutic whole-body hypothermia reduces mortality in severe traumatic brain injury if the cooling index is sufficiently high: meta-analyses of the effect of single cooling parameters and their integrated measure.

    PubMed

    Olah, Emoke; Poto, Laszlo; Hegyi, Peter; Szabo, Imre; Hartmann, Petra; Solymar, Margit; Petervari, Erika; Balasko, Marta; Habon, Tamas; Rumbus, Zoltan; Tenk, Judit; Rostas, Ildiko; Weinberg, Jordan; Romanovsky, Andrej A; Garami, Andras

    2018-04-21

    Therapeutic hypothermia has been investigated repeatedly as a tool to improve the outcome of severe traumatic brain injury (TBI), but previous clinical trials and meta-analyses found contradictory results. We aimed to determine the effectiveness of therapeutic whole-body hypothermia on the mortality of adult patients with severe TBI by using a novel approach of meta-analysis. We searched the PubMed, EMBASE, and Cochrane Library databases from inception to February 2017. The identified human studies were evaluated regarding statistical, clinical, and methodological design to ensure inter-study homogeneity. We extracted data on TBI severity, body temperature, mortality, and cooling parameters; we then calculated the cooling index, an integrated measure of therapeutic hypothermia. A forest plot of all identified studies showed no difference in the outcome of TBI between cooled and not-cooled patients, but inter-study heterogeneity was high. By contrast, a meta-analysis of the RCTs that were homogeneous with regard to statistical and clinical design and that precisely reported the cooling protocol showed a decreased odds ratio for mortality with therapeutic hypothermia compared to no cooling. As independent factors, milder and longer cooling, and rewarming at < 0.25°C/h, were associated with better outcome. Therapeutic hypothermia was beneficial only if the cooling index (a measure of the combination of cooling parameters) was sufficiently high. We conclude that high methodological and statistical inter-study heterogeneity could underlie the contradictory results obtained in previous studies. By analyzing methodologically homogeneous studies, we show that cooling improves the outcome of severe TBI and that this beneficial effect depends on certain cooling parameters and on their integrated measure, the cooling index.
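
    The core pooling step can be sketched as inverse-variance meta-analysis of log odds ratios (fixed-effect shown for brevity; the paper's stratification by cooling parameters and the cooling index itself are not reproduced). Study values are illustrative.

```python
# Fixed-effect inverse-variance pooling of log odds ratios for mortality;
# study values are illustrative.
import numpy as np

log_or = np.log([0.62, 0.81, 0.55, 1.10])   # illustrative study ORs
var = np.array([0.05, 0.08, 0.12, 0.20])    # variances of the log ORs

w = 1.0 / var
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = {np.round(ci, 2)}")
```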

  5. Imaging of Al/Fe ratios in synthetic Al-goethite revealed by nanoscale secondary ion mass spectrometry.

    PubMed

    Pohl, Lydia; Kölbl, Angelika; Werner, Florian; Mueller, Carsten W; Höschen, Carmen; Häusler, Werner; Kögel-Knabner, Ingrid

    2018-04-30

    Aluminium (Al)-substituted goethite is ubiquitous in soils and sediments. The extent of Al-substitution affects the physicochemical properties of the mineral and influences its macroscale properties. Bulk analysis only provides total Al/Fe ratios without providing information with respect to the Al-substitution of single minerals. Here, we demonstrate that nanoscale secondary ion mass spectrometry (NanoSIMS) enables the precise determination of Al-content in single minerals, while simultaneously visualising the variation of the Al/Fe ratio. Al-substituted goethite samples were synthesized with increasing Al concentrations of 0.1, 3, and 7 % and analysed by NanoSIMS in combination with established bulk spectroscopic methods (XRD, FTIR, Mössbauer spectroscopy). The high spatial resolution (50-150 nm) of NanoSIMS is accompanied by a high number of single-point measurements. We statistically evaluated the Al/Fe ratios derived from NanoSIMS, while maintaining the spatial information and reassigning it to its original localization. XRD analyses confirmed increasing concentrations of incorporated Al within the goethite structure. Mössbauer spectroscopy revealed that 11 % of the goethite samples generated at high Al concentrations consisted of hematite. The NanoSIMS data show that the Al/Fe ratios are in agreement with bulk data derived from total digestion and demonstrated small spatial variability between single-point measurements. More advantageously, statistical analysis and reassignment of single-point measurements allowed us to identify distinct spots with significantly higher or lower Al/Fe ratios. NanoSIMS measurements confirmed the capacity to produce images, which indicated the uniform increase in Al-concentrations in goethite. Using a combination of statistical analysis with information from complementary spectroscopic techniques (XRD, FTIR and Mössbauer spectroscopy) we were further able to identify spots with lower Al/Fe ratios as hematite. Copyright © 2018 John Wiley & Sons, Ltd.

  6. The Dissociative Subtype of PTSD Scale (DSPS): Initial Evaluation in a National Sample of Trauma-Exposed Veterans

    PubMed Central

    Wolf, Erika J.; Mitchell, Karen S.; Sadeh, Naomi; Hein, Christina; Fuhrman, Isaac; Pietrzak, Robert H.; Miller, Mark W.

    2015-01-01

    The fifth edition of the Diagnostic and Statistical Manual (DSM-5) includes a dissociative subtype of posttraumatic stress disorder (PTSD), but no existing measures specifically assess it. This paper describes the initial evaluation of a 15-item self-report measure of the subtype called the Dissociative Subtype of PTSD Scale (DSPS) in an on-line survey of 697 trauma-exposed military veterans representative of the US veteran population. Exploratory factor analyses of the lifetime DSPS items supported the intended structure of the measure consisting of three factors reflecting derealization/depersonalization, loss of awareness, and psychogenic amnesia. Consistent with prior research, latent profile analyses assigned 8.3% of the sample to a highly dissociative class distinguished by pronounced symptoms of derealization and depersonalization. Overall, results provide initial psychometric support for the lifetime DSPS scales; additional research in clinical and community samples is needed to further validate the measure. PMID:26603115

  7. The Dissociative Subtype of PTSD Scale: Initial Evaluation in a National Sample of Trauma-Exposed Veterans.

    PubMed

    Wolf, Erika J; Mitchell, Karen S; Sadeh, Naomi; Hein, Christina; Fuhrman, Isaac; Pietrzak, Robert H; Miller, Mark W

    2017-06-01

    The fifth edition of the Diagnostic and Statistical Manual includes a dissociative subtype of posttraumatic stress disorder, but no existing measures specifically assess it. This article describes the initial evaluation of a 15-item self-report measure of the subtype called the Dissociative Subtype of Posttraumatic Stress Disorder Scale (DSPS) in an online survey of 697 trauma-exposed military veterans representative of the U.S. veteran population. Exploratory factor analyses of the lifetime DSPS items supported the intended structure of the measure consisting of three factors reflecting derealization/depersonalization, loss of awareness, and psychogenic amnesia. Consistent with prior research, latent profile analyses assigned 8.3% of the sample to a highly dissociative class distinguished by pronounced symptoms of derealization and depersonalization. Overall, results provide initial psychometric support for the lifetime DSPS scales; additional research in clinical and community samples is needed to further validate the measure.

  8. Progressive statistics for studies in sports medicine and exercise science.

    PubMed

    Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri

    2009-01-01

    Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.

  9. Applied immuno-epidemiological research: an approach for integrating existing knowledge into the statistical analysis of multiple immune markers.

    PubMed

    Genser, Bernd; Fischer, Joachim E; Figueiredo, Camila A; Alcântara-Neves, Neuza; Barreto, Mauricio L; Cooper, Philip J; Amorim, Leila D; Saemann, Marcus D; Weichhart, Thomas; Rodrigues, Laura C

    2016-05-20

    Immunologists often measure several correlated immunological markers, such as concentrations of different cytokines produced by different immune cells and/or measured under different conditions, to draw insights from complex immunological mechanisms. Although there have been recent methodological efforts to improve the statistical analysis of immunological data, a framework is still needed for the simultaneous analysis of multiple, often correlated, immune markers. This framework would allow the immunologists' hypotheses about the underlying biological mechanisms to be integrated. We present an analytical approach for statistical analysis of correlated immune markers, such as those commonly collected in modern immuno-epidemiological studies. We demonstrate i) how to deal with interdependencies among multiple measurements of the same immune marker, ii) how to analyse association patterns among different markers, iii) how to aggregate different measures and/or markers to immunological summary scores, iv) how to model the inter-relationships among these scores, and v) how to use these scores in epidemiological association analyses. We illustrate the application of our approach to multiple cytokine measurements from 818 children enrolled in a large immuno-epidemiological study (SCAALA Salvador), which aimed to quantify the major immunological mechanisms underlying atopic diseases or asthma. We demonstrate how to aggregate systematically the information captured in multiple cytokine measurements to immunological summary scores aimed at reflecting the presumed underlying immunological mechanisms (Th1/Th2 balance and immune regulatory network). We show how these aggregated immune scores can be used as predictors in regression models with outcomes of immunological studies (e.g. specific IgE) and compare the results to those obtained by a traditional multivariate regression approach. The proposed analytical approach may be especially useful to quantify complex immune responses in immuno-epidemiological studies, where investigators examine the relationship among epidemiological patterns, immune response, and disease outcomes.
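
    A minimal sketch of the aggregation step (iii) described above: log-transform and standardise cytokine concentrations, then average marker z-scores into hypothesis-driven summary scores such as a Th1 and a Th2 score. The marker groupings and data below are illustrative, not the study's.

```python
# Standardise cytokine concentrations on the log scale and average marker
# z-scores into summary scores; groupings and data are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.lognormal(size=(818, 4)),
                  columns=["ifng", "il12", "il4", "il13"])  # pg/ml, simulated

z = (np.log(df) - np.log(df).mean()) / np.log(df).std()     # log, then z-score
scores = pd.DataFrame({
    "th1_score": z[["ifng", "il12"]].mean(axis=1),
    "th2_score": z[["il4", "il13"]].mean(axis=1),
})
print(scores.describe().round(2))
```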

  10. Empirical evidence about inconsistency among studies in a pair‐wise meta‐analysis

    PubMed Central

    Turner, Rebecca M.; Higgins, Julian P. T.

    2015-01-01

    This paper investigates how inconsistency (as measured by the I2 statistic) among studies in a meta‐analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta‐analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta‐analyses were obtained, which can inform priors for between‐study variance. Inconsistency estimates were highest on average for binary outcome meta‐analyses of risk differences and continuous outcome meta‐analyses. For a planned binary outcome meta‐analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta‐analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta‐analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta‐analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26679486
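
    The I2 statistic referred to above can be computed from Cochran's Q, as in this short sketch with illustrative study effects and variances.

```python
# Cochran's Q from study effects and variances, then I2 = max(0, (Q - df) / Q).
import numpy as np

def i_squared(effects, variances):
    w = 1.0 / np.asarray(variances)
    y = np.asarray(effects)
    y_bar = np.sum(w * y) / np.sum(w)   # inverse-variance pooled effect
    Q = np.sum(w * (y - y_bar) ** 2)
    df = len(y) - 1
    return max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0

print(i_squared([0.10, 0.35, -0.05, 0.60], [0.02, 0.03, 0.02, 0.05]))  # percent
```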

  11. Evaluating a measure of social health derived from two mental health recovery measures: the California Quality of Life (CA-QOL) and Mental Health Statistics Improvement Program Consumer Survey (MHSIP).

    PubMed

    Carlson, Jordan A; Sarkin, Andrew J; Levack, Ashley E; Sklar, Marisa; Tally, Steven R; Gilmer, Todd P; Groessl, Erik J

    2011-08-01

    Social health is important to measure when assessing outcomes in community mental health. Our objective was to validate social health scales using items from two broader, commonly used measures that assess mental health outcomes. Participants were 609 adults receiving psychological treatment services. Items were identified from the California Quality of Life (CA-QOL) and Mental Health Statistics Improvement Program (MHSIP) outcome measures by their conceptual correspondence with social health and compared to the Social Functioning Questionnaire (SFQ) using correlational analyses. Pearson correlations of the identified CA-QOL and MHSIP items with the SFQ ranged from .42 to .62, and the identified scale scores produced Pearson correlation coefficients of .56, .70, and .70 with the SFQ. Concurrent validity with social health was supported for the identified scales. The inclusion of these assessment tools allows community mental health programs to include social health in their assessments.

  12. Hospital inpatient self-administration of medicine programmes: a critical literature review.

    PubMed

    Wright, Julia; Emerson, Angela; Stephens, Martin; Lennan, Elaine

    2006-06-01

    The Department of Health, pharmaceutical and nursing bodies have advocated the benefits of self-administration programmes (SAPs), but their implementation within UK hospitals has been limited. Perceived barriers are: anticipated increased workload, insufficient resources and patient safety concerns. This review aims to discover if the benefits of SAPs are supported in the literature in relation to risk and resource implications. Electronic databases were searched up to March 2004. Published English language articles that described and evaluated implementation of an SAP were included. Outcomes reported were: compliance measures, errors, knowledge, patient satisfaction, and nursing and pharmacy time. Most of the 51 papers reviewed had methodological flaws. SAPs varied widely in content and structure. Twelve studies (10 controlled) measured compliance by tablet counts. Of 7 studies subjected to statistical analysis, four demonstrated a significant difference in compliance between SAP and controls. Eight studies (5 controlled) measured errors as an outcome. Of the two evaluated statistically, only one demonstrated significantly fewer medication errors in the SAP group than in controls. Seventeen papers (11 controlled) studied the effect of SAPs on patients' medication knowledge. Ten of the 11 statistically analysed studies showed that SAP participants knew significantly more about some aspects of their medication than did controls. Seventeen studies (5 controlled) measured patient satisfaction. Two studies were statistically analysed and these suggested that patients were satisfied and preferred SAP. Seven papers studied pharmacy time, three studied nursing time, but results were not compared to controls. The paucity of well-designed studies, flawed methodology and inadequate reporting in many papers make conclusions hard to draw. Conclusive evidence that SAPs improve compliance was not provided. Although patients participating in SAPs make errors, small numbers of patients are often responsible for a large number of errors. Whilst most studies suggest that SAPs increase patients' knowledge in part, it is difficult to separate out the effect of the educational component of many SAPs. Most patients who participated in SAPs were satisfied with their care and many would choose to take part in an SAP in the future. No studies measured the total resource requirement of implementing and maintaining an SAP.

  13. The effect of a cryotherapy gel wrap on the microcirculation of skin affected by chronic venous disorders.

    PubMed

    Kelechi, Teresa J; Mueller, Martina; Zapka, Jane G; King, Dana E

    2011-11-01

    The aim of this randomized clinical trial was to investigate a cryotherapy (cooling) gel wrap applied to lower leg skin affected by chronic venous disorders to determine whether therapeutic cooling improves skin microcirculation. Chronic venous disorders are under-recognized vascular health problems that result in severe skin damage and ulcerations of the lower legs. Impaired skin microcirculation contributes to venous leg ulcer development, thus new prevention therapies should address the microcirculation to prevent venous leg ulcers. Sixty participants (n = 30 per group) were randomized to receive one of two daily 30-minute interventions for four weeks. The treatment group applied the cryotherapy gel wrap around the affected lower leg skin, or compression and elevated the legs on a special pillow each evening at bedtime. The standard care group wore compression and elevated the legs only. Laboratory pre- and post-measures included microcirculation measures of skin temperature with a thermistor, blood flow with a laser Doppler flowmeter, and venous refill time with a photoplethysmograph. Data were collected between 2008 and 2009 and analysed using descriptive statistics, paired t-tests or Wilcoxon signed ranks tests, logistic regression analyses, and mixed model analyses. Fifty-seven participants (treatment = 28; standard care = 29) completed the study. The mean age was 62 years, 70% female, 50% African American. In the final adjusted model, there was a statistically significant decrease in blood flow between the two groups (-6.2[-11.8; -0.6], P = 0.03). No statistically significant differences were noted in temperature or venous refill time. Study findings suggest that cryotherapy improves blood flow by slowing movement within the microcirculation and thus might potentially provide a therapeutic benefit to prevent leg ulcers. © 2011 Blackwell Publishing Ltd.

  14. The effect of a cryotherapy gel wrap on the microcirculation of skin affected by Chronic Venous Disorders

    PubMed Central

    Mueller, Martina; Zapka, Jane G.; King, Dana E.

    2011-01-01

    Aim This randomized clinical trial was conducted 2008 – 2009 to investigate a cryotherapy (cooling) gel wrap applied to lower leg skin affected by chronic venous disorders to determine whether therapeutic cooling improves skin microcirculation. Impaired skin microcirculation contributes to venous leg ulcer development, thus new prevention therapies should address the microcirculation to prevent venous leg ulcers. Data Sources Sixty participants (n = 30 per group) were randomized to receive one of two daily 30-minute interventions for four weeks. The treatment group applied the cryotherapy gel wrap around the affected lower leg skin, or compression and elevated the legs on a special pillow each evening at bedtime. The standard care group wore compression and elevated the legs only. Laboratory pre- and post-measures included microcirculation measures of skin temperature with a thermistor, blood flow with a laser Doppler flowmeter, and venous refill time with a photoplethysmograph. Review methods Data were analysed using descriptive statistics, paired t-tests or Wilcoxon signed ranks tests, logistic regression analyses, and mixed model analyses. Results Fifty-seven participants (treatment = 28; standard care = 29) completed the study. The mean age was 62 years, 70% female, 50% African American. In the final adjusted model, there was a statistically significant decrease in blood flow between the two groups (−6.2[−11.8; −0.6], P = 0.03). No statistically significant differences were noted in temperature or venous refill time. Conclusion Study findings suggest that cryotherapy improves blood flow by slowing movement within the microcirculation and thus might potentially provide a therapeutic benefit to prevent leg ulcers. PMID:21592186

  15. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  16. Visual classification of very fine-grained sediments: Evaluation through univariate and multivariate statistics

    USGS Publications Warehouse

    Hohn, M. Ed; Nuhfer, E.B.; Vinopal, R.J.; Klanderman, D.S.

    1980-01-01

    Classifying very fine-grained rocks through fabric elements provides information about depositional environments, but is subject to the biases of visual taxonomy. To evaluate the statistical significance of an empirical classification of very fine-grained rocks, samples from Devonian shales in four cored wells in West Virginia and Virginia were measured for 15 variables: quartz, illite, pyrite and expandable clays determined by X-ray diffraction; total sulfur, organic content, inorganic carbon, matrix density, bulk density, porosity, silt, as well as density, sonic travel time, resistivity, and γ-ray response measured from well logs. The four lithologic types comprised: (1) sharply banded shale, (2) thinly laminated shale, (3) lenticularly laminated shale, and (4) nonbanded shale. Univariate and multivariate analyses of variance showed that the lithologic classification reflects significant differences for the variables measured, differences that can be detected independently of stratigraphic effects. Little-known statistical methods found useful in this work included: the multivariate analysis of variance with more than one effect, simultaneous plotting of samples and variables on canonical variates, and the use of parametric ANOVA and MANOVA on ranked data. © 1980 Plenum Publishing Corporation.

  17. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    NASA Astrophysics Data System (ADS)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

    This paper shows a statistical analysis of experimental calibration data for flowmeters (i.e., electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of the calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at Settore Portate of Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately from the others, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, defined as the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of QM.
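
    As a minimal illustration of the quantity under analysis, the sketch below computes the correction C from a reference flow rate Q and a meter reading QM. The function name and the example values are ours, not the laboratory's.

    ```python
    def correction_percent(q_ref: float, q_meter: float) -> float:
        """Correction C = (Q - QM) / QM * 100, in percent of the meter reading QM."""
        return (q_ref - q_meter) / q_meter * 100.0

    # Hypothetical example: facility measures 50.00 l/s, meter under test reads 49.80 l/s.
    print(correction_percent(50.00, 49.80))  # ~ +0.40 %
    ```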

  18. Direct Measurements of the Convective Recycling of the Upper Troposphere

    NASA Technical Reports Server (NTRS)

    Bertram, Timothy H.; Perring, Anne E.; Wooldridge, Paul J.; Crounse, John D.; Kwan, Alan J.; Wennberg, Paul O.; Scheuer, Eric; Dibb, Jack; Avery, Melody; Sachse, Glen

    2007-01-01

    We present a statistical representation of the aggregate effects of deep convection on the chemistry and dynamics of the Upper Troposphere (UT) based on direct aircraft observations of the chemical composition of the UT over the Eastern United States and Canada during summer. These measurements provide new and unique observational constraints on the chemistry occurring downwind of convection and the rate at which air in the UT is recycled, previously only the province of model analyses. These results provide quantitative measures that can be used to evaluate global climate and chemistry models.

  19. Quantitative analysis of trace levels of surface contamination by X-ray photoelectron spectroscopy Part I: statistical uncertainty near the detection limit.

    PubMed

    Hill, Shannon B; Faradzhev, Nadir S; Powell, Cedric J

    2017-12-01

    We discuss the problem of quantifying common sources of statistical uncertainties for analyses of trace levels of surface contamination using X-ray photoelectron spectroscopy. We examine the propagation of error for peak-area measurements using common forms of linear and polynomial background subtraction including the correlation of points used to determine both background and peak areas. This correlation has been neglected in previous analyses, but we show that it contributes significantly to the peak-area uncertainty near the detection limit. We introduce the concept of relative background subtraction variance (RBSV) which quantifies the uncertainty introduced by the method of background determination relative to the uncertainty of the background area itself. The uncertainties of the peak area and atomic concentration and of the detection limit are expressed using the RBSV, which separates the contributions from the acquisition parameters, the background-determination method, and the properties of the measured spectrum. These results are then combined to find acquisition strategies that minimize the total measurement time needed to achieve a desired detection limit or atomic-percentage uncertainty for a particular trace element. Minimization of data-acquisition time is important for samples that are sensitive to x-ray dose and also for laboratories that need to optimize throughput.
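
    The paper's central observation, that the points used to determine the background also feed the peak-area estimate and so their counting noise propagates coherently, can be illustrated with a simplified propagation-of-error sketch. Everything below is our own construction (disjoint endpoint windows, straight-line background, Poisson noise), not the paper's RBSV formalism.

    ```python
    import numpy as np

    def peak_area_and_uncertainty(y, bg_lo, bg_hi, peak):
        """Peak area by straight-line background subtraction on Poisson count data,
        propagating the endpoint-window uncertainty, which is shared (fully
        correlated) across every channel in the peak window."""
        y = np.asarray(y, float)
        lo, hi, pk = (np.asarray(i) for i in (bg_lo, bg_hi, peak))
        m_lo, m_hi = y[lo].mean(), y[hi].mean()
        x_lo, x_hi = lo.mean(), hi.mean()
        frac = (pk - x_lo) / (x_hi - x_lo)
        bg = m_lo + (m_hi - m_lo) * frac            # straight-line background
        area = np.sum(y[pk] - bg)

        var_peak = np.sum(y[pk])                    # Poisson variance of raw peak counts
        # Sensitivities of the area to the two endpoint means; because the same
        # endpoints enter every peak channel, these terms add coherently:
        c_lo, c_hi = np.sum(1.0 - frac), np.sum(frac)
        var_bg = c_lo**2 * m_lo / lo.size + c_hi**2 * m_hi / hi.size
        return area, np.sqrt(var_peak + var_bg)

    # Hypothetical spectrum: flat ~100-count background plus a small Gaussian peak.
    rng = np.random.default_rng(0)
    chan = np.arange(60)
    y = rng.poisson(100 + 40 * np.exp(-0.5 * ((chan - 30) / 3.0) ** 2))
    print(peak_area_and_uncertainty(y, chan[:10], chan[-10:], chan[20:41]))
    ```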

  20. Automated brain volumetrics in multiple sclerosis: a step closer to clinical application

    PubMed Central

    Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G

    2016-01-01

    Background Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix, for assessment of cross-sectional WBV in patients with MS. Methods MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement was calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Results Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. Conclusions In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. PMID:27071647
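
    For readers unfamiliar with the precision/accuracy split reported here, the sketch below computes Pearson's r together with a bias-correction factor Cb. We assume Cb follows Lin's concordance correlation decomposition (CCC = r x Cb); that reading matches the notation but is our assumption, and the volumes are simulated.

    ```python
    import numpy as np

    def precision_accuracy(x, y):
        """Pearson's r (precision) and Lin's bias-correction factor Cb (accuracy),
        assuming Cb = 2 / (v + 1/v + u**2) with v the scale shift and u the
        location shift, so that concordance CCC = r * Cb."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        r = np.corrcoef(x, y)[0, 1]
        s_x, s_y = x.std(ddof=1), y.std(ddof=1)
        v = s_x / s_y
        u = (x.mean() - y.mean()) / np.sqrt(s_x * s_y)
        return r, 2.0 / (v + 1.0 / v + u ** 2)

    # Simulated whole-brain volumes (litres) for 63 scans and a hypothetical
    # second method that underestimates the reference by about 1%:
    rng = np.random.default_rng(1)
    sienax = rng.normal(1.45, 0.08, 63)
    other = 0.99 * sienax + rng.normal(0.0, 0.01, 63)
    print(precision_accuracy(other, sienax))
    ```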

  1. Analysis and interpretation of cost data in randomised controlled trials: review of published studies

    PubMed Central

    Barber, Julie A; Thompson, Simon G

    1998-01-01

    Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main conclusions regarding costs were not justified. The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854

  2. PV cells electrical parameters measurement

    NASA Astrophysics Data System (ADS)

    Cibira, Gabriel

    2017-12-01

    When measuring the optical parameters of a photovoltaic silicon cell, precise results enable good estimation of the electrical parameters through well-known physical-mathematical models. Nevertheless, considerable recombination phenomena might occur in both surface and intrinsic thin layers within novel materials. Moreover, rear-contact surface parameters may also influence recombination phenomena in the adjacent region. Therefore, the only precise approach is to verify the assumed cell electrical parameters by direct electrical measurement. Starting from a theoretical approach and with reference to experiments, this paper analyses, as a case study, problems within the measurement procedures and equipment used to acquire the electrical parameters of a photovoltaic silicon cell. A statistical appraisal of measurement quality is also provided.

  3. Difficulties in learning and teaching statistics: teacher views

    NASA Astrophysics Data System (ADS)

    Koparan, Timur

    2015-01-01

    The purpose of this study is to define teacher views about the difficulties in learning and teaching middle school statistics subjects. To serve this aim, interviews were conducted with 10 middle school maths teachers in the 2011-2012 school year in the province of Trabzon. The semi-structured interview technique, a qualitative descriptive research method, was applied in the research. In accordance with the aim, teacher opinions about the statistics subjects were examined and analysed; similar responses from the teachers were grouped and evaluated. The teachers stated that it was positive that middle school statistics subjects were taught gradually in every grade, but that some difficulties were experienced in teaching this subject. The findings are presented in eight themes, covering context, sample, data representation, central tendency and dispersion measures, probability, and variance, as well as other difficulties.

  4. Does Motivational Interviewing (MI) Work with Nonaddicted Clients? A Controlled Study Measuring the Effects of a Brief Training in MI on Client Outcomes

    ERIC Educational Resources Information Center

    Young, Tabitha L.; Gutierrez, Daniel; Hagedorn, W. Bryce

    2013-01-01

    This study investigated the relationships between motivational interviewing (MI) and client symptoms, attendance, and satisfaction. Seventy-nine clients attending a university-based counseling center were purposefully assigned to treatment or control conditions. Statistical analyses revealed that client symptoms in both groups improved. However,…

  5. From genes to ecosystems: Measuring evolutionary diversity and community structure with Forest Inventory and Analysis (FIA) data

    Treesearch

    Kevin M. Potter

    2009-01-01

    Forest genetic sustainability is an important component of forest health because genetic diversity and evolutionary processes allow for the adaptation of species and for the maintenance of ecosystem functionality and resilience. Phylogenetic community analyses, a set of new statistical methods for describing the evolutionary relationships among species, offer an...

  6. Are Gender Differences in Perceived and Demonstrated Technology Literacy Significant? It Depends on the Model

    ERIC Educational Resources Information Center

    Hohlfeld, Tina N.; Ritzhaupt, Albert D.; Barron, Ann E.

    2013-01-01

    This paper examines gender differences related to Information and Communication Technology (ICT) literacy using two valid and internally consistent measures with eighth grade students (N = 1,513) from Florida public schools. The results of t test statistical analyses, which examined only gender differences in demonstrated and perceived ICT skills,…

  7. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  8. Are Student Evaluations of Teaching Effectiveness Valid for Measuring Student Learning Outcomes in Business Related Classes? A Neural Network and Bayesian Analyses

    ERIC Educational Resources Information Center

    Galbraith, Craig S.; Merrill, Gregory B.; Kline, Doug M.

    2012-01-01

    In this study we investigate the underlying relational structure between student evaluations of teaching effectiveness (SETEs) and achievement of student learning outcomes in 116 business related courses. Utilizing traditional statistical techniques, a neural network analysis and a Bayesian data reduction and classification algorithm, we find…

  9. The Influence of Biographical Factors on Adult Learner Self-Directedness in an Open Distance Learning Environment

    ERIC Educational Resources Information Center

    Botha, Jo-Anne; Coetzee, Mariette

    2016-01-01

    This study investigated the relationship between self-directedness (as measured by the Adult Learner Self-Directedness Scale) and biographical factors such as age, race, and gender of adult learners enrolled at a South African open distance learning (ODL) higher education institution. Correlational and inferential statistical analyses were used. A…

  10. The Relationship between Parental Involvement and Urban Secondary School Student Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Jeynes, William H.

    2007-01-01

    A meta-analysis is undertaken, including 52 studies, to determine the influence of parental involvement on the educational outcomes of urban secondary school children. Statistical analyses are done to determine the overall impact of parental involvement as well as specific components of parental involvement. Four different measures of educational…

  11. Ontario Universities Statistical Compendium, 1970-71 to 1978-79. Part A, Macro-Indicators.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    Macro-indicators on the conditions of Ontario universities and supporting data that might be used to generate such indicators were developed, and analyses of both indicators and data were undertaken. Overall objectives were as follows: (1) to measure the real resources available to the Ontario university system as a function of the volume of…

  12. Engineering evaluation of SSME dynamic data from engine tests and SSV flights

    NASA Technical Reports Server (NTRS)

    1986-01-01

    An engineering evaluation of dynamic data from SSME hot firing tests and SSV flights is summarized. The basic objective of the study is to provide analyses of vibration, strain and dynamic pressure measurements in support of MSFC performance and reliability improvement programs. A brief description of the SSME test program is given and a typical test evaluation cycle reviewed. Data banks generated to characterize SSME component dynamic characteristics are described and statistical analyses performed on these data base measurements are discussed. Analytical models applied to define the dynamic behavior of SSME components (such as turbopump bearing elements and the flight accelerometer safety cut-off system) are also summarized. Appendices are included to illustrate some typical tasks performed under this study.

  13. Use of proxy measures in estimating socioeconomic inequalities in malaria prevalence.

    PubMed

    Somi, Masha F; Butler, James R; Vahid, Farshid; Njau, Joseph D; Kachur, S P; Abdulla, Salim

    2008-03-01

    To present and compare socioeconomic status (SES) rankings of households using consumption and an asset-based index as two alternative measures of SES; and to compare and evaluate the performance of these two measures in multivariate analyses of the socioeconomic gradient in malaria prevalence. Data for the study come from a survey of 557 households in 25 study villages in Tanzania in 2004. Household SES was determined using consumption and an asset-based index calculated using Principal Components Analysis on a set of household variables. In multivariate analyses of malaria prevalence, we also used two other measures of disease prevalence: parasitaemia and self-report of malaria or fever in the 2 weeks before interview. Household rankings based on the two measures of SES differ substantially. In multivariate analyses, there was a statistically significant negative association between both measures of SES and parasitaemia but not between either measure of SES and self-reported malaria. Age of individual, use of a mosquito net, and wall construction were negatively and significantly associated with parasitaemia, whilst roof construction was positively associated with parasitaemia. Only age remained significant when malaria self-report was used as the measure of disease prevalence. An asset index is an effective alternative to consumption in measuring the socioeconomic gradient in malaria parasitaemia, but self-report may be an unreliable measure of malaria prevalence for this purpose.
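
    A sketch of the asset-index construction described above: score each household on the first principal component of its asset indicators. The indicator set, sample size, and quintile grouping below are hypothetical stand-ins for the survey's variables.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical binary asset indicators per household (1 = owns/has, 0 = not).
    rng = np.random.default_rng(2)
    assets = rng.integers(0, 2, size=(557, 6)).astype(float)

    # SES index = first principal component of the standardized indicators.
    ses = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(assets)).ravel()

    # Rank households into wealth quintiles from the index.
    quintile = np.digitize(ses, np.quantile(ses, [0.2, 0.4, 0.6, 0.8]))
    print(np.bincount(quintile))  # roughly equal-sized groups
    ```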

  14. Tightening force and torque of nonlocking screws in a reverse shoulder prosthesis.

    PubMed

    Terrier, A; Kochbeck, S H; Merlini, F; Gortchacow, M; Pioletti, D P; Farron, A

    2010-07-01

    Reversed shoulder arthroplasty is an accepted treatment for glenohumeral arthritis associated with rotator cuff deficiency. For most reversed shoulder prostheses, the baseplate of the glenoid component is uncemented and its primary stability is provided by a central peg and peripheral screws. Because of the importance of the primary stability for good osteo-integration of the baseplate, optimal fixation of the screws is crucial. In particular, the amplitude of the tightening force of the nonlocking screws is clearly associated with this stability. Since this force is unknown, it is currently not accounted for in experimental or numerical analyses. Thus, the primary goal of this work was to measure this tightening force experimentally. In addition, the tightening torque was also measured, to estimate an optimal surgical value. An experimental setup with an instrumented baseplate was developed to measure simultaneously the tightening force, tightening torque and screwing angle of the nonlocking screws of the Aquealis reversed prosthesis. In addition, the amount of bone volume around each screw was measured with a micro-CT. Measurements were performed on 6 human cadaveric scapulae. A statistically significant correlation (p<0.05, R=0.83) was obtained between the maximal tightening force and the bone volume. The relationship between the tightening torque and the bone volume was not statistically significant. The experimental relationship presented in this paper can be used in numerical analyses to improve the baseplate fixation in the glenoid bone. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  15. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? That is, does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
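
    The leave-one-out machinery can be sketched generically: pool all studies but one under a random-effects model, then standardize the held-out study's deviation from that pooled estimate. The DerSimonian-Laird estimator below is a common default of our choosing; the precise definition and null distribution of Vn are given in the paper itself.

    ```python
    import numpy as np

    def dl_pool(y, v):
        """DerSimonian-Laird random-effects pooling: returns the pooled estimate,
        its variance, and the between-study variance tau^2."""
        w = 1.0 / v
        yb = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - yb) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (v + tau2)
        return np.sum(w_star * y) / np.sum(w_star), 1.0 / np.sum(w_star), tau2

    def loo_discrepancies(y, v):
        """Standardized deviation of each held-out study from the estimate pooled
        over the remaining studies -- the raw ingredient of a cross-validation
        statistic such as Vn."""
        out = []
        for i in range(len(y)):
            keep = np.ones(len(y), bool)
            keep[i] = False
            mu, var_mu, tau2 = dl_pool(y[keep], v[keep])
            out.append((y[i] - mu) / np.sqrt(v[i] + tau2 + var_mu))
        return np.array(out)

    y = np.array([0.30, 0.15, 0.25, 0.45, 0.10])       # hypothetical study effects
    v = np.array([0.010, 0.020, 0.015, 0.030, 0.020])  # within-study variances
    print(loo_discrepancies(y, v))
    ```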

  16. Vocal training in an anthropometrical aspect.

    PubMed

    Wyganowska-Świątkowska, Marzena; Kowalkowska, Iwona; Flicińska-Pamfil, Grażyna; Dąbrowski, Mikołaj; Kopczyński, Przemysław; Wiskirska-Woźnica, Bożena

    2017-12-01

    As shown in our previous paper, the dimensions of the cerebral parts of the cranium and face of the vocal students were larger than those of the non-singing students. The aim of the present study was to analyse the type of voice and its development depending on selected dimensions. A total of 56 vocal students - 36 women and 20 men - who underwent anthropometric measurements were divided into groups according to their voice type. Two professors of singing made subjective, independent evaluations of individual students' vocal development progress during the four years of training. The findings were analysed statistically with the current licensed versions of Statistica software. We found statistically significant positive correlations between the students' voice development and head length, head and face width, depth of the upper and middle face, and nose length. The dimensions of the head and the face had no impact on the type of voice; however, some anatomical characteristics may have an impact on voice development.

  17. Multivariate analyses of tinnitus complaint and change in tinnitus complaint: a masker study.

    PubMed

    Jakes, S; Stephens, S D

    1987-11-01

    Multivariate statistical techniques were used to re-analyse the data from the recent DHSS multi-centre masker study. These analyses were undertaken to three ends: first, to clarify and attempt to replicate the previously found factor structure of complaints about tinnitus; secondly, to attempt to identify common factors in the change or improvement measures pre- and post-masker treatment; thirdly, to identify predictors of any such outcome factors. Two complaint factors were identified: 'distress' and 'intrusiveness'. A series of analyses were conducted on change measures using different numbers of subjects and variables. When only semantic differential scales were used, the change factors were very similar to the complaint factors noted above. When variables measuring other aspects of improvement were included, several other factors were identified. These included: 'tinnitus helped', 'masking effects', 'residual inhibition' and 'matched loudness'. Twenty-five conceptually distinct predictors of outcome were identified. These predictor variables were quite different for different outcome factors. For example, high-frequency hearing loss was a predictor of tinnitus being helped by the masker, and a low frequency match and a low masking threshold predicted therapeutic success on residual inhibition. Decrease in matched loudness was predicted by louder tinnitus initially.

  18. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing-error results show that interpolation using a fourth-order polynomial provides the best fit to option prices, having the lowest error.
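
    A minimal sketch of the LOOCV selection criterion: refit the interpolant with one observation held out and score the squared pricing error at that point. Polynomial fitting with np.polyfit stands in for the paper's interpolation schemes, and the strikes and prices are invented.

    ```python
    import numpy as np

    def loocv_error(x, y, order):
        """Mean squared leave-one-out error for a polynomial fit of a given order."""
        errs = []
        for i in range(len(x)):
            keep = np.ones(len(x), bool)
            keep[i] = False
            coef = np.polyfit(x[keep], y[keep], order)
            errs.append((np.polyval(coef, x[i]) - y[i]) ** 2)
        return float(np.mean(errs))

    # Hypothetical strikes and option prices; compare 2nd- vs 4th-order fits.
    strikes = np.linspace(80.0, 120.0, 15)
    prices = np.maximum(100.0 - strikes, 0.0) + 5.0 * np.exp(-((strikes - 100.0) / 15.0) ** 2)
    print(loocv_error(strikes, prices, 2), loocv_error(strikes, prices, 4))
    ```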

  19. What can 35 years and over 700,000 measurements tell us about noise exposure in the mining industry?

    PubMed

    Roberts, Benjamin; Sun, Kan; Neitzel, Richard L

    2017-01-01

    To analyse over 700,000 cross-sectional measurements from the Mine Safety and Health Administration (MSHA) and develop statistical models to predict noise exposure for a worker. Descriptive statistics were used to summarise the data. Two linear regression models were used to predict noise exposure based on the MSHA permissible exposure limit (PEL) and action level (AL), respectively. Twofold cross-validation was used to compare the exposure estimates from the models to actual measurements. The mean difference and t-statistic were calculated for each job title to determine whether the model predictions were significantly different from the actual data. Measurements were acquired from MSHA through a Freedom of Information Act request. From 1979 to 2014, noise exposure has decreased. Measurements taken before the implementation of MSHA's revised noise regulation in 2000 were on average 4.5 dBA higher than after the law was implemented. Both models produced exposure predictions that were less than 1 dBA different from the holdout data. Overall noise levels in mines have been decreasing; however, this decrease has not been uniform across all mining sectors. The exposure predictions from the model will be useful to help predict hearing loss in workers in the mining industry.
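
    A sketch of the validation scheme as described: a linear model judged by twofold cross-validation on the mean difference between predicted and held-out exposures. The predictors and coefficients are synthetic stand-ins, not the MSHA variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    # Synthetic stand-in data: year of measurement and one sector dummy; the
    # response is a noise dose in dBA with a downward trend after 1979.
    rng = np.random.default_rng(3)
    X = np.column_stack([rng.uniform(1979, 2014, 5000), rng.integers(0, 2, 5000)])
    y = 95.0 - 0.13 * (X[:, 0] - 1979) + 4.5 * X[:, 1] + rng.normal(0, 3, 5000)

    for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], y[train])
        diff = model.predict(X[test]) - y[test]
        print(f"mean difference: {diff.mean():+.2f} dBA")  # target: within 1 dBA
    ```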

  20. Body fat indices and biomarkers of inflammation: a cross-sectional study with implications for obesity and peri-implant oral health.

    PubMed

    Elangovan, Satheesh; Brogden, Kim A; Dawson, Deborah V; Blanchette, Derek; Pagan-Rivera, Keyla; Stanford, Clark M; Johnson, Georgia K; Recker, Erica; Bowers, Rob; Haynes, William G; Avila-Ortiz, Gustavo

    2014-01-01

    To examine the relationships between three measures of body fat (body mass index [BMI], waist circumference [WC], and total body fat percent) and markers of inflammation around dental implants in stable periodontal maintenance patients. Seventy-three subjects were enrolled in this cross-sectional assessment. The study visit consisted of a physical examination that included anthropometric measurements of body composition (BMI, WC, body fat %); intraoral assessments were performed (full-mouth plaque index, periodontal and peri-implant comprehensive examinations) and peri-implant sulcular fluid (PISF) was collected on the study implants. Levels of interleukin (IL)-1α, IL-1β, IL-6, IL-8, IL-10, IL-12, IL-17, tumor necrosis factor-α, C-reactive protein, osteoprotegerin, leptin, and adiponectin in the PISF were measured using multiplex proteomic immunoassays. Correlation analysis with body fat measures was then performed using appropriate statistical methods. After adjustments for covariates, regression analyses revealed a statistically significant correlation between IL-1β in PISF and WC (R = 0.33; P = .0047). In this study of stable periodontal maintenance patients, a modest but statistically significant positive correlation was observed between the levels of IL-1β, a major proinflammatory cytokine, in PISF and WC, a reliable measure of central obesity.

  1. Measuring anxiety after spinal cord injury: Development and psychometric characteristics of the SCI-QOL Anxiety item bank and linkage with GAD-7.

    PubMed

    Kisala, Pamela A; Tulsky, David S; Kalpakjian, Claire Z; Heinemann, Allen W; Pohlig, Ryan T; Carle, Adam; Choi, Seung W

    2015-05-01

    To develop a calibrated item bank and computer adaptive test to assess anxiety symptoms in individuals with spinal cord injury (SCI), transform scores to the Patient Reported Outcomes Measurement Information System (PROMIS) metric, and create a statistical linkage with the Generalized Anxiety Disorder (GAD)-7, a widely used anxiety measure. Design Grounded-theory based qualitative item development methods; large-scale item calibration field testing; confirmatory factor analysis; graded response model item response theory analyses; statistical linking techniques to transform scores to a PROMIS metric; and linkage with the GAD-7. Setting Five SCI Model System centers and one Department of Veterans Affairs medical center in the United States. Participants Adults with traumatic SCI. Main outcome measure Spinal Cord Injury-Quality of Life (SCI-QOL) Anxiety Item Bank. Results Seven hundred sixteen individuals with traumatic SCI completed 38 items assessing anxiety, 17 of which were PROMIS items. After 13 items (including 2 PROMIS items) were removed, factor analyses confirmed unidimensionality. Item response theory analyses were used to estimate slopes and thresholds for the final 25 items (15 from PROMIS). The observed Pearson correlation between the SCI-QOL Anxiety and GAD-7 scores was 0.67. The SCI-QOL Anxiety item bank demonstrates excellent psychometric properties and is available as a computer adaptive test or short form for research and clinical applications. SCI-QOL Anxiety scores have been transformed to the PROMIS metric and we provide a method to link SCI-QOL Anxiety scores with those of the GAD-7.

  2. PIXE analysis of elements in gastric cancer and adjacent mucosa

    NASA Astrophysics Data System (ADS)

    Liu, Qixin; Zhong, Ming; Zhang, Xiaofeng; Yan, Lingnuo; Xu, Yongling; Ye, Simao

    1990-04-01

    The elemental regional distributions in 20 resected human stomach tissues were obtained using PIXE analysis. The samples were pathologically divided into four types: normal, adjacent mucosa A, adjacent mucosa B and cancer. The targets for PIXE analysis were prepared by wet digestion with a pressure bomb system. P, K, Fe, Cu, Zn and Se were measured and statistically analysed. We found significantly higher concentrations of P, K, Cu and Zn, and a higher Cu:Zn ratio, in cancer tissue as compared with normal tissue, but no statistically significant difference between adjacent mucosa and cancer tissue was found.

  3. An innovative and comprehensive technique to evaluate different measures of medication adherence: The network meta-analysis.

    PubMed

    Tonin, Fernanda S; Wiecek, Elyssa; Torres-Robles, Andrea; Pontarolo, Roberto; Benrimoj, Shalom Charlie I; Fernandez-Llimos, Fernando; Garcia-Cardenas, Victoria

    2018-05-19

    Poor medication adherence is associated with adverse health outcomes and higher costs of care; however, inconsistencies in the assessment of adherence are found in the literature. The aim was to evaluate the effect of different measures of adherence on the comparative effectiveness of complex interventions to enhance patients' adherence to prescribed medications. A systematic review with network meta-analysis was performed. Electronic searches for relevant pairwise meta-analyses including trials of interventions that aimed to improve medication adherence were performed in PubMed. Data were extracted from eligible trials evaluating adherence over short follow-up periods (up to 3 months) using any measure of adherence: self-report, pill count, or MEMS (medication event monitoring system). To standardize the results obtained with these different measures, an overall composite measure and an objective composite measure were also calculated. Network meta-analyses for each measure of adherence were built. Rank order and surface under the cumulative ranking curve (SUCRA) analyses were performed. Ninety-one trials were included in the network meta-analyses. The five network meta-analyses demonstrated robustness and reliability. Results obtained for all measures of adherence were similar across them and to both composite measures. For both composite measures, interventions comprising economic + technical components were the best option (90% probability in the SUCRA analysis), with statistical superiority against almost all other interventions and against standard care (odds ratio with 95% credibility interval ranging from 0.09 to 0.25 [0.02, 0.98]). The use of network meta-analysis was a reliable way to compare different measures of adherence for complex interventions over short follow-up periods; analyses with longer follow-up periods are needed to confirm these results. Different measures of adherence produced similar results, and the composite measures proved reliable alternatives for establishing a broader and more detailed picture of adherence. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
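
    As we read it, the core of the mapping is that each sample is advected past the probe by its own instantaneous convection distance |u| dt, so the running sum of those distances converts the time axis to a spatial one with no single frozen convection velocity. A toy sketch (synthetic signal; the uniform resampling step is an arbitrary choice of ours):

    ```python
    import numpy as np

    dt = 1e-4                                    # sampling interval (s)
    rng = np.random.default_rng(4)
    u = np.abs(10.0 + 2.0 * rng.standard_normal(100_000))  # toy |u| record (m/s)

    # Spatial coordinate of each sample: cumulative convection distance |u|*dt.
    x = np.cumsum(u * dt) - u[0] * dt            # starts at zero

    # Resample onto a uniform spatial grid before estimating wavenumber spectra
    # or spatial structure functions.
    x_grid = np.linspace(0.0, x[-1], u.size)
    u_spatial = np.interp(x_grid, x, u)
    ```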

  5. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 +/- 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values, except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained for the indirect values of the N-Sn parameter in female subjects and the Sn-Me parameter in male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes except for one parameter in male subjects. The indirect values measured were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.

  6. Type I and type II residual stress in iron meteorites determined by neutron diffraction measurements

    NASA Astrophysics Data System (ADS)

    Caporali, Stefano; Pratesi, Giovanni; Kabra, Saurabh; Grazzi, Francesco

    2018-04-01

    In this work we present a preliminary investigation, by means of neutron diffraction experiments, to determine the residual stress state in three different iron meteorites (Chinga, Sikhote Alin and Nantan). Because of the very peculiar microstructural characteristics of this class of samples, all the systematic effects related to the measuring procedure - such as crystallite size and composition - were taken into account, and a clear differentiation in the statistical distribution of residual stress between coarse- and fine-grained meteorites was highlighted. Moreover, the residual stress state was statistically analysed in three orthogonal directions, finding evidence of the existence of both type I and type II residual stress components. Finally, the application of the von Mises approach made it possible to determine the distribution of the type II stress.
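
    For reference, the von Mises equivalent stress from three orthogonal normal components is sketched below; neglecting shear terms is our simplification (diffraction measures lattice strains along the scattering vector), and the example values are invented.

    ```python
    import numpy as np

    def von_mises(s1, s2, s3):
        """Von Mises equivalent stress from three orthogonal normal components,
        with shear terms neglected (a simplifying assumption)."""
        return np.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2.0)

    # Hypothetical residual stress components (MPa) at one measurement point:
    print(von_mises(120.0, -40.0, 15.0))
    ```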

  7. Comparison of Data Quality of NOAA's ISIS and SURFRAD Networks to NREL's SRRL-BMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderberg, M.; Sengupta, M.

    2014-11-01

    This report provides analyses of broadband solar radiometric data quality for the National Oceanic and Atmospheric Administration's Integrated Surface Irradiance Study and Surface Radiation Budget Network (SURFRAD) solar measurement networks. The data quality of these networks is compared to that of the National Renewable Energy Laboratory's Solar Radiation Research Laboratory Baseline Measurement System (SRRL-BMS), using native data resolutions and hourly averages of the data from the years 2002 through 2013. This report describes the solar radiometric data quality testing and flagging procedures and the method used to determine and tabulate data quality statistics. Monthly data quality statistics for each network were plotted by year against the statistics for the SRRL-BMS. Some of the plots are presented in the body of the report, but most are in the appendix. These plots indicate that the overall solar radiometric data quality of the SURFRAD network is superior to that of the Integrated Surface Irradiance Study network and can be comparable to SRRL-BMS.

  8. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
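
    A toy version of the batch supervised setting: measurement vectors generated from a linear system model, a sparse additive attack injected into a subset of observations, and an off-the-shelf classifier labelling each vector as secure or attacked. The dimensions, attack magnitude, and choice of an RBF support vector machine are illustrative, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Measurements z = H x + e from a toy linear system; label 1 marks vectors
    # carrying a sparse attack on the first three meters. All values synthetic.
    rng = np.random.default_rng(5)
    n, d = 2000, 20
    H = rng.standard_normal((d, 5))
    X = (H @ rng.standard_normal((5, n))).T + 0.1 * rng.standard_normal((n, d))
    y = rng.integers(0, 2, n)
    X[y == 1, :3] += 0.8                          # sparse additive attack

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)       # supervised batch learner
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```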

  9. Dynamic properties of small-scale solar wind plasma fluctuations.

    PubMed

    Riazantseva, M O; Budaev, V P; Zelenyi, L M; Zastenker, G N; Pavlos, G P; Safrankova, J; Nemecek, Z; Prech, L; Nemec, F

    2015-05-13

    The paper presents the latest results of studies of small-scale fluctuations in the turbulent flow of the solar wind (SW), using measurements with extremely high temporal resolution (up to 0.03 s) from the Bright Monitor of the Solar Wind (BMSW) plasma spectrometer operating on the SPECTR-R astrophysical spacecraft at distances up to 350,000 km from the Earth. The spectra of SW ion flux fluctuations in the range of scales between 0.03 and 100 s are systematically analysed. The difference of slopes in the low- and high-frequency parts of the spectra and the frequency of the break point between these two characteristic slopes were analysed for different conditions in the SW. The statistical properties of the SW ion flux fluctuations were thoroughly analysed on scales less than 10 s. A high level of intermittency is demonstrated, and the extended self-similarity of the SW ion flux turbulent flow is constantly observed. The approximation of the non-Gaussian probability distribution function of ion flux fluctuations by the Tsallis statistics shows the non-extensive character of SW fluctuations. Statistical characteristics of ion flux fluctuations are compared with the predictions of a log-Poisson model. The log-Poisson parametrization of the structure function scaling has shown that well-defined filament-like plasma structures are, as a rule, observed in the turbulent SW flows. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  10. Space-Time Point Pattern Analysis of Flavescence Dorée Epidemic in a Grapevine Field: Disease Progression and Recovery

    PubMed Central

    Maggi, Federico; Bosco, Domenico; Galetto, Luciana; Palmano, Sabrina; Marzachì, Cristina

    2017-01-01

    Analyses of space-time statistical features of a flavescence dorée (FD) epidemic in Vitis vinifera plants are presented. FD spread was surveyed from 2011 to 2015 in a vineyard of 17,500 m2 surface area in the Piemonte region, Italy; counts and positions of symptomatic plants were used to test the hypothesis of Complete Spatial Randomness and isotropy in the space-time static (year-by-year) point pattern measure. Space-time dynamic (year-to-year) point pattern analyses were applied to newly infected and recovered plants to highlight statistics of FD progression and regression over time. Results highlighted point patterns ranging from disperse (at small scales) to aggregated (at large scales) over the years, suggesting that the FD epidemic is characterized by multiscale properties that may depend on infection incidence, vector population, and flight behavior. Dynamic analyses showed moderate preferential progression and regression along rows. Nearly uniform distributions of direction and negative exponential distributions of distance of newly symptomatic and recovered plants relative to existing symptomatic plants highlighted features of vector mobility similar to Brownian motion. This evidence indicates that space-time epidemic modeling should include the environmental setting (e.g., vineyard geometry and topography) to capture anisotropy, as well as statistical features of vector flight behavior, plant recovery and susceptibility, and plant mortality. PMID:28111581
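
    One standard diagnostic for Complete Spatial Randomness is sketched below using the Clark-Evans nearest-neighbour index; this is a generic test of our choosing (no edge correction), not necessarily the statistic used in the study, and the vineyard dimensions and coordinates are hypothetical.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def clark_evans(points, area):
        """Clark-Evans index: R ~ 1 under CSR, R < 1 suggests aggregation,
        R > 1 suggests dispersion (no edge correction applied)."""
        pts = np.asarray(points, float)
        d, _ = cKDTree(pts).query(pts, k=2)       # nearest neighbour is column 1
        r_obs = d[:, 1].mean()
        r_exp = 0.5 / np.sqrt(len(pts) / area)    # expected mean distance under CSR
        return r_obs / r_exp

    # Hypothetical symptomatic-plant coordinates in a 175 m x 100 m vineyard:
    rng = np.random.default_rng(6)
    pts = rng.uniform([0.0, 0.0], [175.0, 100.0], size=(200, 2))
    print(clark_evans(pts, 175.0 * 100.0))        # should be close to 1 here
    ```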

  11. A visual basic program to generate sediment grain-size statistics and to extrapolate particle distributions

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2004-01-01

    Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper water marine environments (e.g., rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The input for the sediment fractions is weight percentages in whole-phi notation (Krumbein, 1934; Inman, 1952), and the program permits the user to select output in either method-of-moments or inclusive-graphics statistics (Fig. 1). Users select options primarily with mouse-click events, or through interactive dialogue boxes.
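
    The method-of-moments statistics that such a program generates follow directly from the whole-phi weight percentages; below is a generic sketch of the standard phi-moment formulas (mean, sorting, skewness, kurtosis), not GSSTAT's Visual Basic code, using an invented distribution.

    ```python
    import numpy as np

    def phi_moments(midpoints_phi, weight_pct):
        """Method-of-moments grain-size statistics from weight percentages
        tallied in whole-phi classes (midpoint convention)."""
        m = np.asarray(midpoints_phi, float)
        f = np.asarray(weight_pct, float)
        mean = np.sum(f * m) / 100.0
        sorting = np.sqrt(np.sum(f * (m - mean) ** 2) / 100.0)
        skewness = np.sum(f * (m - mean) ** 3) / (100.0 * sorting ** 3)
        kurtosis = np.sum(f * (m - mean) ** 4) / (100.0 * sorting ** 4)
        return mean, sorting, skewness, kurtosis

    # Hypothetical distribution over five whole-phi classes:
    print(phi_moments([0.5, 1.5, 2.5, 3.5, 4.5], [10, 25, 35, 20, 10]))
    ```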

  12. Influence of exposure assessment and parameterization on exposure response. Aspects of epidemiologic cohort analysis using the Libby Amphibole asbestos worker cohort.

    PubMed

    Bateson, Thomas F; Kopylev, Leonid

    2015-01-01

    Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
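
    The "cumulative exposure with allowance of exponential decay" metric can be written compactly as a decayed sum over the job history; the half-life parameterization and the example history below are our own illustrative choices.

    ```python
    import numpy as np

    def decayed_cumulative_exposure(intensity, years, eval_year, half_life):
        """Cumulative exposure with exponential decay: each year's exposure x_i
        contributes x_i * exp(-lam * (t - t_i)), with lam = ln(2) / half-life."""
        lam = np.log(2.0) / half_life
        t = np.asarray(years, float)
        x = np.asarray(intensity, float)
        return float(np.sum(x * np.exp(-lam * (eval_year - t))))

    # Hypothetical job history: 5 years at 1.0 f/cc, then 10 years at 0.2 f/cc.
    hist_years = np.arange(1960, 1975)
    hist_x = np.r_[np.full(5, 1.0), np.full(10, 0.2)]
    print(decayed_cumulative_exposure(hist_x, hist_years, 1980, half_life=10.0))
    ```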

  13. Anatomy of the proximal tibiofibular joint and interosseous membrane, and their contributions to joint kinematics in below-knee amputations.

    PubMed

    Burkhart, Timothy A; Asa, Benjamin; Payne, Michael W C; Johnson, Marjorie; Dunning, Cynthia E; Wilson, Timothy D

    2015-02-01

    A result of below-knee amputations (BKAs) is abnormal motion that occurs about the proximal tibiofibular joint (PTFJ). While it is known that joint morphology may play a role in joint kinematics, this is not well understood with respect to the PTFJ. Therefore, the purposes of this study were: (i) to characterize the anatomy of the PTFJ and statistically analyze the relationships within the joint; and (ii) to determine the relationships between the PTFJ characteristics and the degree of movement of the fibula in BKAs. The PTFJ was characterized in 40 embalmed specimens disarticulated at the knee, and amputated through the mid-tibia and fibula. Four metrics were measured: inclination angle (angle at which the fibula articulates with the tibia); tibial and fibular articular surface areas; articular surface concavity and shape. The specimens were mechanically tested by applying a load through the biceps femoris tendon, and the degree of motion about the tibiofibular joint was measured. Regression analyses were performed to determine the relationships between the different PTFJ characteristics and the magnitude of fibular abduction. Finally, Pearson correlation analyses were performed on inclination angle and surface area vs. fibular kinematics. The inclination angle measured on the fibula was significantly greater than that measured on the tibia. This difference may be attributed to differences in concavity of the tibial and fibular surfaces. Surface area measured on the tibia and fibula was not statistically different. The inclination angle was not statistically correlated to surface area. However, when correlating fibular kinematics in BKAs, inclination angle was positively correlated to the degree of fibular abduction, whereas surface area was negatively correlated. The characteristics of the PTFJ dictate the amount of fibular movement, specifically, fibular abduction in BKAs. Predicting BKA complications based on PTFJ characteristics can lead to recommendations in treatment. © 2014 Anatomical Society.

  14. Low statistical power in biomedical science: a review of three human research domains.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R

    2017-02-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
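
    To make the power figures concrete, the sketch below evaluates the power of a two-sample t-test at alpha = 0.05 for a modest true effect (Cohen's d = 0.3, our illustrative choice) across per-group sample sizes; roughly 175 per group is needed to reach the conventional 80%.

    ```python
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for n in (10, 20, 50, 175):
        p = power.power(effect_size=0.3, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"n per group = {n:3d}  power = {p:.2f}")
    ```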

  15. Low statistical power in biomedical science: a review of three human research domains

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409

  16. A Guerilla Guide to Common Problems in ‘Neurostatistics’: Essential Statistical Topics in Neuroscience

    PubMed Central

    Smith, Paul F.

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins. PMID:29371855
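
    One listed issue, excessive multiple testing, has a standard remedy worth showing: adjust the p-values before declaring significance. A sketch with simulated p-values, comparing family-wise (Bonferroni) and false-discovery-rate (Benjamini-Hochberg) control:

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    # Simulated p-values: 95 true nulls and 5 genuine effects.
    rng = np.random.default_rng(7)
    p = np.concatenate([rng.uniform(size=95), rng.uniform(0.0, 1e-3, size=5)])

    reject_bonf = multipletests(p, alpha=0.05, method="bonferroni")[0]
    reject_bh = multipletests(p, alpha=0.05, method="fdr_bh")[0]
    print(reject_bonf.sum(), reject_bh.sum())  # BH typically retains more discoveries
    ```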

  17. A Guerilla Guide to Common Problems in 'Neurostatistics': Essential Statistical Topics in Neuroscience.

    PubMed

    Smith, Paul F

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins.
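
    Two of the issues listed here, inequality of variance and excessive multiple testing, have standard remedies that fit in a few lines. A hedged sketch (the data and group names are invented, not drawn from any study in this record): Welch's unequal-variance t-test for each comparison, followed by a Holm correction across the family of tests.

        import numpy as np
        from scipy.stats import ttest_ind
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(1)
        # hypothetical recordings from four regions, control vs. lesion,
        # deliberately given unequal group variances
        control = [rng.normal(0.0, 1.0, 12) for _ in range(4)]
        lesion = [rng.normal(0.8, 2.5, 12) for _ in range(4)]

        # equal_var=False requests Welch's t-test instead of Student's
        pvals = [ttest_ind(c, l, equal_var=False).pvalue
                 for c, l in zip(control, lesion)]

        # Holm's step-down procedure controls the family-wise error rate
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
        for p, pa, r in zip(pvals, p_adj, reject):
            print(f"raw p = {p:.4f}, adjusted p = {pa:.4f}, reject H0: {r}")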

  18. Lead exposure in US worksites: A literature review and development of an occupational lead exposure database from the published literature

    PubMed Central

    Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.

    2016-01-01

Background: Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods: We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results: Measurements were published between 1940 and 2010 and represented 27 2-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions: This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240

  19. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or...indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for...

  20. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical function of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.
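
    EZR itself is an R GUI, so its analyses are ultimately R scripts, but the survival workflow this record describes can be sketched in any statistics environment. As a hedged illustration (invented follow-up data, and the Python lifelines package rather than EZR's own R functions), a basic Kaplan-Meier fit of the kind EZR exposes by point-and-click:

        import pandas as pd
        from lifelines import KaplanMeierFitter

        # invented follow-up data: time in months, event=1 for death, 0 for censored
        df = pd.DataFrame({
            "months": [5, 8, 12, 14, 20, 22, 25, 30, 33, 40],
            "event":  [1, 0,  1,  1,  0,  1,  0,  1,  0,  0],
        })

        kmf = KaplanMeierFitter()
        kmf.fit(durations=df["months"], event_observed=df["event"], label="all patients")
        print(kmf.median_survival_time_)  # time at which estimated S(t) reaches 0.5
        print(kmf.survival_function_)     # the full step-function estimate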

  1. Empirical evidence about inconsistency among studies in a pair-wise meta-analysis.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T

    2016-12-01

This paper investigates how inconsistency (as measured by the I2 statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
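
    The I2 statistic at the centre of this record is straightforward to compute from study-level summaries. A minimal sketch with invented log odds ratios and sampling variances (the standard moment-based calculation, not the hierarchical models used in the paper):

        import numpy as np

        # hypothetical per-study log odds ratios and their sampling variances
        yi = np.array([0.35, 0.10, 0.62, -0.05, 0.48, 0.22])
        vi = np.array([0.04, 0.09, 0.06, 0.12, 0.05, 0.08])

        wi = 1.0 / vi                            # inverse-variance weights
        theta = np.sum(wi * yi) / np.sum(wi)     # pooled fixed-effect estimate

        # Cochran's Q: weighted squared deviations from the pooled estimate
        Q = np.sum(wi * (yi - theta) ** 2)
        df = len(yi) - 1

        # I2: share of total variability attributable to between-study heterogeneity
        I2 = max(0.0, (Q - df) / Q) * 100.0
        print(f"Q = {Q:.2f} on {df} df, I2 = {I2:.0f}%")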

  2. The Economic Influences of Elementary School Sites on Residential Property Tax Revenue in Selected Urban Neighborhoods.

    ERIC Educational Resources Information Center

    Grube, Karl William

    This study attempted to: (1) develop research criteria and statistically valid analyses; (2) measure the economic influences of well-developed and undeveloped elementary school sites, large open space and small, or limited space school sites, on the market sale prices of comparable single-family residential housing units in matched pairs of urban…

  3. Triangulating Evidence to Investigate the Validity of Measures: Evidence from Discussion during Instruction, Cognitive Interviews, and Written Assessments

    ERIC Educational Resources Information Center

    Burmester, Kristen O'Rourke

    2011-01-01

    Classrooms are a primary site of evidence about learning. Yet classroom proceedings often occur behind closed doors and hence evidence of student learning is observable only to the classroom teacher. The informal and undocumented nature of this information means that it is rarely included in statistical models or quantifiable analyses. This…

  4. The Effects of Conditioned Reinforcement for Reading on Reading Comprehension for 5th Graders

    ERIC Educational Resources Information Center

    Cumiskey Moore, Colleen

    2017-01-01

    In three experiments, I tested the effects of the conditioned reinforcement for reading (R+Reading) on reading comprehension with 5th graders. In Experiment 1, I conducted a series of statistical analyses with data from 18 participants for one year. I administered 4 pre/post measurements for reading repertoires which included: 1) state-wide…

  5. Statistical uncertainty of eddy flux-based estimates of gross ecosystem carbon exchange at Howland Forest, Maine

    Treesearch

S.C. Hagen; B.H. Braswell; E. Linder; S. Frolking; A.D. Richardson; D.Y. Hollinger

    2006-01-01

    We present an uncertainty analysis of gross ecosystem carbon exchange (GEE) estimates derived from 7 years of continuous eddy covariance measurements of forest atmosphere CO2 fluxes at Howland Forest, Maine, USA. These data, which have high temporal resolution, can be used to validate process modeling analyses, remote sensing assessments, and field surveys. However,...

  6. A method for estimating current attendance on sets of campgrounds...a pilot study

    Treesearch

    Richard L. Bury; Ruth Margolies

    1964-01-01

    Statistical models were devised for estimating both daily and seasonal attendance (and corresponding precision of estimates) through correlation-regression and ratio analyses. Total daily attendance for a test set of 23 campgrounds could be estimated from attendance measured in only one of them. The chances were that estimates would be within 10 percent of true...

  7. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, R. J.; Feiveson, A. H.

    2015-01-01

    Back by popular demand, the JSC Biostatistics Lab is offering an opportunity for informal conversation about challenges you may have encountered with issues of experimental design, analysis, data visualization or related topics. Get answers to common questions about sample size, repeated measures, violation of distributional assumptions, missing data, multiple testing, time-to-event data, when to trust the results of your analyses (reproducibility issues) and more.

  8. Seeking a fingerprint: analysis of point processes in actigraphy recording

    NASA Astrophysics Data System (ADS)

    Gudowska-Nowak, Ewa; Ochab, Jeremi K.; Oleś, Katarzyna; Beldzik, Ewa; Chialvo, Dante R.; Domagalik, Aleksandra; Fąfrowicz, Magdalena; Marek, Tadeusz; Nowak, Maciej A.; Ogińska, Halszka; Szwed, Jerzy; Tyburczyk, Jacek

    2016-05-01

    Motor activity of humans displays complex temporal fluctuations which can be characterised by scale-invariant statistics, thus demonstrating that structure and fluctuations of such kinetics remain similar over a broad range of time scales. Previous studies on humans regularly deprived of sleep or suffering from sleep disorders predicted a change in the invariant scale parameters with respect to those for healthy subjects. In this study we investigate the signal patterns from actigraphy recordings by means of characteristic measures of fractional point processes. We analyse spontaneous locomotor activity of healthy individuals recorded during a week of regular sleep and a week of chronic partial sleep deprivation. Behavioural symptoms of lack of sleep can be evaluated by analysing statistics of duration times during active and resting states, and alteration of behavioural organisation can be assessed by analysis of power laws detected in the event count distribution, distribution of waiting times between consecutive movements and detrended fluctuation analysis of recorded time series. We claim that among different measures characterising complexity of the actigraphy recordings and their variations implied by chronic sleep distress, the exponents characterising slopes of survival functions in resting states are the most effective biomarkers distinguishing between healthy and sleep-deprived groups.
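
    The biomarker singled out here, the slope of the survival function of resting-bout durations, amounts to a straight-line fit on log-log axes. A simplified sketch on synthetic durations (drawn from a Pareto law purely for illustration; careful work would use maximum-likelihood tail estimators rather than a least-squares fit):

        import numpy as np

        rng = np.random.default_rng(7)
        alpha_true = 1.6
        # synthetic rest-bout durations with survival S(t) ~ t**(-alpha)
        durations = rng.pareto(alpha_true, size=5000) + 1.0

        # empirical survival function S(t) = P(duration > t)
        t = np.sort(durations)
        S = 1.0 - np.arange(1, len(t) + 1) / len(t)

        # fit the tail slope on log-log axes, dropping points where S -> 0
        keep = S > 1e-3
        slope, _ = np.polyfit(np.log(t[keep]), np.log(S[keep]), 1)
        print(f"estimated tail exponent: {-slope:.2f} (true value {alpha_true})")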

  9. National Trends in Trace Metals Concentrations in Ambient Particulate Matter

    NASA Astrophysics Data System (ADS)

    McCarthy, M. C.; Hafner, H. R.; Charrier, J. G.

    2007-12-01

    Ambient measurements of trace metals identified as hazardous air pollutants (HAPs, air toxics) collected in the United States from 1990 to 2006 were analyzed for long-term trends. Trace metals analyzed include lead, manganese, arsenic, chromium, nickel, cadmium, and selenium. Visual and statistical analyses were used to identify and quantify temporal variations in air toxics at national and regional levels. Trend periods were required to be at least five years. Lead particles decreased in concentration at most monitoring sites, but trends in other metals were not consistent over time or spatially. In addition, routine ambient monitoring methods had method detection limits (MDLs) too high to adequately measure concentrations for trends analysis. Differences between measurement methods at urban and rural sites also confound trends analyses. Improvements in MDLs, and a better understanding of comparability between networks, are needed to better quantify trends in trace metal concentrations in the future.

  10. Timing, Emission, and Spectral Studies of Rotating Radio Transients

    NASA Astrophysics Data System (ADS)

    Cui, Bingyi; McLaughlin, Maura

    2018-01-01

Rotating Radio Transients (RRATs) are a class of pulsars with unusually sporadic pulse emissions which were discovered only through their single pulses. We report new timing solutions, pulse amplitude measurements, and spectral measurements for a number of RRATs. Timing solutions provide derived physical properties of these sources, allowing comparison with other classes of neutron stars. Analyses of single pulse properties also contribute to this study by measuring composite profiles and flux density distributions, which can constrain the RRATs' emission mechanism. We make statistical comparisons between RRATs and canonical pulsars and show that with the same spin period, RRATs are more likely to have larger period derivatives, which may indicate a higher magnetic field. Spectral analyses were also performed in order to compare spectra with those of other source classes. We describe this work and plans for application to much larger numbers of sources in the future.

  11. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  12. Biomass fuel use for household cooking in Swaziland: is there an association with anaemia and stunting in children aged 6-36 months?

    PubMed

    Machisa, Mercilene; Wichmann, Janine; Nyasulu, Peter S

    2013-09-01

    This study is the second to investigate the association between the use of biomass fuels (BMF) for household cooking and anaemia and stunting in children. Such fuels include coal, charcoal, wood, dung and crop residues. Data from the 2006-2007 Swaziland Demographic and Health Survey (a cross-sectional study design) were analysed. Childhood stunting was ascertained through age and height, and anaemia through haemoglobin measurement. The association between BMF use and health outcomes was determined in multinomial logistic regression analyses. Various confounders were considered in the analyses. A total of 1150 children aged 6-36 months were included in the statistical analyses, of these 596 (51.8%) and 317 (27.6%) were anaemic and stunted, respectively. BMF use was not significantly associated with childhood anaemia in univariate analysis. Independent risk factors for childhood anaemia were child's age, history of childhood diarrhoea and mother's anaemia status. No statistically significant association was observed between BMF use and childhood stunting, after adjusting for child's gender, age, birth weight and preceding birth interval. This study identified the need to prioritize childhood anaemia and stunting as health outcomes and the introduction of public health interventions in Swaziland. Further research is needed globally on the potential effects of BMF use on childhood anaemia and stunting.

  13. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  14. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. Particularly, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  15. Mineral discrimination using a portable ratio-determining radiometer.

    USGS Publications Warehouse

    Whitney, G.; Abrams, M.J.; Goetz, A.F.H.

    1983-01-01

A portable ratio-determining radiometer has been tested in the laboratory to evaluate the use of narrow band filters for separating geologically important minerals. The instrument has 10 bands in the visible and near-infrared portion of the spectrum (0.5-2.4 µm), positioned to sample spectral regions having absorption bands characteristic of minerals in this wavelength region. Measurements and statistical analyses were performed on 66 samples, which were characterized by microscopic and X-ray diffraction analyses. Comparison with high-resolution laboratory spectral reflectance curves indicated that the radiometer's raw values faithfully reproduced the shapes of the spectra. -from Authors

  16. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
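
    The leave-one-out idea behind Vn can be conveyed without the full derivation. In this hedged sketch (a generic cross-validation loop over an inverse-variance pooled estimate, not the published Vn statistic or its distribution), each study is held out in turn and compared with the estimate pooled from the remaining studies:

        import numpy as np

        # hypothetical study effect estimates and sampling variances
        yi = np.array([0.30, 0.12, 0.45, 0.05, 0.38])
        vi = np.array([0.02, 0.05, 0.04, 0.06, 0.03])

        def pooled(y, v):
            w = 1.0 / v
            return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)  # estimate, variance

        for i in range(len(yi)):
            mask = np.arange(len(yi)) != i
            est, var = pooled(yi[mask], vi[mask])   # pool all studies except i
            # standardized discrepancy between the held-out study and the rest
            z = (yi[i] - est) / np.sqrt(vi[i] + var)
            print(f"study {i}: z = {z:+.2f}")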

  17. Use of Multivariate Linkage Analysis for Dissection of a Complex Cognitive Trait

    PubMed Central

    Marlow, Angela J.; Fisher, Simon E.; Francks, Clyde; MacPhie, I. Laurence; Cherny, Stacey S.; Richardson, Alex J.; Talcott, Joel B.; Stein, John F.; Monaco, Anthony P.; Cardon, Lon R.

    2003-01-01

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits. PMID:12587094

  18. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  19. Psychopathological Symptoms and Psychological Wellbeing in Mexican Undergraduate Students

    PubMed Central

    Contreras, Mariel; de León, Ana Mariela; Martínez, Estela; Peña, Elsa Melissa; Marques, Luana; Gallegos, Julia

    2017-01-01

College life involves a process of adaptation to changes that have an impact on the psycho-emotional development of students. Successful adaptation to this stage involves the balance between managing personal resources and potential stressors that generate distress. This epidemiological descriptive and transversal study estimates the prevalence of psychopathological symptomatology and psychological well-being among 516 college students, 378 (73.26%) women and 138 (26.74%) men, ages between 17 and 24, from the city of Monterrey in Mexico. It describes the relationship between psychopathological symptomatology and psychological well-being, and explores gender differences. For data collection, two measures were used: The Symptom Checklist Revised and the Scale of Psychological Well-being. Statistical analyses used were t test for independent samples, Pearson’s r and regression analysis with the Statistical Package for the Social Sciences (SPSS v21.0). Statistical analyses showed that the prevalence of psychopathological symptoms was 10–13%, with Aggression being the highest. The dimension of psychological well-being with the lowest scores was Environmental Mastery. Participants with a higher level of psychological well-being had a lower level of psychopathological symptoms, which shows the importance of early identification and prevention. Gender differences were found on some subscales of the psychopathological symptomatology and of the psychological well-being measures. This study provides a basis for future research and development of resources to promote the psychological well-being and quality of life of university students. PMID:29104876

  20. Modeling stimulus variation in three common implicit attitude tasks.

    PubMed

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures (the Implicit Association Test, affect misattribution procedure, and evaluative priming task) and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60%) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
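
    The appendix mentioned above gives R, SAS, and SPSS syntax; a comparable crossed random-effects structure can be approximated in Python with statsmodels. This is a hedged sketch, not the authors' code: the data and column names are invented, and the crossed subject and stimulus effects are expressed as variance components within a single all-encompassing group, a documented statsmodels workaround for crossed designs.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        n_subj, n_stim = 40, 20
        # hypothetical trial-level data: every subject sees every stimulus
        df = pd.DataFrame([(s, w, w % 2) for s in range(n_subj)
                           for w in range(n_stim)],
                          columns=["subject", "stim", "condition"])
        df["rt"] = (600 + 30 * df["condition"]
                    + rng.normal(0, 40, n_subj)[df["subject"]]  # subject effects
                    + rng.normal(0, 25, n_stim)[df["stim"]]     # stimulus effects
                    + rng.normal(0, 60, len(df)))               # trial noise

        df["whole"] = 1  # one group, so subject and stimulus effects are crossed
        model = smf.mixedlm("rt ~ condition", df, groups="whole", re_formula="0",
                            vc_formula={"subject": "0 + C(subject)",
                                        "stim": "0 + C(stim)"})
        print(model.fit().summary())  # fixed effect plus both variance components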

  1. Analysis of Anatomic and Functional Measures in X-Linked Retinoschisis

    PubMed Central

    Cukras, Catherine A.; Huryn, Laryssa A.; Jeffrey, Brett P.; Turriff, Amy; Sieving, Paul A.

    2018-01-01

Purpose: To examine the symmetry of structural and functional parameters between eyes in patients with X-linked retinoschisis (XLRS), as well as changes in visual acuity and electrophysiology over time. Methods: This is a single-center observational study of 120 males with XLRS who were evaluated at the National Eye Institute. Examinations included best-corrected visual acuity for all participants, as well as ERG recording and optical coherence tomography (OCT) on a subset of participants. Statistical analyses were performed using nonparametric Spearman correlations and linear regression. Results: Our analyses demonstrated a statistically significant correlation of structural and functional measures between the two eyes of XLRS patients for all parameters. OCT central macular thickness (n = 78; Spearman r = 0.83, P < 0.0001) and ERG b/a ratio (n = 78; Spearman r = 0.82, P < 0.0001) were the most strongly correlated between a participant's eyes, whereas visual acuity was less strongly correlated (n = 120; Spearman r = 0.47, P < 0.0001). Stability of visual acuity was observed with an average change of less than one letter (n = 74; OD −0.66 and OS −0.70 letters) in a mean follow-up time of 6.8 years. There was no statistically significant change in the ERG b/a ratio within eyes over time. Conclusions: Although a broad spectrum of clinical phenotypes is observed across individuals with XLRS, our study demonstrates a significant correlation of structural and functional findings between the two eyes and stability of measures of acuity and ERG parameters over time. These results highlight the utility of the fellow eye as a useful reference for monocular interventional trials.

  2. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula

    PubMed Central

    Giordano, Bruno L.; Kayser, Christoph; Rousselet, Guillaume A.; Gross, Joachim; Schyns, Philippe G.

    2016-01-01

We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open‐source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541–1573, 2017. © 2016 Wiley Periodicals, Inc. PMID:27860095
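
    The estimator's core is compact: rank-transform each variable, map the ranks through the inverse normal CDF, and apply the closed-form entropy of Gaussian variables. A minimal bivariate sketch (simplified from the paper's general multivariate framework, with no bias correction applied):

        import numpy as np
        from scipy.stats import rankdata, norm

        def copnorm(x):
            """Map data to standard normal quantiles via its empirical ranks."""
            return norm.ppf(rankdata(x) / (len(x) + 1.0))

        def gcmi_bivariate(x, y):
            """Gaussian-copula estimate of mutual information, in bits."""
            r = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
            return -0.5 * np.log2(1.0 - r ** 2)  # closed form for bivariate Gaussian MI

        rng = np.random.default_rng(0)
        x = rng.normal(size=2000)
        y = 0.6 * x + rng.normal(size=2000)  # dependent on x, so MI > 0
        print(f"{gcmi_bivariate(x, y):.3f} bits")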

  3. Statistical analysis of lightning electric field measured under Malaysian condition

    NASA Astrophysics Data System (ADS)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

Lightning is an electrical discharge during thunderstorms that can be either within clouds (Inter-Cloud), or between clouds and ground (Cloud-Ground). The lightning characteristics and their statistical information are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, there are various techniques to detect lightning signals and to determine various parameters produced by a lightning flash. Each technique provides its own claimed performances. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each event lasts for about 4-5 hours) were examined. Statistical analyses of the following signal parameters were presented: preliminary breakdown pulse train time duration, time interval between preliminary breakdowns and return stroke, multiplicity of stroke, and percentages of single stroke only. The BIL model is also introduced to characterize the lightning signature patterns. Observations on the statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown time durations of the observed lightning signals are 84 ms and 560 µs, respectively. The findings of the statistical results show that 7.6% of the flashes were single stroke flashes, and the maximum number of strokes recorded was 14 multiple strokes per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.

  4. Metrological assessment of the methods for measuring the contents of acids and ion metals responsible for the exchangeable acidity of soils

    NASA Astrophysics Data System (ADS)

    Vanchikova, E. V.; Shamrikova, E. V.; Bespyatykh, N. V.; Kyz"yurova, E. V.; Kondratenok, B. M.

    2015-02-01

    Metrological characteristics—precision, trueness, and accuracy—of the results of measurements of the exchangeable acidity and its components by the potentiometric titration method were studied on the basis of multiple analyses of the soil samples with the examination of statistical data for the outliers and their correspondence to the normal distribution. Measurement errors were estimated. The applied method was certified by the Metrological Center of the Uralian Branch of the Russian Academy of Sciences (certificate no. 88-17641-094-2013) and included in the Federal Information Fund on Assurance of Measurements (FR 1.31.2013.16382).

  5. The cancellous bone multiscale morphology-elasticity relationship.

    PubMed

    Agić, Ante; Nikolić, Vasilije; Mijović, Budimir

    2006-06-01

The effective property relations of cancellous bone are analysed across two scales: properties of a representative volume element at the microscale, and a statistical measure of trabecular trajectory orientation at the mesoscale. Anisotropy of the microstructure is described by a fabric tensor measure, with the trajectory orientation tensor as the bridging-scale connection. The scattered data measured in compression tests (elastic modulus, trajectory orientation, apparent density) are fitted by a stochastic interpolation procedure. The engineering constants of the elasticity tensor are estimated by a least-squares fitting procedure in multidimensional space using the Nelder-Mead simplex. The multiaxial failure surface in strain space is constructed and interpolated by a modified super-ellipsoid.

  6. Atmospheric water vapour over oceans from SSM/I measurements

    NASA Technical Reports Server (NTRS)

    Schluessel, Peter; Emery, William J.

    1990-01-01

    A statistical retrieval technique is developed to derive the atmospheric water vapor column content from the Special Sensor Microwave/Imager (SSM/I) measurements. The radiometer signals are simulated by means of radiative-transfer calculations for a large set of atmospheric/oceanic situations. These simulated responses are subsequently summarized by multivariate analyses, giving water-vapor coefficients and error estimates. Radiative-transfer calculations show that the SSM/I microwave imager can detect atmospheric water vapor structures with an accuracy from 0.145 to 0.17 g/sq cm. The accuracy of the method is confirmed by globally distributed match-ups with radiosonde measurements.

  7. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    NASA Astrophysics Data System (ADS)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley slope asymmetry is most strongly predicted by asymmetries of insolation and drainage density, which generally supports a water-balance based conceptual model of valley asymmetry development. Surprisingly, vegetation asymmetries had relatively low predictive importance.
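
    A random-forest importance analysis of the kind described takes only a few lines with scikit-learn. The sketch below is schematic: the three predictors are invented stand-ins for the study's several hundred asymmetry measures, and the response is synthetic.

        import numpy as np
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(5)
        n = 2000  # stand-in for the ~10,000 mapped valley segments
        X = pd.DataFrame({
            "insolation_asym": rng.normal(size=n),
            "drainage_density_asym": rng.normal(size=n),
            "vegetation_asym": rng.normal(size=n),
        })
        # synthetic response loosely mimicking the reported pattern
        y = (0.8 * X["insolation_asym"] + 0.5 * X["drainage_density_asym"]
             + 0.1 * X["vegetation_asym"] + rng.normal(0, 0.5, n))

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
        for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                                key=lambda t: -t[1]):
            print(f"{name:>22s}: {imp:.2f}")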

  8. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.

  9. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964
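
    The resampling technique described in both versions of this record can be illustrated with a nonparametric bootstrap of a total-cost estimate. A minimal sketch under simple random sampling (invented facility costs; a real costing survey would respect its strata, weights, and calibration totals):

        import numpy as np

        rng = np.random.default_rng(42)
        N_FACILITIES = 400                     # facilities in the whole program
        sample = rng.lognormal(9.0, 0.7, 60)   # invented costs at 60 sampled sites

        def total_cost(costs):
            # expand the sample mean to the full program
            return N_FACILITIES * costs.mean()

        # bootstrap: resample facilities with replacement, recompute the total
        boot = np.array([total_cost(rng.choice(sample, len(sample), replace=True))
                         for _ in range(5000)])

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"estimate {total_cost(sample):,.0f}, SE {boot.std(ddof=1):,.0f}, "
              f"95% CI ({lo:,.0f}, {hi:,.0f})")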

  10. Within-individual versus between-individual predictors of antisocial behaviour: A longitudinal study of young people in Victoria, Australia

    PubMed Central

    Hemphill, Sheryl A; Heerde, Jessica A; Herrenkohl, Todd I; Farrington, David P

    2016-01-01

    In an influential 2002 paper, Farrington and colleagues argued that to understand ‘causes’ of delinquency, within-individual analyses of longitudinal data are required (compared to the vast majority of analyses that have focused on between-individual differences). The current paper aimed to complete similar analyses to those conducted by Farrington and colleagues by focusing on the developmental correlates and risk factors for antisocial behaviour and by comparing within-individual and between-individual predictors of antisocial behaviour using data from the youngest Victorian cohort of the International Youth Development Study, a state-wide representative sample of 927 students from Victoria, Australia. Data analysed in the current paper are from participants in Year 6 (age 11–12 years) in 2003 to Year 11 (age 16–17 years) in 2008 (N = 791; 85% retention) with data collected almost annually. Participants completed a self-report survey of risk and protective factors and antisocial behaviour. Complete data were available for 563 participants. The results of this study showed all but one of the forward- (family conflict) and backward-lagged (low attachment to parents) correlations were statistically significant for the within-individual analyses compared with all analyses being statistically significant for the between-individual analyses. In general, between-individual correlations were greater in magnitude than within-individual correlations. Given that forward-lagged within-individual correlations provide more salient measures of causes of delinquency, it is important that longitudinal studies with multi-wave data analyse and report their data using both between-individual and within-individual correlations to inform current prevention and early intervention programs seeking to reduce rates of antisocial behaviour. PMID:28123186
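
    The within- versus between-individual contrast at the heart of this study comes down to person-mean centering: between-individual correlations use each person's averages across waves, while within-individual correlations use wave-by-wave deviations from those averages. A hedged sketch on invented multi-wave data (constructed so that only the between-individual association exists):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(11)
        people, waves = 200, 5
        pid = np.repeat(np.arange(people), waves)

        # a person-level trait drives both variables; wave-level noise is independent
        trait = rng.normal(size=people)
        df = pd.DataFrame({
            "pid": pid,
            "risk": trait[pid] + rng.normal(0, 1, people * waves),
            "behav": trait[pid] + rng.normal(0, 1, people * waves),
        })

        # between-individual: correlate person means (one pair per person)
        between = df.groupby("pid")[["risk", "behav"]].mean().corr().iloc[0, 1]
        # within-individual: correlate deviations from each person's own means
        dev = df[["risk", "behav"]] - df.groupby("pid")[["risk", "behav"]].transform("mean")
        within = dev.corr().iloc[0, 1]
        print(f"between r = {between:.2f}, within r = {within:.2f}")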

  11. Objectively measured sedentary time among five ethnic groups in Amsterdam: The HELIUS study

    PubMed Central

    Nicolaou, Mary; Snijder, Marieke B.; Peters, Ron J. G.; Stronks, Karien; Langøien, Lars J.; van der Ploeg, Hidde P.; Brug, Johannes; Lakerveld, Jeroen

    2017-01-01

Introduction: Sedentary behaviour is increasingly recognised as a health risk. While differences in this behaviour might help explain ethnic differences in disease profiles, studies on sedentary behaviour in ethnic minorities are scarce. The aim of this study was to compare the levels and the socio-demographic and lifestyle-related correlates of objectively measured sedentary time among five ethnic groups in Amsterdam, the Netherlands. Methods: Data were collected as part of the HELIUS study. The sample consisted of adults from a Dutch, Moroccan, African Surinamese, South-Asian Surinamese and Turkish ethnic origin. Data were collected by questionnaire, physical examination, and a combined heart rate and accelerometry monitor (Actiheart). Sedentary time was defined as waking time spent on activities of <1.5 metabolic equivalents. Ethnic differences in the levels of sedentary time were tested using ANOVA and ANCOVA analyses, while ethnic differences in the correlates of sedentary time were tested with interactions between ethnicity and potential correlates using general linear models. Associations between these correlates and sedentary time were explored using linear regression analyses stratified by ethnicity (pre-determined). All analyses were adjusted for gender and age. Results: 447 participants were included in the analyses, ranging from 73 to 109 participants per ethnic group. Adjusted levels of sedentary time ranged from 569 minutes/day (9.5 hours/day) for participants with a Moroccan and Turkish origin to 621 minutes/day (10.3 hours/day) in African Surinamese participants. There were no statistically significant differences in the levels or correlates of sedentary time between the ethnic groups. Meeting the physical activity recommendations (150 minutes/week) was consistently inversely associated with sedentary time across all ethnic groups, while age was positively associated with sedentary time in most groups. Conclusions: No statistically significant differences in the levels of objectively measured sedentary time or its socio-demographic and lifestyle-related correlates were observed among five ethnic groups in Amsterdam, the Netherlands. PMID:28759597

  12. The Work-Family Conflict Scale (WAFCS): development and initial validation of a self-report measure of work-family conflict for use with parents.

    PubMed

    Haslam, Divna; Filus, Ania; Morawska, Alina; Sanders, Matthew R; Fletcher, Renee

    2015-06-01

This paper outlines the development and validation of the Work-Family Conflict Scale (WAFCS) designed to measure work-to-family conflict (WFC) and family-to-work conflict (FWC) for use with parents of young children. An expert informant and consumer feedback approach was utilised to develop and refine 20 items, which were subjected to a rigorous validation process using two separate samples of parents of 2-12-year-old children (n = 305 and n = 264). As a result of statistical analyses, several items were dropped, resulting in a brief 10-item scale comprising two subscales assessing theoretically distinct but related constructs: FWC (five items) and WFC (five items). Analyses revealed both subscales have good internal consistency and construct validity, as well as concurrent and predictive validity. The results indicate the WAFCS is a promising brief measure for the assessment of work-family conflict in parents. Benefits of the measure as well as potential uses are discussed.
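
    The internal consistency reported for the two five-item subscales is conventionally Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal sketch (invented 1-7 Likert responses, not the WAFCS data):

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
            total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
            return k / (k - 1) * (1 - item_var / total_var)

        rng = np.random.default_rng(2)
        latent = rng.normal(size=300)  # latent conflict level per respondent
        # five invented Likert items loading on the same construct
        items = np.clip(np.round(4 + latent[:, None]
                                 + rng.normal(0, 0.8, (300, 5))), 1, 7)
        print(f"alpha = {cronbach_alpha(items):.2f}")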

  13. Elastic properties and apparent density of human edentulous maxilla and mandible

    PubMed Central

    Seong, Wook-Jin; Kim, Uk-Kyu; Swift, James Q.; Heo, Young-Cheul; Hodges, James S.; Ko, Ching-Chang

    2009-01-01

The aim of this study was to determine whether elastic properties and apparent density of bone differ in different anatomical regions of the maxilla and mandible. Additional analyses assessed how elastic properties and apparent density were related. Four pairs of edentulous maxilla and mandibles were retrieved from fresh human cadavers. Bone samples from four anatomical regions (maxillary anterior, maxillary posterior, mandibular anterior, mandibular posterior) were obtained. Elastic modulus (EM) and hardness (H) were measured using the nano-indentation technique. Bone samples containing cortical and trabecular bone were used to measure composite apparent density (cAD) using Archimedes’ principle. Statistical analyses used repeated measures ANOVA and Pearson correlations. Bone physical properties differed between regions of the maxilla and mandible. Generally, mandible had higher physical property measurements than maxilla. EM and H were higher in posterior than in anterior regions; the reverse was true for cAD. Posterior maxillary cAD was significantly lower than that in the three other regions. PMID:19647417

  14. Elastic properties and apparent density of human edentulous maxilla and mandible.

    PubMed

    Seong, W-J; Kim, U-K; Swift, J Q; Heo, Y-C; Hodges, J S; Ko, C-C

    2009-10-01

    The aim of this study was to determine whether elastic properties and apparent density of bone differ in different anatomical regions of the maxilla and mandible. Additional analyses assessed how elastic properties and apparent density were related. Four pairs of edentulous maxilla and mandibles were retrieved from fresh human cadavers. Bone samples from four anatomical regions (maxillary anterior, maxillary posterior, mandibular anterior, mandibular posterior) were obtained. Elastic modulus (EM) and hardness (H) were measured using the nano-indentation technique. Bone samples containing cortical and trabecular bone were used to measure composite apparent density (cAD) using Archimedes' principle. Statistical analyses used repeated measures ANOVA and Pearson correlations. Bone physical properties differed between regions of the maxilla and mandible. Generally, mandible had higher physical property measurements than maxilla. EM and H were higher in posterior than in anterior regions; the reverse was true for cAD. Posterior maxillary cAD was significantly lower than that in the three other regions.

  15. Analysis of temperature-dependent neutron transmission and self-indication measurements on tantalum at 2-keV neutron energy

    NASA Technical Reports Server (NTRS)

    Semler, T. T.

    1973-01-01

The method of pseudo-resonance cross sections is used to analyze published temperature-dependent neutron transmission and self-indication measurements on tantalum in the unresolved region. In the energy region analyzed, 1825.0 to 2017.0 eV, a direct application of the pseudo-resonance approach using a customary average strength function will not provide effective cross sections which fit the measured cross section behavior. Rather, a local value of the strength function is required, and a set of resonances which model the measured behavior of the effective cross sections is derived. This derived set of resonance parameters adequately represents the observed resonance behavior in this local energy region. Similar analyses for the measurements in other unresolved energy regions are necessary to obtain local resonance parameters for improved reactor calculations. This study suggests that Doppler coefficients calculated by sampling from grand average statistical distributions over the entire unresolved resonance region can be in error, since significant local variations in the statistical distributions are not taken into consideration.

  16. What is preexisting strength? Predicting free association probabilities, similarity ratings, and cued recall probabilities.

    PubMed

    Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B

    2005-08-01

    Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.

  17. Toward standardized reporting for a cohort study on functioning: The Swiss Spinal Cord Injury Cohort Study.

    PubMed

    Prodinger, Birgit; Ballert, Carolina S; Brach, Mirjam; Brinkhof, Martin W G; Cieza, Alarcos; Hug, Kerstin; Jordan, Xavier; Post, Marcel W M; Scheel-Sailer, Anke; Schubert, Martin; Tennant, Alan; Stucki, Gerold

    2016-02-01

    Functioning is an important outcome to measure in cohort studies. Clear and operational outcomes are needed to judge the quality of a cohort study. This paper outlines guiding principles for reporting functioning in cohort studies and addresses some outstanding issues. Principles of how to standardize reporting of data from a cohort study on functioning, by deriving scores that are most useful for further statistical analysis and reporting, are outlined. The Swiss Spinal Cord Injury Cohort Study Community Survey serves as a case in point to provide a practical application of these principles. Development of reporting scores must be conceptually coherent and metrically sound. The International Classification of Functioning, Disability and Health (ICF) can serve as the frame of reference for this, with its categories serving as reference units for reporting. To derive a score for further statistical analysis and reporting, items measuring a single latent trait must be invariant across groups. The Rasch measurement model is well suited to test these assumptions. Our approach is a valuable guide for researchers and clinicians, as it fosters comparability of data, strengthens the comprehensiveness of scope, and provides invariant, interval-scaled data for further statistical analyses of functioning.
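
    The Rasch model the authors rely on can be stated compactly: the probability of endorsing item i depends only on the difference between person ability and item difficulty on a common logit scale. A minimal sketch of the dichotomous model with illustrative values:

```python
import numpy as np

def rasch_prob(theta: float, b: np.ndarray) -> np.ndarray:
    """P(X_i = 1 | theta) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical item difficulties on the logit scale.
difficulties = np.array([-1.5, -0.5, 0.0, 0.8, 2.0])
for theta in (-1.0, 0.0, 1.0):
    print(theta, np.round(rasch_prob(theta, difficulties), 2))
```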

  18. Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls

    PubMed Central

    Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.

    2013-01-01

    As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses, and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half as long to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
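
    The kernel-distance idea can be sketched as a quadratic form: observed-minus-expected case counts per variant, weighted by a kernel of genomic distance so that nearby variants reinforce each other. The sketch below is illustrative only (the Gaussian kernel, bandwidth, and data are assumptions; the published statistic's exact standardisation and its scaled chi-square approximation are not reproduced):

```python
import numpy as np

def kernel_distance_stat(pos, cases, controls, tau=5_000.0):
    """Quadratic-form clustering statistic Q = d' K d, where d is the
    observed-minus-expected case count per variant and K is a Gaussian
    kernel of genomic distance (bandwidth tau, in base pairs)."""
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    total = cases + controls
    # Expected case counts per variant if case status is unrelated to position.
    expected = total * cases.sum() / total.sum()
    d = cases - expected
    dist = np.abs(np.subtract.outer(pos, pos))
    K = np.exp(-(dist / tau) ** 2)  # nearby variants reinforce each other
    return d @ K @ d

rng = np.random.default_rng(0)
pos = np.sort(rng.integers(0, 100_000, size=30))  # variant coordinates
cases = rng.poisson(2, size=30)                   # minor-allele counts, cases
controls = rng.poisson(2, size=30)                # minor-allele counts, controls
print(kernel_distance_stat(pos, cases, controls))
# Permuting case/control labels would give a reference distribution.
```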

  19. Quantifying the impact of between-study heterogeneity in multivariate meta-analyses

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2012-01-01

    Measures that quantify the impact of heterogeneity in univariate meta-analysis, including the very popular I2 statistic, are now well established. Multivariate meta-analysis, where studies provide multiple outcomes that are pooled in a single analysis, is also becoming more commonly used. The question of how to quantify heterogeneity in the multivariate setting is therefore raised. It is the univariate R2 statistic, the ratio of the variance of the estimated treatment effect under the random and fixed effects models, that generalises most naturally, so this statistic provides our basis. This statistic is then used to derive a multivariate analogue of I2, which we call I2R. We also provide a multivariate H2 statistic, the ratio of a generalisation of Cochran's heterogeneity statistic and its associated degrees of freedom, with an accompanying generalisation of the usual I2 statistic, I2H. Our proposed heterogeneity statistics can be used alongside all the usual estimates and inferential procedures used in multivariate meta-analysis. We apply our methods to some real datasets and show how our statistics are equally appropriate in the context of multivariate meta-regression, where study level covariate effects are included in the model. Our heterogeneity statistics may be used when applying any procedure for fitting the multivariate random effects model. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22763950
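
    In the univariate case, the R2 statistic that the authors generalise is straightforward to compute: fit the fixed-effect and random-effects models and take the ratio of the variances of the two pooled estimates. A sketch with made-up study data, using the DerSimonian-Laird estimate of the between-study variance (the abstract itself is agnostic about the fitting procedure):

```python
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.60])   # hypothetical study effects
v = np.array([0.01, 0.02, 0.015, 0.03, 0.02])  # within-study variances

# Fixed-effect model: weights 1/v_i; variance of pooled estimate = 1/sum(w).
w_fe = 1.0 / v
var_fe = 1.0 / w_fe.sum()

# DerSimonian-Laird estimate of the between-study variance tau^2.
q = np.sum(w_fe * (y - np.average(y, weights=w_fe)) ** 2)
c = w_fe.sum() - (w_fe ** 2).sum() / w_fe.sum()
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects model: weights 1/(v_i + tau^2).
w_re = 1.0 / (v + tau2)
var_re = 1.0 / w_re.sum()

r2 = var_re / var_fe  # inflation of the pooled variance due to heterogeneity
print(f"tau^2 = {tau2:.4f}, R2 = {r2:.2f}")
```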

  20. A Quantitative Analysis of Latino Acculturation and Alcohol Use: Myth Versus Reality.

    PubMed

    Alvarez, Miriam J; Frietze, Gabriel; Ramos, Corin; Field, Craig; Zárate, Michael A

    2017-07-01

    Research on health among Latinos often focuses on acculturation processes and the associated stressors that influence drinking behavior. Given the common use of acculturation measures and the state of the knowledge on alcohol-related health among Latino populations, the current analyses tested the efficacy of acculturation measures to predict various indicators of alcohol consumption. Specifically, this quantitative review assessed the predictive utility of acculturation on alcohol consumption behaviors (frequency, volume, and quantity). Two main analyses were conducted: a p-curve analysis and a meta-analysis of the observed associations between acculturation and drinking behavior. Results demonstrated that current measures of acculturation are a statistically significant predictor of alcohol use (Z = -20.75, p < 0.0001). The meta-analysis included a cumulative sample size of 29,589 Latino participants across 31 studies. A random-effects model yielded a weighted average correlation of 0.16 (95% confidence interval = 0.12, 0.19). Additional subgroup analyses examined the effects of gender and using different scales to measure acculturation. Altogether, results demonstrated that acculturation is a useful predictor of alcohol use. In addition, the meta-analysis revealed that a small positive correlation exists between acculturation and alcohol use in Latinos with a between-study variance of only 1.5% (τ² = 0.015). Our analyses reveal that the association between current measures of acculturation and alcohol use is relatively small. Copyright © 2017 by the Research Society on Alcoholism.

  1. Survey mode matters: adults' self-reported statistical confidence, ability to obtain health information, and perceptions of patient-health-care provider communication.

    PubMed

    Wallace, Lorraine S; Chisolm, Deena J; Abdel-Rasoul, Mahmoud; DeVoe, Jennifer E

    2013-08-01

    This study examined adults' self-reported understanding and formatting preferences of medical statistics, confidence in self-care and ability to obtain health advice or information, and perceptions of patient-health-care provider communication measured through dual survey modes (random digit dial and mail). Even while controlling for sociodemographic characteristics, significant differences in regard to adults' responses to survey variables emerged as a function of survey mode. While the analyses do not allow us to pinpoint the underlying causes of the differences observed, they do suggest that mode of administration should be carefully adjusted for and considered.

  2. Crowdsourcing awareness: exploration of the ovarian cancer knowledge gap through Amazon Mechanical Turk.

    PubMed

    Carter, Rebecca R; DiFeo, Analisa; Bogie, Kath; Zhang, Guo-Qiang; Sun, Jiayang

    2014-01-01

    Ovarian cancer is the most lethal gynecologic disease in the United States, with more women dying from this cancer than all gynecological cancers combined. Ovarian cancer has been termed the "silent killer" because some patients do not show clear symptoms at an early stage. Currently, there is a lack of approved and effective early diagnostic tools for ovarian cancer. There is also an apparent severe knowledge gap of ovarian cancer in general and of its indicative symptoms among both the public and many health professionals. These factors have significantly contributed to the late stage diagnosis of most ovarian cancer patients (63% are diagnosed at Stage III or above), where the 5-year survival rate is less than 30%. The extent of this knowledge gap in the United States has not been quantified. The present investigation examined current public awareness and knowledge about ovarian cancer. The study implemented design strategies to develop an unbiased survey with quality control measures, including the modern application of multiple statistical analyses. The survey assessed a reasonable proxy of the US population by crowdsourcing participants through the online task marketplace Amazon Mechanical Turk, at a fraction of the cost and time of traditional recruitment methods. Knowledge of ovarian cancer was compared to that of breast cancer using repeated measures, bias control and other quality control measures in the survey design. Analyses included multinomial logistic regression and categorical data analysis procedures such as correspondence analysis, among other statistics. We confirmed the relatively poor public knowledge of ovarian cancer among the US population. The simple yet novel design should set an example for designing surveys to obtain quality data via Amazon Mechanical Turk with the associated analyses.

  3. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    PubMed

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

    The ideal toxicity biomarker combines prediction (detection prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.
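
    Of the univariate filters named above, the t-statistic and fold-change are simple to reproduce; a sketch on a synthetic expression matrix (B-statistics and RankProd need dedicated packages and are omitted, and the thresholds here are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical log2 expression: 500 probes x (6 toxicant-treated, 6 controls).
treated = rng.normal(0.0, 1.0, size=(500, 6))
control = rng.normal(0.0, 1.0, size=(500, 6))
treated[:25] += 1.5  # spike in 25 "responsive" probes

t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
log2_fc = treated.mean(axis=1) - control.mean(axis=1)  # difference of log2 means

# Combine the two filters: significant AND at least 2-fold changed.
selected = np.where((p_val < 0.01) & (np.abs(log2_fc) >= 1.0))[0]
print(f"{selected.size} probes selected")
```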

  4. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
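
    Regression calibration, the most common correction identified by this review, is a two-stage procedure: regress the true exposure on the error-prone measurement in the calibration substudy, then use the predicted exposure in the disease model. A minimal sketch on simulated data under a classical, nondifferential error model (all names and values are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2_000
true_x = rng.normal(0, 1, n)                 # true long-term intake
obs_x = true_x + rng.normal(0, 1, n)         # FFQ-style measurement with error
y = 0.5 * true_x + rng.normal(0, 1, n)       # outcome depends on true intake

# Stage 1: in a calibration substudy (here the first 300 subjects, where a
# reference instrument recovers true_x), regress truth on the noisy measure.
cal = slice(0, 300)
stage1 = sm.OLS(true_x[cal], sm.add_constant(obs_x[cal])).fit()
x_hat = stage1.predict(sm.add_constant(obs_x))  # E[X | observed] for everyone

naive = sm.OLS(y, sm.add_constant(obs_x)).fit()
corrected = sm.OLS(y, sm.add_constant(x_hat)).fit()
print(f"naive slope = {naive.params[1]:.2f} (attenuated), "
      f"calibrated slope = {corrected.params[1]:.2f} (near 0.5)")
```

    The second-stage standard errors ignore the uncertainty carried over from stage 1; in practice both stages would be bootstrapped together.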

  5. Sampling surface and subsurface particle-size distributions in wadable gravel- and cobble-bed streams for analyses in sediment transport, hydraulics, and streambed monitoring

    Treesearch

    Kristin Bunte; Steven R. Abt

    2001-01-01

    This document provides guidance for sampling surface and subsurface sediment from wadable gravel- and cobble-bed streams. After a short introduction to stream types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...

  6. Dropouts from the Great City Schools Vol. 1. Technical Analyses of Dropout Statistics in Selected Districts.

    ERIC Educational Resources Information Center

    Stevens, Floraline, Comp.

    To address the important issue of dropouts from their schools, the Council of Great City Schools undertook a major research effort to make sense of the disparate ways in which cities keep their dropout data, and to advise various policy makers on the development of common metrics for measuring the problem. A survey of Council member schools…

  7. A Vignette (User’s Guide) for “An R Package for Statistical Analysis of Chemistry, Histopathology, and Reproduction Endpoints Including Repeated Measures and Multi-Generation Studies (StatCharrms).”

    EPA Science Inventory

    StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, Medaka Extended One Generation Reproduction Test (MEOGRT) and OCSPP 890.2300, Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of perfor...

  8. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 to 17.5 percent for the equations to predict the 7-day, 10-year low flow and 50-percent duration flow, respectively. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a data base and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. 
Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
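
    Two of the estimation methods described above are compact enough to sketch: the drainage-area ratio method scales a statistic from an index station by the ratio of drainage areas, and MOVE.1 transfers information through a line that preserves the mean and variance of the short record. Values are illustrative; the report's regression equations are not reproduced:

```python
import numpy as np

def drainage_area_ratio(q_index: float, a_index: float, a_ungaged: float) -> float:
    """Scale a flow statistic from an index site by drainage area.
    Recommended when 0.3 <= a_ungaged / a_index <= 1.5."""
    return q_index * (a_ungaged / a_index)

def move1(x_partial, y_partial, x_new):
    """MOVE.1 (Maintenance of Variance Extension, type 1): a line through the
    means with slope s_y / s_x, preserving the variance of the y record."""
    slope = np.std(y_partial, ddof=1) / np.std(x_partial, ddof=1)
    sign = np.sign(np.corrcoef(x_partial, y_partial)[0, 1])
    return y_partial.mean() + sign * slope * (x_new - x_partial.mean())

# Hypothetical 7Q10 of 2.4 ft3/s at a 50 mi2 index basin, ungaged site of 35 mi2:
print(drainage_area_ratio(2.4, 50.0, 35.0))
# Hypothetical paired low flows (index station x, partial-record station y):
x = np.array([3.1, 2.4, 4.0, 2.9, 3.5, 2.2, 3.8])
y = np.array([1.2, 0.9, 1.7, 1.1, 1.4, 0.8, 1.6])
print(move1(x, y, np.array([2.0, 3.0, 4.5])))
```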

  9. Altered white matter development in children born very preterm.

    PubMed

    Young, Julia M; Vandewouw, Marlee M; Morgan, Benjamin R; Smith, Mary Lou; Sled, John G; Taylor, Margot J

    2018-06-01

    Children born very preterm (VPT) at less than 32 weeks' gestational age (GA) are prone to disrupted white matter maturation and impaired cognitive development. The aims of the present study were to identify differences in white matter microstructure and connectivity of children born VPT compared to term-born children, as well as relations of white matter measures with cognitive outcomes and early brain injury. Diffusion images and T1-weighted anatomical MR images were acquired along with developmental assessments in 31 VPT children (mean GA: 28.76 weeks) and 28 term-born children at 4 years of age. FSL's tract-based spatial statistics was used to create a cohort-specific template and mean fractional anisotropy (FA) skeleton that was applied to each child's DTI data. Whole brain deterministic tractography was performed and graph theoretical measures of connectivity were calculated based on the number of streamlines between cortical and subcortical nodes derived from the Desikan-Killiany atlas. Between-group analyses included FSL Randomise for voxel-wise statistics and permutation testing for connectivity analyses. Within-group analyses related FA values and graph measures to IQ, language and visual-motor scores, as well as to history of white matter injury (WMI) and germinal matrix/intraventricular haemorrhage (GMH/IVH). In the children born VPT, FA values within major white matter tracts were reduced compared to term-born children. Reduced measures of local strength, clustering coefficient, local and global efficiency were present in the children born VPT within nodes in the lateral frontal, middle and superior temporal, cingulate, precuneus and lateral occipital regions. Within-group analyses revealed associations in term-born children between FA, Verbal IQ, Performance IQ and Full scale IQ within regions of the superior longitudinal fasciculus, inferior fronto-occipital fasciculus, forceps minor and forceps major. No associations with outcome were found in the VPT group. Global efficiency was reduced in the children born VPT with a history of WMI and GMH/IVH. These findings provide evidence of underdeveloped and less connected white matter in children born VPT, contributing to our understanding of white matter development within this population.
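
    The graph measures reported here are available in standard network libraries. A sketch with networkx on a hypothetical streamline-count matrix, binarised by an arbitrary threshold (the study's exact weighting and thresholding are not specified in this record):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_nodes = 20
streamlines = rng.poisson(5, size=(n_nodes, n_nodes))  # hypothetical counts
streamlines = np.triu(streamlines, k=1)
streamlines += streamlines.T                           # symmetric, zero diagonal

adjacency = (streamlines >= 5).astype(int)             # binarise: >= 5 streamlines
G = nx.from_numpy_array(adjacency)

print("clustering coefficient:", nx.average_clustering(G))
print("global efficiency:    ", nx.global_efficiency(G))
print("local efficiency:     ", nx.local_efficiency(G))
# "Strength" of a node in the weighted graph = sum of its streamline counts:
print("node strengths:", streamlines.sum(axis=0))
```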

  10. Confounding in statistical mediation analysis: What it is and how to address it.

    PubMed

    Valente, Matthew J; Pelham, William E; Smyth, Heather; MacKinnon, David P

    2017-11-01

    Psychology researchers are often interested in mechanisms underlying how randomized interventions affect outcomes such as substance use and mental health. Mediation analysis is a common statistical method for investigating psychological mechanisms that has benefited from exciting new methodological improvements over the last 2 decades. One of the most important new developments is methodology for estimating causal mediated effects using the potential outcomes framework for causal inference. Potential outcomes-based methods developed in epidemiology and statistics have important implications for understanding psychological mechanisms. We aim to provide a concise introduction to and illustration of these new methods and emphasize the importance of confounder adjustment. First, we review the traditional regression approach for estimating mediated effects. Second, we describe the potential outcomes framework. Third, we define what a confounder is and how the presence of a confounder can provide misleading evidence regarding mechanisms of interventions. Fourth, we describe experimental designs that can help rule out confounder bias. Fifth, we describe new statistical approaches to adjust for measured confounders of the mediator-outcome relation and sensitivity analyses to probe effects of unmeasured confounders on the mediated effect. All approaches are illustrated with application to a real counseling intervention dataset. Counseling psychologists interested in understanding the causal mechanisms of their interventions can benefit from incorporating the most up-to-date techniques into their mediation analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
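
    The traditional regression approach reviewed first in the paper estimates the mediated effect as the product of the intervention-to-mediator path (a) and the mediator-to-outcome path adjusted for the intervention (b). A minimal sketch on simulated data; variable names are hypothetical, and a real analysis would add measured confounders to both regressions, as the authors emphasise:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.integers(0, 2, n).astype(float)      # randomized intervention
m = 0.6 * x + rng.normal(0, 1, n)            # mediator (e.g., coping skills)
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)  # outcome (e.g., substance use)

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # M -> Y | X
print(f"mediated effect a*b = {a * b:.3f} (true value 0.24)")
# Inference on a*b typically uses the bootstrap or the distribution-of-the-
# product method rather than a normal approximation.
```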

  11. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
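
    The usual route to the Weibull modulus m and characteristic strength σ0 is a linear fit to the linearised two-parameter distribution, ln ln(1/(1 - F)) = m ln σ - m ln σ0. A sketch with hypothetical flexural strengths and one common choice of plotting position:

```python
import numpy as np

# Hypothetical flexural strengths (MPa), sorted ascending.
strengths = np.sort(np.array([412., 448., 465., 489., 501.,
                              515., 537., 558., 574., 603.]))
n = strengths.size
f = (np.arange(1, n + 1) - 0.5) / n      # plotting positions F_i = (i - 0.5)/n

x = np.log(strengths)
y = np.log(-np.log(1.0 - f))             # ln ln(1/(1 - F))
m, intercept = np.polyfit(x, y, 1)       # slope = Weibull modulus m
sigma0 = np.exp(-intercept / m)          # characteristic strength (F = 63.2%)
print(f"Weibull modulus m = {m:.1f}, sigma_0 = {sigma0:.0f} MPa")
```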

  12. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    N=8) and Results of 44 Statistical Analyses for Impact Test Performed on Forefoot of Unworn Footwear A-2. Summary Statistics (N=8) and Results of...on Forefoot of Worn Footwear... B-2. Summary Statistics (N=4) and Results of 76 Statistical Analyses for Impact...used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was

  13. Evaluating test-retest reliability in patient-reported outcome measures for older people: A systematic review.

    PubMed

    Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju

    2018-03-01

    This study aimed to evaluate the components of test-retest reliability including time interval, sample size, and statistical methods used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.90) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items. Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability. Copyright © 2017 Elsevier Ltd. All rights reserved.
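
    ICC computation is mechanical once the model is chosen. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure), one common choice for test-retest designs, computed from the classical mean squares on hypothetical data; a library such as pingouin would also supply confidence intervals:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1), two-way random effects, absolute agreement, single measure.
    `scores` is subjects x occasions."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(5)
true = rng.normal(50, 10, size=(40, 1))          # 40 older adults
scores = true + rng.normal(0, 3, size=(40, 2))   # test and 13-day retest
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```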

  14. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background: Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well-known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods: We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results: Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions: Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
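
    The standard quantities reviewed here follow directly from study estimates and standard errors: Cochran's Q under fixed-effect weights, I2 = max(0, (Q - df)/Q), and a generalised Q in which the weights allow a non-zero between-study variance. A sketch with made-up trial data:

```python
import numpy as np

y = np.array([-0.22, -0.05, -0.31, 0.02, -0.18, -0.40])  # hypothetical log hazard ratios
se = np.array([0.10, 0.12, 0.09, 0.15, 0.11, 0.13])

w = 1.0 / se**2                             # fixed-effect weights
mu_fe = np.average(y, weights=w)
q = np.sum(w * (y - mu_fe) ** 2)            # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q)                 # I^2, as a proportion
print(f"Q = {q:.2f} on {df} df, I^2 = {100 * i2:.0f}%")

def generalised_q(tau2: float) -> float:
    """Q recomputed with weights 1/(se^2 + tau2)."""
    w_g = 1.0 / (se**2 + tau2)
    return np.sum(w_g * (y - np.average(y, weights=w_g)) ** 2)

# Solving generalised_q(tau2) = df for tau2 gives the Mandel-Paule estimate.
print(f"generalised Q at tau^2 = 0.01: {generalised_q(0.01):.2f}")
```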

  15. Intermediate and advanced topics in multilevel logistic regression analysis.

    PubMed

    Austin, Peter C; Merlo, Juan

    2017-09-10

    Multilevel data occur frequently in health services, population and public health, and epidemiologic research. In such research, binary outcomes are common. Multilevel logistic regression models allow one to account for the clustering of subjects within clusters of higher-level units when estimating the effect of subject and cluster characteristics on subject outcomes. A search of the PubMed database demonstrated that the use of multilevel or hierarchical regression models is increasing rapidly. However, our impression is that many analysts simply use multilevel regression models to account for the nuisance of within-cluster homogeneity that is induced by clustering. In this article, we describe a suite of analyses that can complement the fitting of multilevel logistic regression models. These ancillary analyses permit analysts to estimate the marginal or population-average effect of covariates measured at the subject and cluster level, in contrast to the within-cluster or cluster-specific effects arising from the original multilevel logistic regression model. We describe the interval odds ratio and the proportion of opposed odds ratios, which are summary measures of effect for cluster-level covariates. We describe the variance partition coefficient and the median odds ratio, which are measures of components of variance and heterogeneity in outcomes. These measures allow one to quantify the magnitude of the general contextual effect. We describe an R2 measure that allows analysts to quantify the proportion of variation explained by different multilevel logistic regression models. We illustrate the application and interpretation of these measures by analyzing mortality in patients hospitalized with a diagnosis of acute myocardial infarction. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
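
    Two of the summary measures described here have closed forms in the cluster-level random-intercept variance: the variance partition coefficient on the latent-response scale, VPC = σ²u / (σ²u + π²/3), and the median odds ratio, MOR = exp(√(2σ²u) · z0.75), where z0.75 is the 75th percentile of the standard normal. A sketch under those standard formulas (the random-intercept variances are illustrative):

```python
import math
from scipy.stats import norm

def vpc(sigma2_u: float) -> float:
    """Variance partition coefficient for a multilevel logistic model,
    using the latent-response residual variance pi^2 / 3."""
    return sigma2_u / (sigma2_u + math.pi**2 / 3)

def median_odds_ratio(sigma2_u: float) -> float:
    """Median odds ratio between two randomly chosen clusters."""
    return math.exp(math.sqrt(2 * sigma2_u) * norm.ppf(0.75))

for s2 in (0.2, 0.5, 1.0):  # hypothetical random-intercept variances
    print(f"sigma^2_u = {s2}: VPC = {vpc(s2):.2f}, MOR = {median_odds_ratio(s2):.2f}")
```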

  16. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
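
    The power assessment performed in this review can be reproduced with standard tools: for a given per-group sample size and alpha, compute achieved power at Cohen's small, medium, and large effect sizes. A sketch using statsmodels, with a hypothetical n of 64 per group:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, d in (("small", 0.2), ("medium", 0.5), ("large", 0.8)):
    power = analysis.power(effect_size=d, nobs1=64, alpha=0.05, ratio=1.0)
    print(f"{label} effect (d = {d}): power = {power:.2f}")

# The reverse question: the n per group needed for 80% power at d = 0.5.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n per group: {n_needed:.0f}")
```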

  17. Methodological and Reporting Quality of Systematic Reviews and Meta-analyses in Endodontics.

    PubMed

    Nagendrababu, Venkateshbabu; Pulikkotil, Shaju Jacob; Sultan, Omer Sheriff; Jayaraman, Jayakumar; Peters, Ove A

    2018-06-01

    The aim of this systematic review (SR) was to evaluate the quality of SRs and meta-analyses (MAs) in endodontics. A comprehensive literature search was conducted to identify relevant articles in the electronic databases from January 2000 to June 2017. Two reviewers independently assessed the articles for eligibility and data extraction. SRs and MAs on interventional studies with a minimum of 2 therapeutic strategies in endodontics were included in this SR. Methodologic and reporting quality were assessed using A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA), respectively. The interobserver reliability was calculated using the Cohen kappa statistic. Statistical analysis with the level of significance at P < .05 was performed using Kruskal-Wallis tests and simple linear regression analysis. A total of 30 articles were selected for the current SR. Using AMSTAR, the item on using the scientific quality of the included studies when formulating conclusions was adhered to by fewer than 40% of studies. Using PRISMA, 3 items (objectives, protocol registration, and funding) were reported by fewer than 40% of studies. No association was evident between quality and either the number of authors or the country of origin. Statistical significance was observed when quality was compared among journals, with studies published as Cochrane reviews superior to those published in other journals. AMSTAR and PRISMA scores were significantly related. SRs in endodontics showed variability in both methodologic and reporting quality. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  18. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement.

    PubMed

    Angstman, Nicholas B; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans offers the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits is well established, analysis is often narrowed down to measurements based on a single tracked point, and thus cannot pick up subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced tracking software (WormLab), the full skeleton and outline of each worm were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found in worm behavior between those moving on NGM-agar plates with an E. coli lawn and those on NGM-agar plates with no lawn. Furthermore, multiple test groups showed differences in the interactions between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model to analyze C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design when performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as, but not limited to, toxicology, drug discovery, and RNAi screening.
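
    The nested design flagged in this abstract (worms within plates) is the textbook case for a random-intercept model. A minimal sketch with statsmodels on simulated data; all names and effect sizes are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
plates = np.repeat(np.arange(12), 20)            # 12 plates, 20 worms each
food = np.repeat([0, 1], 120)                    # lawn absent / present (plate-level)
plate_effect = rng.normal(0, 0.05, 12)[plates]   # shared within-plate noise
speed = 0.20 + 0.03 * food + plate_effect + rng.normal(0, 0.04, 240)

df = pd.DataFrame({"speed": speed, "food": food, "plate": plates})
model = smf.mixedlm("speed ~ food", df, groups=df["plate"]).fit()
print(model.summary())  # the food effect is tested against plate-level variation
```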

  19. Reservoir zonation based on statistical analyses: A case study of the Nubian sandstone, Gulf of Suez, Egypt

    NASA Astrophysics Data System (ADS)

    El Sharawy, Mohamed S.; Gaafar, Gamal R.

    2016-12-01

    Both reservoir engineers and petrophysicists have been concerned with dividing a reservoir into zones for engineering and petrophysics purposes. Through the decades, several techniques and approaches have been introduced. Of these, statistical reservoir zonation, the stratigraphic modified Lorenz (SML) plot, and principal component and cluster analyses were chosen and applied to the Nubian sandstone reservoir of Palaeozoic - Lower Cretaceous age, Gulf of Suez, Egypt, using five adjacent wells. The studied reservoir consists mainly of sandstone with some intercalation of shale layers with varying thickness from one well to another. The permeability ranged from less than 1 md to more than 1000 md. The statistical reservoir zonation technique, depending on core permeability, indicated that the cored interval of the studied reservoir can be divided into two zones. Using reservoir properties such as porosity, bulk density, acoustic impedance and interval transit time also indicated two zones, with an obvious variation in separation depth and zone continuity. The stratigraphic modified Lorenz (SML) plot indicated the presence of more than 9 flow units in the cored interval as well as a high degree of microscopic heterogeneity. On the other hand, principal component and cluster analyses, depending on well logging data (gamma ray, sonic, density and neutron), indicated that the whole reservoir can be divided into at least four electrofacies having a noticeable variation in reservoir quality, as correlated with the measured permeability. Furthermore, continuity or discontinuity of the reservoir zones can be determined using this analysis.
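
    The electrofacies workflow described here (principal components of the log suite, then clustering of the scores) maps directly onto standard tooling. A sketch with scikit-learn on synthetic logs, using four clusters as in the study; the log suite follows the abstract, the values do not:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Hypothetical well-log matrix: samples x (GR, sonic, density, neutron).
logs = np.column_stack([
    rng.normal(60, 25, 1_000),     # gamma ray, API
    rng.normal(80, 10, 1_000),     # sonic, us/ft
    rng.normal(2.45, 0.1, 1_000),  # bulk density, g/cm^3
    rng.normal(0.18, 0.05, 1_000)  # neutron porosity, v/v
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(logs))
electrofacies = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(electrofacies))  # samples per electrofacies
```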

  20. Validation of decisional balance and self-efficacy measures for HPV vaccination in college women.

    PubMed

    Lipschitz, Jessica M; Fernandez, Anne C; Larson, H Elsa; Blaney, Cerissa L; Meier, Kathy S; Redding, Colleen A; Prochaska, James O; Paiva, Andrea L

    2013-01-01

    Women younger than 25 years are at greatest risk for human papillomavirus (HPV) infection, including high-risk strains associated with 70% of cervical cancers. Effective model-based measures that can lead to intervention development to increase HPV vaccination rates are necessary. This study validated Transtheoretical Model measures of Decisional Balance and Self-Efficacy for seeking the HPV vaccine in a sample of female college students. Design: Cross-sectional measurement development. Setting: Online survey of undergraduate college students. Participants: A total of 340 female students ages 18 to 26 years. Measures: Stage of Change, Decisional Balance, and Self-Efficacy. The sample was randomly split into halves for exploratory principal components analyses (PCAs), followed by confirmatory factor analyses (CFAs) to test measurement models. Multivariate analyses examined relationships between constructs. For Decisional Balance, PCA indicated two 4-item factors (Pros: α = .90; Cons: α = .66). CFA supported a two-factor correlated model, χ²(19) = 39.33; p < .01; comparative fit index (CFI) = .97; and average absolute standardized residual statistic (AASR) = .03; with Pros α = .90 and Cons α = .67. For Self-Efficacy, PCA indicated one 6-item factor (α = .84). CFA supported this structure, χ²(9) = 50.87; p < .05; CFI = .94; AASR = .03; and α = .90. Multivariate analyses indicated significant cross-stage differences on Pros, Cons, and Self-Efficacy in expected directions. Findings support the internal and external validity of these measures and their use in Transtheoretical Model-tailored interventions. Stage-construct relationships suggest that reducing the Cons of vaccination may be more important for HPV than for behaviors with a true Maintenance stage.
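
    The internal-consistency values reported here are Cronbach's alpha, which follows directly from the item and total-score variances. A sketch on hypothetical item responses for a 4-item scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items. alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(8)
latent = rng.normal(0, 1, (340, 1))                # one latent trait
responses = latent + rng.normal(0, 0.7, (340, 4))  # a 4-item Pros-like scale
print(f"alpha = {cronbach_alpha(responses):.2f}")
```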

  1. A proposal for the measurement of graphical statistics effectiveness: Does it enhance or interfere with statistical reasoning?

    NASA Astrophysics Data System (ADS)

    Agus, M.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.

    2015-02-01

    Numerous studies have examined students' difficulties in understanding notions related to statistical problems. Some authors have observed that presenting distinct visual representations can improve statistical reasoning, supporting the principle of graphical facilitation. Other researchers disagree with this viewpoint, emphasising that illustrations can overload the cognitive system with irrelevant information. In this work we compare probabilistic statistical reasoning across two formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems in verbal-numerical and graphical formats to 311 undergraduate Psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation in probabilistic statistical reasoning. Each undergraduate solved every pair of problems in both formats, with problem presentation orders and sequences varied. Data analyses highlighted that the effect of graphical facilitation is infrequent in psychology undergraduates. The effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of an interaction between individual and task characteristics.

  2. Point-by-point compositional analysis for atom probe tomography.

    PubMed

    Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P

    2014-01-01

    This new alternative approach to data processing for analyses that traditionally employed grid-based counting methods is necessary because it removes a user-imposed coordinate system that not only limits an analysis but also may introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest neighbour identification, improving the measurements and the statistics obtained and allowing quantitative analysis of smaller datasets and of datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: (i) using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; (ii) 3D data visualisation of block composition and nearest neighbour anisotropy; and (iii) using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
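
    The k-nearest-neighbour block idea is easy to sketch with a k-d tree: each detected atom defines a spherical block of its k nearest neighbours, and the spread of block compositions can be compared against the binomial expectation for a random solid solution. All parameters below are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)
n_atoms, k = 20_000, 100
xyz = rng.uniform(0, 50, size=(n_atoms, 3))   # atom positions, nm
is_solute = rng.random(n_atoms) < 0.05        # 5% solute, here placed at random

tree = cKDTree(xyz)
# For each atom, its k nearest neighbours (the query includes the atom itself).
_, idx = tree.query(xyz, k=k)
block_comp = is_solute[idx].mean(axis=1)      # solute fraction per k-atom block

# Under a random solid solution, block counts are ~ Binomial(k, 0.05); the
# observed spread of block_comp can be z-scored against that expectation.
print(f"mean = {block_comp.mean():.3f}, sd = {block_comp.std():.4f}, "
      f"binomial sd = {np.sqrt(0.05 * 0.95 / k):.4f}")
```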

  3. Climate sensitivity to the lower stratospheric ozone variations

    NASA Astrophysics Data System (ADS)

    Kilifarska, N. A.

    2012-12-01

    The strong sensitivity of the Earth's radiation balance to variations in the lower stratospheric ozone—reported previously—is analysed here by the use of non-linear statistical methods. Our non-linear model of the land air temperature (T)—driven by the measured Arosa total ozone (TOZ)—explains 75% of total variability of Earth's T variations during the period 1926-2011. We have analysed also the factors which could influence the TOZ variability and found that the strongest impact belongs to the multi-decadal variations of galactic cosmic rays. Constructing a statistical model of the ozone variability, we have been able to predict the tendency in the land air T evolution till the end of the current decade. Results show that Earth is facing a weak cooling of the surface T by 0.05-0.25 K (depending on the ozone model) until the end of the current solar cycle. A new mechanism for O3 influence on climate is proposed.

  4. Prevention and anthropology.

    PubMed

    Jopp, Eilin; Scheffler, Christiane; Hermanussen, Michael

    2014-01-01

    Screening is an important issue in medicine and is used to identify unrecognised diseases early in persons who are apparently in good health. Screening strongly relies on the concept of "normal values". Normal values are defined as values that are frequently observed in a population and usually range within certain statistical limits. Screening for obesity should start early as the prevalence of obesity consolidates already at early school age. Though widely practiced, measuring BMI is not the ultimate solution for detecting obesity. Children with high BMI may be "robust" in skeletal dimensions. Assessing skeletal robustness and in particular assessing developmental tempo in adolescents are also important issues in health screening. Yet, in spite of the necessity of screening investigations, appropriate reference values are often missing. Meanwhile, new concepts of growth diagrams have been developed. Stage line diagrams are useful for tracking developmental processes over time. Functional data analyses have efficiently been used for analysing longitudinal growth in height and assessing the tempo of maturation. Convenient low-cost statistics have also been developed for generating synthetic national references.

  5. A marked correlation function for constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
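
    A density-marked correlation function is straightforward to estimate by pair counting: in each separation bin, average the product of the two marks over pairs and divide by the squared mean mark. A brute-force sketch on a small mock catalogue (real survey analyses would use optimised pair counters):

```python
import numpy as np

rng = np.random.default_rng(10)
n = 400
pos = rng.uniform(0, 100, size=(n, 3))   # mock galaxy positions, Mpc/h
marks = rng.lognormal(0.0, 0.5, size=n)  # e.g., a local-density mark

# All unique pairs (brute force is fine at this size).
i, j = np.triu_indices(n, k=1)
sep = np.linalg.norm(pos[i] - pos[j], axis=1)
pair_marks = marks[i] * marks[j]

bins = np.linspace(1, 30, 15)
which = np.digitize(sep, bins)
mean_mark2 = marks.mean() ** 2
for b in range(1, len(bins)):
    sel = which == b
    if sel.any():
        m_r = pair_marks[sel].mean() / mean_mark2  # M(r) in this bin
        print(f"r ~ {0.5 * (bins[b - 1] + bins[b]):5.1f}: M(r) = {m_r:.3f}")
```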

  6. The impact of alcohol taxation on liver cirrhosis mortality.

    PubMed

    Ponicki, William R; Gruenewald, Paul J

    2006-11-01

    The objective of this study is to investigate the impact of distilled spirits, wine, and beer taxes on cirrhosis mortality using a large-panel data set and statistical models that control for various other factors that may affect that mortality. The analyses were performed on a panel of 30 U.S. license states during the period 1971-1998 (N = 840 state-by-year observations). Exogenous measures included current and lagged versions of beverage taxes and income, as well as controls for states' age distribution, religion, race, health care availability, urbanity, tourism, and local bans on alcohol sales. Regression analyses were performed using random-effects models with corrections for serial autocorrelation and heteroscedasticity among states. Cirrhosis rates were found to be significantly related to taxes on distilled spirits but not to taxation of wine and beer. Consistent results were found using different statistical models and model specifications. Consistent with prior research, cirrhosis mortality in the United States appears more closely linked to consumption of distilled spirits than to that of other alcoholic beverages.

  7. Fluoride Content of Bottled Waters in Hong Kong and Qatar.

    PubMed

    Al-Mulla, Hessa I; Anthonappa, Robert P; King, Nigel M

    2016-01-01

    To determine the F concentration of bottled waters that were available in Hong Kong and Qatar. The F concentrations of bottled waters collected from Hong Kong (n=81) and Qatar (n=32) were analysed. The F ion selective electrode method was used to measure the F concentration in the samples. Three measurements were obtained for every sample to ensure reproducibility and appropriate statistical analyses were employed. Qatar group: F concentrations ranged from 0.06 ppm to 3.0 ppm with a mean value of 0.8 ppm. The F concentrations displayed on the labels of the samples (60%) were significantly lower than the measured F concentration (p < 0.0001). Hong Kong group: F concentrations ranged from 0.04 ppm to 2.52 ppm with a mean value of 0.44 ppm. The F concentrations displayed on the samples (16%) were significantly lower than the measured F concentration (p < 0.0001). Wide variations exist in the F concentration among the different brands of bottled water available in Hong Kong and Qatar. The F concentrations displayed on the labels were not consistent with the measured F concentrations.

  8. Comparison of two surface temperature measurement methods using thermocouples and an infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method to study the other minichannel, where the results were provided with an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The experimental error and the method accuracy were taken into account. The comparative analysis showed that although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  9. Water Masses in the Eastern Mediterranean Sea: An Analysis of Measured Isotopic Oxygen

    NASA Astrophysics Data System (ADS)

    de Ruggiero, Paola; Zanchettin, Davide; Bensi, Manuel; Hainbucher, Dagmar; Stenni, Barbara; Pierini, Stefano; Rubino, Angelo

    2018-04-01

    We investigate aspects of the water mass structure of the Adriatic and Ionian basins (Eastern Mediterranean Sea) and their interdecadal variability through statistical analyses focused on δ18Ο measurements carried out in 1985, 1990, and 2011. In particular, the more recent δ18Ο measurements extend throughout the entire water column and constitute, to the best of our knowledge, the largest synoptic dataset encompassing different sub-basins of the Mediterranean Sea. We study the statistical linkages between temperature, salinity, dissolved oxygen and δ18Ο. We find that δ18Ο is largely independent from the other parameters, and it can be used to trace major water masses that are typically found in the basins, including the Adriatic Dense Water, the Levantine Intermediate Water, and the Cretan Intermediate and Dense Waters. Finally, we explore the possibility of using δ18Ο concentration as a proxy for dominant modes of large-scale oceanic variability in the Mediterranean Sea.

  10. Methods for measuring, enhancing, and accounting for medication adherence in clinical trials.

    PubMed

    Vrijens, B; Urquhart, J

    2014-06-01

    Adherence to rationally prescribed medications is essential for effective pharmacotherapy. However, widely variable adherence to protocol-specified dosing regimens is prevalent among participants in ambulatory drug trials, mostly manifested in the form of underdosing. Drug actions are inherently dose and time dependent, and as a result, variable underdosing diminishes the actions of trial medications by various degrees. The ensuing combination of increased variability and decreased magnitude of trial drug actions reduces statistical power to discern between-group differences in drug actions. Variable underdosing has many adverse consequences, some of which can be mitigated by the combination of reliable measurements of ambulatory patients' adherence to trial and nontrial medications, measurement-guided management of adherence, statistically and pharmacometrically sound analyses, and modifications in trial design. Although nonadherence is prevalent across all therapeutic areas in which the patients are responsible for treatment administration, the significance of the adverse consequences depends on the characteristics of both the disease and the medications.

  11. Handwriting Examination: Moving from Art to Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, K.H.; Hanlen, R.C.; Manzolillo, P.A.

    In this document, we present a method for validating the premises and methodology of forensic handwriting examination. This method is intuitively appealing because it relies on quantitative measurements currently used qualitatively by forensic document examiners (FDEs) in making comparisons, and it is scientifically rigorous because it exploits the power of multivariate statistical analysis. This approach uses measures of both central tendency and variation to construct a profile for a given individual. (Central tendency and variation are important for characterizing an individual's writing, and both are currently used by FDEs in comparative analyses.) Once constructed, different profiles are then compared for individuality using cluster analysis; they are grouped so that profiles within a group cannot be differentiated from one another based on the measured characteristics, whereas profiles between groups can. The cluster analysis procedure used here exploits the power of multivariate hypothesis testing. The result is not only a profile grouping but also an indication of the statistical significance of the groups generated.
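
    The abstract does not detail the clustering procedure, so the sketch below substitutes ordinary hierarchical (Ward) clustering for the multivariate hypothesis-testing step, simply to show how writer profiles built from central tendency and variation might be grouped; all data and dimensions are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical writer profiles: each row concatenates means and standard
# deviations of quantitative handwriting measurements (slant, height, spacing).
profiles = np.vstack([
    rng.normal([10, 2.0, 5, 0.4, 3, 0.2], 0.1, size=(4, 6)),  # writer A samples
    rng.normal([12, 1.5, 6, 0.3, 4, 0.3], 0.1, size=(4, 6)),  # writer B samples
])

# Group profiles so that members of one cluster are hard to tell apart on the
# measured characteristics, while clusters differ from each other.
z = linkage(profiles, method="ward")
groups = fcluster(z, t=2, criterion="maxclust")
print(groups)  # e.g. [1 1 1 1 2 2 2 2]
```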

  12. Levels of asymmetry in Formica pratensis Retz. (Hymenoptera, Insecta) from a chronic metal-contaminated site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabitsch, W.B.

    1997-07-01

    Asymmetries of bilaterally symmetrical morphological traits in workers of the ant Formica pratensis Retzius were compared at sites with different levels of metal contamination and between mature and pre-mature colonies. Statistical analyses of the right-minus-left differences revealed that their distributions fit the assumptions of fluctuating asymmetry (FA). No directional asymmetry or antisymmetry was present. Mean measurement error accounted for a third of the variation, but the maximum measurement error was 65%. Although significant differences in FA between ants were observed, the inconsistent results make it difficult to uncover a clear pattern. Lead, cadmium, and zinc concentrations in the ants decreased with the distance from the contamination source, but no relation was found between FA and the heavy metal levels. Ants from the pre-mature colonies were more asymmetrical than those from mature colonies but accumulated less metal. The use of asymmetry measures in ecotoxicology and biomonitoring is criticized; they can remain widely applicable only if the statistical assumptions are complemented by genetic and historical data.
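
    A sketch of the standard checks behind such an analysis, assuming signed right-minus-left trait differences are available: a nonzero mean indicates directional asymmetry, strong departure from normality (e.g., bimodality) suggests antisymmetry, and the variance of the differences serves as a simple FA index. The data here are simulated, not the ant measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
right = rng.normal(1.00, 0.02, 50)          # hypothetical trait sizes (mm)
left = right + rng.normal(0, 0.01, 50)
d = right - left                            # signed right-minus-left differences

# Fluctuating asymmetry assumes d is normally distributed with mean zero.
t, p_da = stats.ttest_1samp(d, 0.0)         # mean != 0 -> directional asymmetry
k2, p_norm = stats.normaltest(d)            # non-normal d -> possible antisymmetry
fa_index = np.var(d, ddof=1)                # FA as variance of the differences

print(f"directional asymmetry p = {p_da:.3f}, normality p = {p_norm:.3f}")
print(f"FA index (variance of R-L): {fa_index:.2e}")
```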

  13. Volumetric MRI study of brain in children with intrauterine exposure to cocaine, alcohol, tobacco, and marijuana.

    PubMed

    Rivkin, Michael J; Davis, Peter E; Lemaster, Jennifer L; Cabral, Howard J; Warfield, Simon K; Mulkern, Robert V; Robson, Caroline D; Rose-Jacobs, Ruth; Frank, Deborah A

    2008-04-01

    The objective of this study was to use volumetric MRI to study brain volumes in 10- to 14-year-old children with and without intrauterine exposure to cocaine, alcohol, cigarettes, or marijuana. Volumetric MRI was performed on 35 children (mean age: 12.3 years; 14 with intrauterine exposure to cocaine, 21 with no intrauterine exposure to cocaine) to determine the effect of prenatal drug exposure on volumes of cortical gray matter, white matter, subcortical gray matter, and cerebrospinal fluid, and on total parenchymal volume. Head circumference was also obtained. Analyses of each individual substance were adjusted for demographic characteristics and the remaining 3 prenatal substance exposures. Regression analyses adjusted for demographic characteristics showed that children with intrauterine exposure to cocaine had lower mean cortical gray matter and total parenchymal volumes and smaller mean head circumference than comparison children. After adjustment for other prenatal exposures, these volumes remained smaller but lost statistical significance. Similar analyses conducted for prenatal ethanol exposure adjusted for demographics showed significant reductions in mean cortical gray matter, total parenchymal volume, and head circumference, which remained smaller but lost statistical significance after adjustment for the remaining 3 exposures. Notably, prenatal cigarette exposure was associated with significant reductions in cortical gray matter, total parenchymal volume, and head circumference after adjustment for demographics, and these reductions retained marginal significance after adjustment for the other 3 exposures. Finally, as the number of prenatal substance exposures grew, cortical gray matter volume, total parenchymal volume, and head circumference declined significantly, with the smallest measures found among children exposed to all 4. Conclusions: These data suggest that intrauterine exposures to cocaine, alcohol, and cigarettes are individually related to reduced head circumference and to reduced cortical gray matter and total parenchymal volumes as measured by MRI at school age. Adjustment for other substance exposures precludes determination of a statistically significant individual substance effect on brain volume in this small sample; however, these substances may act cumulatively during gestation to exert lasting effects on brain size and volume.

  14. Flexible Magnets Are Not Effective in Decreasing Pain Perception and Recovery Time After Muscle Microinjury

    PubMed Central

    Borsa, Paul A.; Liggett, Charles L.

    1998-01-01

    Objective: To assess the therapeutic effects of flexible magnets on pain perception, intramuscular swelling, range of motion, and muscular strength in individuals with a muscle microinjury. Design and Setting: This experiment was a single-blind, placebo study using a repeated-measures design. Subjects performed an intense exercise protocol to induce a muscle microinjury. After pretreatment measurements were recorded, subjects were randomly assigned to an experimental (magnet), placebo (imitation magnet), or control (no magnet) group. Posttreatment measurements were repeated at 24, 48, and 72 hours. Subjects: Forty-five healthy subjects participated in the study. Measurements: Subjects were measured repeatedly for pain perception, upper arm girth, range of motion, and static force production. Four separate univariate analyses of variance were used to test for statistically significant mean (±SD) differences between variables over time. Interaction effects were analyzed using Scheffé post hoc analysis. Results: Analysis of variance revealed no statistically significant (P > .05) mean differences between conditions for any dependent pretreatment or posttreatment measurement. No significant interaction effects were demonstrated between conditions and times. Conclusions: No significant therapeutic effects on pain control and muscular dysfunction were observed in subjects wearing flexible magnets. PMID:16558503

  15. A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study.

    PubMed

    Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne

    2016-11-03

    Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow-up. The Cox regression model has been used widely in analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time-adjusted CSP provided identical results to the TDCM, while the PLR showed larger parameter estimates compared to the time-adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time-adjusted PLR had a positive bias in the time dependent Age effect, with reduced bias when the event rate was low. The PLR methods showed a negative bias in the Sex effect, a subject-level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study, in which lipid measurements and myocardial infarction events were collected over a period of 26 years.
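
    For illustration, a time-dependent Cox model of the kind compared here can be fitted in Python with the lifelines package, which expects long-format data with one row per subject per (start, stop] interval and the covariate value current during that interval; the toy data frame below is hypothetical, not the Framingham data.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Toy long-format data: a time-dependent covariate (chol) updated at each
# examination, a fixed subject-level covariate (sex), and an event indicator.
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 3, 4, 4],
    "start": [0, 2, 0, 2, 0, 2, 0, 2],
    "stop":  [2, 5, 2, 4, 2, 6, 2, 3],
    "chol":  [200, 240, 180, 185, 220, 230, 210, 250],
    "sex":   [0, 0, 1, 1, 0, 0, 1, 1],
    "event": [0, 1, 0, 0, 0, 1, 0, 1],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratios for chol and sex
```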

  16. First derivative versus absolute spectral reflectance of citrus varieties

    NASA Astrophysics Data System (ADS)

    Blazquez, Carlos H.; Nigg, H. N.; Hedley, Lou E.; Ramos, L. E.; Sorrell, R. W.; Simpson, S. E.

    1996-06-01

    Spectral reflectance measurements from 400 to 800 nm were taken from immature and mature leaves of grapefruit ('McCarty' and 'Rio Red'), 'Minneola' tangelo, 'Satsuma' mandarin, 'Dancy' tangerine, 'Nagami' oval kumquat, and 'Valencia' sweet orange at the Florida Citrus Arboretum, Division of Plant Industry, Winter Haven, Florida. Immature and mature leaves of 'Minneola' tangelo had greater percent reflectance in the 400 to 800 nm range than the other varieties and leaf ages measured. The slope of the citrus spectral curves near 800 nm was not as steep as that obtained with conventional spectrometers, but the reflectance values were much higher than those obtained with a DK-2 spectrometer. Statistical analyses of the absolute spectral data yielded significant differences between mature and immature leaves and between varieties. First-derivative data analyses did not yield significant differences between varieties.

  17. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  18. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High-quality clinical research requires not only advanced professional knowledge, but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were also observed in results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1,023/1,309), and in interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement in study design. Retrospective designs are the most often used, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
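
    The year-to-year comparisons above are chi-square tests on 2 × 2 tables of defect counts. As a sketch (assuming scipy), the following reproduces the reported study-design comparison from the counts given in the abstract:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Study-design defect counts from the abstract:
# 1998: 680 of 1,335 articles; 2008: 669 of 1,578 articles.
table = np.array([[680, 1335 - 680],
                  [669, 1578 - 669]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}")  # ~21.2, matching the report
```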

  19. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods used to analyse the problems (indicators) are: peer review, quantitative methods and risk-management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions should be based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk-management techniques include: a) critical incident analysis, which establishes an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis which, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, examines all the components of the process leading to the unpredictable outcome and not just the human factors; c) cause-and-effect diagrams, which facilitate problem analysis by reducing the causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, constitute the third step of the evaluation cycle. The Hawthorne effect is an improvement in outcomes that occurs before the implementation of any corrective actions. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.

  20. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  1. Statistical inference for classification of RRIM clone series using near IR reflectance properties

    NASA Astrophysics Data System (ADS)

    Ismail, Faridatul Aima; Madzhi, Nina Korlina; Hashim, Hadzli; Abdullah, Noor Ezan; Khairuzzaman, Noor Aishah; Azmi, Azrie Faris Mohd; Sampian, Ahmad Faiz Mohd; Harun, Muhammad Hafiz

    2015-08-01

    RRIM clones are rubber breeding series produced by the RRIM (Rubber Research Institute of Malaysia) through its rubber breeding program to improve latex yield and produce clones attractive to farmers. The objective of this work is to analyse measurements from an optical sensing device on latex of selected clone series. The device transmits NIR light, and the reflectance is converted into a voltage. The reflectance index values obtained via voltage were analysed using statistical techniques in order to find out whether the clones can be discriminated. From the statistical results using error plots and a one-way ANOVA test, there is overwhelming evidence of discrimination between the RRIM 2002, RRIM 2007 and RRIM 3001 clone series, with p value = 0.000. RRIM 2008 cannot be discriminated from RRIM 2014; however, both of these groups are distinct from the other clones.
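
    A minimal sketch of the one-way ANOVA step, assuming the per-clone reflectance voltages are available as arrays; the values are simulated placeholders, not the sensor data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical reflectance-index voltages (V) for three RRIM clone series.
rrim_2002 = rng.normal(2.10, 0.05, 20)
rrim_2007 = rng.normal(2.35, 0.05, 20)
rrim_3001 = rng.normal(2.60, 0.05, 20)

f, p = stats.f_oneway(rrim_2002, rrim_2007, rrim_3001)
print(f"F = {f:.2f}, p = {p:.3g}")  # a tiny p indicates discriminable clones
```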

  2. Experimental design and statistical methods for improved hit detection in high-throughput screening.

    PubMed

    Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert

    2010-09-01

    Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
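
    The abstract names a trimmed-mean polish for removing plate biases. The sketch below shows that preprocessing idea under simple assumptions (additive row and column effects); the authors' exact polish and the RVM t-test step may differ. A simulated plate with an edge-column bias and one true hit illustrates the effect.

```python
import numpy as np
from scipy import stats

def trimmed_mean_polish(plate, trim=0.1, n_iter=10):
    """Iteratively sweep trimmed-mean row and column effects out of a plate
    of raw readings, leaving residuals relative to the plate background."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= stats.trim_mean(resid, trim, axis=1)[:, None]  # row biases
        resid -= stats.trim_mean(resid, trim, axis=0)[None, :]  # column biases
    return resid

rng = np.random.default_rng(3)
plate = rng.normal(100, 5, size=(8, 12))  # 96-well plate of raw signals
plate[:, 0] += 20                         # simulated edge-column bias
plate[3, 7] += 40                         # one true hit
resid = trimmed_mean_polish(plate)
print(round(resid[3, 7], 1))              # the hit stands out from ~0 background
```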

  3. Determination of quality parameters from statistical analysis of routine TLD dosimetry data.

    PubMed

    German, U; Weinstein, M; Pelled, O

    2006-01-01

    Following the as low as reasonably achievable (ALARA) practice, there is a need to measure very low doses, of the same order of magnitude as the natural background and as the limits of detection of the dosimetry systems. The different contributions of the background signals to the total zero-dose reading of thermoluminescence dosemeter (TLD) cards were analysed using the common basic definitions of statistical indicators: the critical level (L_C), the detection limit (L_D) and the determination limit (L_Q). These key statistical parameters for the system operated at NRC-Negev were quantified based on the history of readings of the calibration cards in use. The electronic noise seems to play a minor role, but the reading of the Teflon coating (without the presence of a TLD crystal) gave a significant contribution.
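
    For orientation, these indicators are conventionally defined (following Currie) from the spread of repeated zero-dose readings; the sketch below uses the usual Gaussian-background factors of 1.645, 3.29 and 10, with illustrative numbers rather than the NRC-Negev data.

```python
import numpy as np

# Repeated zero-dose readings of calibration cards (illustrative, in mGy).
zero_dose = np.array([0.021, 0.025, 0.019, 0.023, 0.027, 0.020, 0.024])
sigma0 = zero_dose.std(ddof=1)

L_C = 1.645 * sigma0   # critical level: decision threshold, 5% false positives
L_D = 3.29 * sigma0    # detection limit: 5% false-negative risk at threshold L_C
L_Q = 10.0 * sigma0    # determination limit: ~10% relative standard uncertainty

print(f"L_C = {L_C:.4f}, L_D = {L_D:.4f}, L_Q = {L_Q:.4f} mGy")
```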

  4. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
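
    The abstract does not spell the algorithm out, so the following is only one plausible realization of an iterative, nonparametric screen of this kind: each gauge is compared against a robust median/MAD estimate formed from its nearest neighbours, and flagged readings are excluded from later iterations. All thresholds and names are hypothetical.

```python
import numpy as np

def flag_outliers(values, coords, k=5, z_max=4.0, n_iter=3):
    """Flag gauge readings that deviate strongly from the robust estimate
    formed from their k nearest neighbouring gauges."""
    good = np.ones(len(values), dtype=bool)
    for _ in range(n_iter):
        for i in range(len(values)):
            if not good[i]:
                continue
            dist = np.linalg.norm(coords - coords[i], axis=1)
            nearest = np.argsort(dist)[1:k + 1]        # k nearest other gauges
            nbr = values[nearest][good[nearest]]
            if len(nbr) < 3:
                continue
            med = np.median(nbr)
            mad = np.median(np.abs(nbr - med)) + 1e-9
            if abs(values[i] - med) / (1.4826 * mad) > z_max:
                good[i] = False                        # statistical outlier
    return ~good

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(30, 2))              # gauge locations
values = 5 + 0.3 * coords[:, 0] + rng.normal(0, 0.2, 30)
values[7] = 25.0                                       # one erroneous reading
print(np.nonzero(flag_outliers(values, coords))[0])    # -> [7]
```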

  5. PULPAL BLOOD FLOW CHANGES IN ABUTMENT TEETH OF REMOVABLE PARTIAL DENTURES

    PubMed Central

    Kunt, Göknil Ergün; Kökçü, Deniz; Ceylan, Gözlem; Yılmaz, Nergiz; Güler, Ahmet Umut

    2009-01-01

    The purpose of this study was to investigate the effect of tooth-supported (TSD) and tooth-tissue-supported (TTSD) removable partial denture wearing on the pulpal blood flow (PBF) of the abutment teeth by using laser Doppler flowmetry (LDF). Measurements were carried out on 60 teeth of 28 patients (28 teeth and 12 patients in the TTSD group, 32 teeth and 16 patients in the TSD group) who had not worn any type of removable partial denture before, had no systemic problems and were nonsmokers. PBF values were recorded by LDF before insertion (day 0) and after insertion of the dentures at day 1, day 7 and day 30. Statistical analysis was performed by Student's t-test and covariance analyses of repeated measurements. In the TTSD group, the mean PBF values at day 1 after insertion were statistically significantly lower than the PBF values before insertion (p < 0.01). There was no statistically significant difference among the mean PBF values on the 1st, 7th and 30th days. In the TSD group, however, there was no statistically significant difference among the mean PBF values before insertion and on the 1st, 7th and 30th days; that is, mean PBF values in the TSD group remained statistically unchanged. TTSD wearing may have a negative effect on the abutment teeth by decreasing basal PBF. PMID:20001995

  6. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis.

  7. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
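
    As a frequentist point of reference for the comparison discussed, the sketch below implements regression calibration in its simplest form, assuming a classical error model with known error variance (0.25 here) so that E[x | w] can be computed by shrinkage; all data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(0, 1, n)            # true covariate (unobserved)
w = x + rng.normal(0, 0.5, n)      # error-prone measurement of x
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

# Naive fit: regressing y on the mismeasured w attenuates the slope.
naive = sm.OLS(y, sm.add_constant(w)).fit()

# Regression calibration: replace w with an estimate of E[x | w], here the
# classical-error shrinkage toward the mean, then refit.
lam = (w.var(ddof=1) - 0.25) / w.var(ddof=1)   # 0.25 = assumed error variance
x_hat = w.mean() + lam * (w - w.mean())
calibrated = sm.OLS(y, sm.add_constant(x_hat)).fit()

print(f"naive slope: {naive.params[1]:.2f}")            # ~1.6 (attenuated)
print(f"calibrated slope: {calibrated.params[1]:.2f}")  # ~2.0 (corrected)
```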

  8. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  9. Differential cross sections for the reactions γp → pη and γp → pη′

    DOE PAGES

    Williams, M.; Krahn, Z.; Applegate, D.; ...

    2009-10-29

    Differential cross sections for the reactions γp → pη and γp → pη′ were measured with high statistics using the CLAS detector at Jefferson Lab, for center-of-mass energies from near threshold up to 2.84 GeV. The η′ results are the most precise to date and provide the largest energy and angular coverage. The η measurements extend the energy range of the world's large-angle results by approximately 300 MeV. These new data, in particular the η′ measurements, are likely to help constrain the analyses being performed to search for new baryon resonance states.

  10. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula.

    PubMed

    Ince, Robin A A; Giordano, Bruno L; Kayser, Christoph; Rousselet, Guillaume A; Gross, Joachim; Schyns, Philippe G

    2017-03-01

    We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article.
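
    The core of the copula estimate is compact enough to sketch: rank-transform each variable, map the ranks through the inverse normal CDF, and evaluate the mutual information of a Gaussian with the resulting correlation matrix. The authors distribute their own implementations; the minimal version below omits their bias corrections and should be read as a lower-bound estimator only.

```python
import numpy as np
from scipy import stats

def copnorm(x):
    """Map each column to standard-normal scores via the empirical CDF."""
    ranks = stats.rankdata(x, axis=0)
    return stats.norm.ppf(ranks / (x.shape[0] + 1))

def gcmi(x, y):
    """Gaussian-copula mutual information between x and y, in bits."""
    xy = np.column_stack([copnorm(x), copnorm(y)])
    r = np.corrcoef(xy, rowvar=False)
    kx = x.shape[1]
    logdet = lambda c: 0.5 * np.log(np.linalg.det(c))
    # I(X;Y) = H(X) + H(Y) - H(X,Y); Gaussian constants cancel, log-dets remain.
    return (logdet(r[:kx, :kx]) + logdet(r[kx:, kx:]) - logdet(r)) / np.log(2)

rng = np.random.default_rng(6)
x = rng.normal(size=(500, 1))
y = x + rng.normal(0, 1, size=(500, 1))  # dependent on x with r^2 ~ 0.5
print(f"MI ~ {gcmi(x, y):.2f} bits")      # near -0.5*log2(1 - 0.5) = 0.5
```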

  11. Limited-information goodness-of-fit testing of diagnostic classification item response models.

    PubMed

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2016-11-01

    Despite the growing popularity of diagnostic classification models (e.g., Rupp et al., 2010, Diagnostic measurement: theory, methods, and applications, Guilford Press, New York, NY) in educational and psychological measurement, methods for testing their absolute goodness of fit to real data remain relatively underdeveloped. For tests of reasonable length and realistic sample size, full-information test statistics such as Pearson's X² and the likelihood ratio statistic G² suffer from sparseness in the underlying contingency table from which they are computed. Recently, limited-information fit statistics such as Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M₂ have been found to be quite useful in testing the overall goodness of fit of item response theory models. In this study, we applied Maydeu-Olivares and Joe's M₂ statistic to diagnostic classification models. Through a series of simulation studies, we found that M₂ is well calibrated across a wide range of diagnostic model structures and was sensitive to certain misspecifications of the item model (e.g., fitting disjunctive models to data generated according to a conjunctive model), errors in the Q-matrix (adding or omitting paths, omitting a latent variable), and violations of local item independence due to unmodelled testlet effects. On the other hand, M₂ was largely insensitive to misspecifications in the distribution of higher-order latent dimensions and to the specification of an extraneous attribute. To complement the analyses of overall model goodness of fit using M₂, we investigated the utility of the Chen and Thissen (1997, J. Educ. Behav. Stat., 22, 265) local dependence statistic X²_LD for characterizing sources of misfit, an important aspect of model appraisal often overlooked in favour of overall statements. The X²_LD statistic was found to be slightly conservative (with Type I error rates consistently below the nominal level) but still useful in pinpointing the sources of misfit. Patterns of local dependence arising due to specific model misspecifications are illustrated. Finally, we used the M₂ and X²_LD statistics to evaluate a diagnostic model fit to data from the Trends in Mathematics and Science Study, drawing upon analyses previously conducted by Lee et al. (2011, IJT, 11, 144).

  12. The Relationships between the Iowa Test of Basic Skills and the Washington Assessment of Student Learning in the State of Washington. Technical Report.

    ERIC Educational Resources Information Center

    Joireman, Jeff; Abbott, Martin L.

    This report examines the overlap between student test results on the Iowa Test of Basic Skills (ITBS) and the Washington Assessment of Student Learning (WASL). The two tests were compared and contrasted in terms of content and measurement philosophy, and analyses studied the statistical relationship between the ITBS and the WASL. The ITBS assesses…

  13. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  14. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  15. Development of a model of the tobacco industry's interference with tobacco control programmes

    PubMed Central

    Trochim, W; Stillman, F; Clark, P; Schmitt, C

    2003-01-01

    Objective: To construct a conceptual model of tobacco industry tactics to undermine tobacco control programmes for the purposes of: (1) developing measures to evaluate industry tactics, (2) improving tobacco control planning, and (3) supplementing current or future frameworks used to classify and analyse tobacco industry documents. Design: Web based concept mapping was conducted, including expert brainstorming, sorting, and rating of statements describing industry tactics. Statistical analyses used multidimensional scaling and cluster analysis. Interpretation of the resulting maps was accomplished by an expert panel during a face-to-face meeting. Subjects: 34 experts, selected because of their previous encounters with industry resistance or because of their research into industry tactics, took part in some or all phases of the project. Results: Maps with eight non-overlapping clusters in two-dimensional space were developed, with importance ratings of the statements and clusters. Cluster and quadrant labels were agreed upon by the experts. Conclusions: The conceptual maps summarise the tactics used by the industry and their relationships to each other, and suggest a possible hierarchy for measures that can be used in statistical modelling of industry tactics and for review of industry documents. Finally, the maps support hypotheses about a likely progression of industry reactions as public health programmes become more successful, and therefore more threatening to industry profits. PMID:12773723

  16. Do different data analytic approaches generate discrepant findings when measuring mother-infant HPA axis attunement?

    PubMed

    Bernard, Nicola K; Kashy, Deborah A; Levendosky, Alytia A; Bogat, G Anne; Lonstein, Joseph S

    2017-03-01

    Attunement between mothers and infants in their hypothalamic-pituitary-adrenal (HPA) axis responsiveness to acute stressors is thought to benefit the child's emerging physiological and behavioral self-regulation, as well as their socioemotional development. However, there is no universally accepted definition of attunement in the literature, which appears to have resulted in inconsistent statistical analyses for determining its presence or absence, and contributed to discrepant results. We used a series of data analytic approaches, some previously used in the attunement literature and others not, to evaluate the attunement between 182 women and their 1-year-old infants in their HPA axis responsivity to acute stress. Cortisol was measured in saliva samples taken from mothers and infants before and twice after a naturalistic laboratory stressor (infant arm restraint). The results of the data analytic approaches were mixed, with some analyses suggesting attunement while others did not. The strengths and weaknesses of each statistical approach are discussed, and an analysis using a cross-lagged model that considered both time and interactions between mother and infant appeared the most appropriate. Greater consensus in the field about the conceptualization and analysis of physiological attunement would be valuable in order to advance our understanding of this phenomenon.

  17. A review of geographic variation and Geographic Information Systems (GIS) applications in prescription drug use research.

    PubMed

    Wangia, Victoria; Shireman, Theresa I

    2013-01-01

    While understanding geography's role in healthcare has been an area of research for over 40 years, the application of geography-based analyses to prescription medication use is limited. The body of literature was reviewed to assess the current state of such studies to demonstrate the scale and scope of projects in order to highlight potential research opportunities. To review systematically how researchers have applied geography-based analyses to medication use data. Empiric, English language research articles were identified through PubMed and bibliographies. Original research articles were independently reviewed as to the medications or classes studied, data sources, measures of medication exposure, geographic units of analysis, geospatial measures, and statistical approaches. From 145 publications matching key search terms, forty publications met the inclusion criteria. Cardiovascular and psychotropic classes accounted for the largest proportion of studies. Prescription drug claims were the primary source, and medication exposure was frequently captured as period prevalence. Medication exposure was documented across a variety of geopolitical units such as countries, provinces, regions, states, and postal codes. Most results were descriptive, and formal statistical modeling capitalizing on geospatial techniques was rare. Despite the extensive research on small area variation analysis in healthcare, there are a limited number of studies that have examined geographic variation in medication use. Clearly, there is opportunity to collaborate with geographers and GIS professionals to harness the power of GIS technologies and to strengthen future medication studies by applying more robust geospatial statistical methods.

  18. Quantitative Susceptibility Mapping after Sports-Related Concussion.

    PubMed

    Koch, K M; Meier, T B; Karr, R; Nencka, A S; Muftuler, L T; McCrea, M

    2018-06-07

    Quantitative susceptibility mapping using MR imaging can assess changes in brain tissue structure and composition. This report presents preliminary results demonstrating changes in tissue magnetic susceptibility after sports-related concussion. Longitudinal quantitative susceptibility mapping metrics were produced from imaging data acquired from cohorts of concussed and control football athletes. One hundred thirty-six quantitative susceptibility mapping datasets were analyzed across 3 separate visits (24 hours after injury, 8 days postinjury, and 6 months postinjury). Longitudinal quantitative susceptibility mapping group analyses were performed on stability-thresholded brain tissue compartments and selected subregions. Clinical concussion metrics were also measured longitudinally in both cohorts and compared with the measured quantitative susceptibility mapping. Statistically significant increases in white matter susceptibility were identified in the concussed athlete group during the acute (24 hour) and subacute (day 8) period. These effects were most prominent at the 8-day visit but recovered and showed no significant difference from controls at the 6-month visit. The subcortical gray matter showed no statistically significant group differences. Observed susceptibility changes after concussion appeared to outlast self-reported clinical recovery metrics at a group level. At an individual subject level, susceptibility increases within the white matter showed statistically significant correlations with return-to-play durations. The results of this preliminary investigation suggest that sports-related concussion can induce physiologic changes to brain tissue that can be detected using MR imaging-based magnetic susceptibility estimates. In group analyses, the observed tissue changes appear to persist beyond those detected on clinical outcome assessments and were associated with return-to-play duration after sports-related concussion.

  19. Using venlafaxine to treat behavioral disorders in patients with autism spectrum disorder.

    PubMed

    Carminati, Giuliana Galli; Gerber, Fabienne; Darbellay, Barbara; Kosel, Markus Mathaus; Deriaz, Nicolas; Chabert, Jocelyne; Fathi, Marc; Bertschy, Gilles; Ferrero, François; Carminati, Federico

    2016-02-04

    To test the efficacy of venlafaxine at a dose of 18.75 mg/day in reducing behavioral problems such as irritability and hyperactivity/noncompliance in patients with intellectual disabilities and autism spectrum disorder (ASD). Our secondary hypothesis was that the usual doses of zuclopenthixol and/or clonazepam would decrease in the venlafaxine-treated group. In a randomized double-blind study, we compared six patients who received venlafaxine along with their usual treatment (zuclopenthixol and/or clonazepam) with seven patients who received placebo plus usual care. Irritability, hyperactivity/noncompliance, and overall clinical improvement were measured after 2 and 8 weeks, using validated clinical scales. Univariate analyses showed that the symptom of irritability improved in the entire sample (p = 0.023 after 2 weeks, p = 0.061 at study endpoint), although no difference was observed between the venlafaxine and placebo groups. No significant decrease in hyperactivity/noncompliance was observed during the study. At the end of the study, global improvement was observed in 33% of participants treated with venlafaxine and in 71% of participants in the placebo group (p = 0.29). The study found that decreased cumulative doses of clonazepam and zuclopenthixol were required in the venlafaxine group. Multivariate analyses (principal component analyses) with at least three combinations of variables showed that the two populations could be clearly separated (p < 0.05). Moreover, in all cases, the venlafaxine population had lower values for the Aberrant Behavior Checklist (ABC), Behavior Problems Inventory (BPI), and levels of urea with respect to the placebo group. In one case, a reduction in the dosage of clonazepam was also suggested. For an additional set of variables (ABC factor 2, BPI frequency of aggressive behaviors, blood ammonia at day 28, and zuclopenthixol and clonazepam intake), the separation between the two samples was statistically significant, as was Bartlett's test, but the Kaiser–Meyer–Olkin measure of sampling adequacy was below the accepted threshold. This set of variables showed a reduction in the cumulative intake of both zuclopenthixol and clonazepam. Despite the small sample sizes, this study documented a statistically significant effect of venlafaxine. Moreover, we showed that lower doses of zuclopenthixol and clonazepam were needed in the venlafaxine group, although this difference was not statistically significant. This was confirmed by multivariate analyses, where the difference reached statistical significance when using a combination of variables involving zuclopenthixol. Larger-scale studies are recommended to better investigate the effectiveness of venlafaxine treatment in patients with intellectual disabilities and ASD.

  20. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.

    PubMed

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure-regularities arising in an ordered series of syllable timings-testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.

  2. Within What Distance Does “Greenness” Best Predict Physical Health? A Systematic Review of Articles with GIS Buffer Analyses across the Lifespan

    PubMed Central

    2017-01-01

    Is the amount of “greenness” within a 250-m, 500-m, 1000-m or a 2000-m buffer surrounding a person’s home a good predictor of their physical health? The evidence is inconclusive. We reviewed Web of Science articles that used geographic information system buffer analyses to identify trends between physical health, greenness, and distance within which greenness is measured. Our inclusion criteria were: (1) use of buffers to estimate residential greenness; (2) statistical analyses that calculated significance of the greenness-physical health relationship; and (3) peer-reviewed articles published in English between 2007 and 2017. To capture multiple findings from a single article, we selected our unit of inquiry as the analysis, not the article. Our final sample included 260 analyses in 47 articles. All aspects of the review were in accordance with PRISMA guidelines. Analyses were independently judged as more, less, or least likely to be biased based on the inclusion of objective health measures and income/education controls. We found evidence that larger buffer sizes, up to 2000 m, better predicted physical health than smaller ones. We recommend that future analyses use nested rather than overlapping buffers to evaluate to what extent greenness not immediately around a person’s home (i.e., within 1000–2000 m) predicts physical health. PMID:28644420

  3. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients), Presentation (Web-Server), Processing (Application-Server), Data (Database...organization in each period. These data will be collected for analysis. i) Analyses and Validation: We will perform statistical tests on these data, Pareto ...notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will perform statistical tests on these data, Pareto analyses and confirmation

  4. Primary implant stability in a bone model simulating clinical situations for the posterior maxilla: an in vitro study

    PubMed Central

    2016-01-01

    Purpose: The aim of this study was to determine the influence of anatomical conditions on primary stability in models simulating the posterior maxilla. Methods: Polyurethane blocks were designed to simulate monocortical (M) and bicortical (B) conditions. Each condition had four subgroups measuring 3 mm (M3, B3), 5 mm (M5, B5), 8 mm (M8, B8), and 12 mm (M12, B12) in residual bone height (RBH). After implant placement, the implant stability quotient (ISQ), Periotest value (PTV), insertion torque (IT), and reverse torque (RT) were measured. Two-factor ANOVA (two cortical conditions × four RBHs) and additional analyses of simple main effects were performed. Results: A significant interaction between cortical condition and RBH was demonstrated for all stability measures in the two-factor ANOVA. In the analyses of simple main effects, ISQ and PTV were statistically higher in the bicortical groups than in the corresponding monocortical groups. In the monocortical group, ISQ and PTV rose statistically significantly with increasing RBH. Measurements of IT and RT showed a similar tendency, measuring highest in the M3 group, followed by the M8, the M5, and the M12 groups. In the bicortical group, all variables showed a similar tendency, with different degrees of rise and decline. The B8 group showed the highest values, followed by the B12, the B5, and the B3 groups. The highest correlation coefficient was demonstrated between ISQ and PTV. Conclusions: Primary stability was enhanced by the presence of bicortex and increased RBH, which may be better demonstrated by ISQ and PTV than by IT and RT. PMID:27588215
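
    A sketch of the two-factor ANOVA with interaction used here, written with statsmodels; the ISQ values are invented to mimic a 2 (cortical condition) × 4 (RBH) layout with two implants per cell.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical ISQ readings: M/B cortical conditions crossed with four RBHs.
df = pd.DataFrame({
    "isq":  [62, 64, 68, 71, 70, 74, 77, 80,
             61, 65, 69, 72, 71, 75, 78, 79],
    "cort": ["M"] * 4 + ["B"] * 4 + ["M"] * 4 + ["B"] * 4,
    "rbh":  [3, 5, 8, 12] * 4,
})

model = smf.ols("isq ~ C(cort) * C(rbh)", data=df).fit()
print(anova_lm(model, typ=2))  # the interaction row tests cort x RBH dependence
```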

  5. Meta- and statistical analysis of single-case intervention research data: quantitative gifts and a wish list.

    PubMed

    Kratochwill, Thomas R; Levin, Joel R

    2014-04-01

    In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences.

  6. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied directly. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and meaningful linear trends can be computed, even if more than half of the data consists of non-detects.
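
    A short sketch of the approach with statsmodels, assuming non-detects have been substituted at the detection limit: the fitted conditional median is unaffected by exactly where the censored values are placed, provided fewer than half the observations at a given covariate value fall below the limit. All data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
dose = np.repeat([0, 1, 2, 3], n // 4)
conc = np.exp(1.0 + 0.4 * dose + rng.normal(0, 1, n))  # immunological titres
lod = 2.0
conc[conc < lod] = lod          # non-detects substituted at the detection limit

df = pd.DataFrame({"conc": conc, "dose": dose})
fit = smf.quantreg("conc ~ dose", df).fit(q=0.5)   # median regression
print(fit.params)               # dose trend in the conditional median
```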

  7. Evaluating Cellular Polyfunctionality with a Novel Polyfunctionality Index

    PubMed Central

    Larsen, Martin; Sauce, Delphine; Arnaud, Laurent; Fastenackels, Solène; Appay, Victor; Gorochov, Guy

    2012-01-01

    Functional evaluation of naturally occurring or vaccination-induced T cell responses in mice, men and monkeys has in recent years advanced from single-parameter (e.g. IFN-γ-secretion) to much more complex multidimensional measurements. Co-secretion of multiple functional molecules (such as cytokines and chemokines) at the single-cell level is now measurable due primarily to major advances in multiparametric flow cytometry. The very extensive and complex datasets generated by this technology raise the demand for proper analytical tools that enable the analysis of combinatorial functional properties of T cells, hence polyfunctionality. Presently, multidimensional functional measures are analysed either by evaluating all combinations of parameters individually or by summing frequencies of combinations that include the same number of simultaneous functions. Often these evaluations are visualized as pie charts. Whereas pie charts effectively represent and compare average polyfunctionality profiles of particular T cell subsets or patient groups, they do not document the degree or variation of polyfunctionality within a group, nor do they allow more sophisticated statistical analysis. Here we propose a novel polyfunctionality index that numerically evaluates the degree and variation of polyfunctionality and enables comparative and correlative parametric and non-parametric statistical tests. Moreover, it allows the usage of more advanced statistical approaches, such as cluster analysis. We believe that the polyfunctionality index will render polyfunctionality an appropriate end-point measure in future studies of T cell responsiveness. PMID:22860124
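
    As a rough illustration, an index of the general form PI = sum_i F_i * (i/n)^q can be computed as below, where F_i is the fraction of cells exerting i of n measured functions and q tunes how strongly polyfunctional cells are up-weighted; the exact weighting of the published index may differ, so treat this as a hedged sketch.

        import numpy as np

        def polyfunctionality_index(freqs, q=1.0):
            """freqs[i] = fraction of cells with i simultaneous functions, i = 0..n."""
            freqs = np.asarray(freqs, dtype=float)
            n = len(freqs) - 1
            i = np.arange(n + 1)
            return float(np.sum(freqs * (i / n) ** q))

        # Two donors, 3 measured functions (fractions with 0, 1, 2, 3 functions)
        print(polyfunctionality_index([0.70, 0.20, 0.08, 0.02]))  # low polyfunctionality
        print(polyfunctionality_index([0.40, 0.20, 0.20, 0.20]))  # higher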

  8. Validating the cross-cultural factor structure and invariance property of the Insomnia Severity Index: evidence based on ordinal EFA and CFA.

    PubMed

    Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M

    2015-05-01

    The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We tried to identify the most appropriate factor model for the ISI and further examined the measurement invariance property of the ISI across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. After that, we conducted a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) concerning the parameters of the measurement model, on the data sets collected in Canada and Hong Kong. The purposes of these CFAs were to cross-validate the result obtained from the EFA and to further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in the Taiwanese population. Its external validity is also supported by confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property between the samples from different cultures holds, providing evidence that the ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.
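
    For orientation, an EFA step of this kind can be sketched in Python with the factor_analyzer package, with one caveat: the study used ordinal (polychoric-based) EFA, whereas this sketch runs on Pearson correlations; the file and item names are hypothetical.

        import pandas as pd
        from factor_analyzer import FactorAnalyzer

        items = pd.read_csv("isi_items.csv")   # hypothetical: the 7 ISI item scores
        fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
        fa.fit(items)
        print(fa.loadings_)                    # pattern matrix for the 3-factor solution
        print(fa.get_factor_variance())        # variance explained per factor

    Invariance testing across the Taiwanese, Hong Kong and Canadian samples is then a multi-group CFA exercise, typically run in dedicated SEM software such as lavaan or Mplus.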

  9. Association between smoking status and the parameters of vascular structure and function in adults: results from the EVIDENT study

    PubMed Central

    2013-01-01

    Background The present study analyses the relation between smoking status and the parameters used to assess vascular structure and function. Methods This cross-sectional, multi-centre study involved a random sample of 1553 participants from the EVIDENT study. Measurements: smoking status, peripheral augmentation index and ankle-brachial index were measured in all participants. In a small subset of the main population (265 participants), the carotid intima-media thickness (IMT) and pulse wave velocity were also measured. Results After controlling for the effect of age, sex and other risk factors, current smokers had higher values of carotid IMT (p = 0.011). Current smokers also had higher values of pulse wave velocity and lower mean values of ankle-brachial index, although neither difference reached statistical significance. Conclusions Among the parameters of vascular structure and function analysed, only the IMT shows an association with smoking status after adjusting for confounders. PMID:24289208

  10. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), all of which purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, each analysis made arbitrary selections or assumptions, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. While the focus of this particular analysis was the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls: inattention to analysis reproducibility and robustness assessment is a widespread problem in the sciences that is unfortunately not noted often enough in review.
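
    The robustness argument can be illustrated with an un-binned permutation test; the year lists below are invented placeholders, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(0)
        maxima = np.array([1948, 1958, 1969, 1980, 1990, 2000, 2014])  # placeholder sunspot maxima
        pandemics = np.array([1957, 1968, 1977, 2009])                 # placeholder pandemic years

        def mean_lag(years):
            """Mean distance (years) from each event to the nearest sunspot maximum."""
            return np.mean([np.min(np.abs(maxima - y)) for y in years])

        observed = mean_lag(pandemics)
        null = np.array([mean_lag(rng.integers(1945, 2016, size=len(pandemics)))
                         for _ in range(10_000)])
        p_value = np.mean(null <= observed)    # small lags would indicate clustering
        print(observed, p_value)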

  11. Automated brain volumetrics in multiple sclerosis: a step closer to clinical application.

    PubMed

    Wang, C; Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G; Barnett, M H

    2016-07-01

    Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix for assessment of cross-sectional WBV in patients with MS. MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement were calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
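
    The agreement statistics named above (Pearson precision r, and accuracy Cb as the bias-correction factor of Lin's concordance correlation, CCC = r * Cb) can be sketched as follows; the volume arrays are simulated placeholders, not study data.

        import numpy as np

        def agreement(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            r = np.corrcoef(x, y)[0, 1]
            ccc = 2 * np.cov(x, y, bias=True)[0, 1] / (
                x.var() + y.var() + (x.mean() - y.mean()) ** 2)
            return r, ccc, ccc / r   # precision, concordance, accuracy (Cb)

        rng = np.random.default_rng(1)
        sienax = rng.normal(1500, 80, 63)                      # 63 patients, volumes in mL
        neuroquant = 0.945 * sienax + rng.normal(0, 10, 63)    # ~5.5% systematic offset
        print(agreement(sienax, neuroquant))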

  12. Manual therapy compared with physical therapy in patients with non-specific neck pain: a randomized controlled trial.

    PubMed

    Groeneweg, Ruud; van Assen, Luite; Kropman, Hans; Leopold, Huco; Mulder, Jan; Smits-Engelsman, Bouwien C M; Ostelo, Raymond W J G; Oostendorp, Rob A B; van Tulder, Maurits W

    2017-01-01

    Manual therapy according to the School of Manual Therapy Utrecht (MTU) is a specific type of passive manual joint mobilization. MTU has not yet been systematically compared to other manual therapies and physical therapy. In this study, the effectiveness of MTU is compared to physical therapy, particularly active exercise therapy (PT), in patients with non-specific neck pain. Patients with neck pain, aged 18 to 70 years, were included in a pragmatic randomized controlled trial with a one-year follow-up. Primary outcome measures were global perceived effect and functioning (Neck Disability Index); the secondary outcome was pain intensity (Numeric Rating Scale for Pain). Outcomes were measured at 3, 7, 13, 26 and 52 weeks. Multilevel analyses (intention-to-treat) were the primary analyses for overall between-group differences. In addition to the primary and secondary outcomes, the number of treatment sessions in the MTU and PT groups was analyzed. Data were collected from September 2008 to February 2011. A total of 181 patients were included. Multilevel analyses showed no statistically significant overall differences at one year between the MTU and PT groups on any of the primary and secondary outcomes. The MTU group had significantly fewer treatment sessions than the PT group (3.1 vs. 5.9 after 7 weeks; 6.1 vs. 10.0 after 52 weeks). Patients with neck pain improved in both groups, without statistically significant or clinically relevant differences between the MTU and PT groups during the one-year follow-up. ClinicalTrials.gov Identifier: NCT00713843.

  13. Isotropy analyses of the Planck convergence map

    NASA Astrophysics Data System (ADS)

    Marques, G. A.; Novaes, C. P.; Bernui, A.; Ferreira, I. S.

    2018-01-01

    The presence of matter in the path of relic photons causes distortions in the angular pattern of the cosmic microwave background (CMB) temperature fluctuations, modifying their properties in a slight but measurable way. Recently, the Planck Collaboration released the estimated convergence map, an integrated measure of the large-scale matter distribution that produced the weak gravitational lensing (WL) phenomenon observed in Planck CMB data. We perform exhaustive analyses of this convergence map, calculating the variance in small and large regions of the sky, excluding the area masked due to Galactic contamination, and compare them with the features expected in the set of simulated convergence maps also released by the Planck Collaboration. Our goal is to search for sky directions or regions where the WL imprints anomalous signatures on the variance estimator, revealed through a χ² analysis at a statistically significant level. In the local analysis of the Planck convergence map, we identified eight patches of the sky in disagreement, at more than 2σ, with what is observed in the average of the simulations. In contrast, in the large-regions analysis we found no statistically significant discrepancies, but, interestingly, the regions with the highest χ² values surround the ecliptic poles. Thus, our results show a good agreement with the features expected by the Λ cold dark matter concordance model, as given by the simulations. Yet, the outlier regions found here could suggest that the data still contain residual contamination, such as noise, due to over- or underestimation of systematic effects in the simulation data set.
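
    The local variance test has this basic shape (a hedged sketch, with random arrays standing in for the HEALPix patches of the convergence map and its simulation ensemble):

        import numpy as np

        rng = np.random.default_rng(2)
        n_sim, n_patch, n_pix = 300, 192, 256
        sims = rng.normal(0, 1, (n_sim, n_patch, n_pix))  # simulated convergence maps
        data = rng.normal(0, 1, (n_patch, n_pix))         # "observed" map, pixels per patch

        var_data = data.var(axis=1)
        var_sims = sims.var(axis=2)                       # shape (n_sim, n_patch)
        z = (var_data - var_sims.mean(axis=0)) / var_sims.std(axis=0)
        print(np.where(np.abs(z) > 2)[0])                 # patches discrepant at >2 sigma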

  14. Brushing force of manual and sonic toothbrushes affects dental hard tissue abrasion.

    PubMed

    Wiegand, Annette; Burkhard, John Patrik Matthias; Eggmann, Florin; Attin, Thomas

    2013-04-01

    This study aimed to determine the brushing forces applied during in vivo toothbrushing with manual and sonic toothbrushes and to analyse the effect of these brushing forces on abrasion of sound and eroded enamel and dentin in vitro. Brushing forces of a manual and two sonic toothbrushes (low and high frequency mode) were measured in 27 adults before and after instruction in the respective brushing technique and statistically analysed by repeated measures analysis of variance (ANOVA). In the in vitro experiment, sound and eroded enamel and dentin specimens (each subgroup n = 12) were brushed in an automatic brushing machine with the respective brushing forces using a fluoridated toothpaste slurry. Abrasion was determined by profilometry and statistically analysed by one-way ANOVA. The average brushing force of the manual toothbrush (1.6 ± 0.3 N) was significantly higher than that of the sonic toothbrushes (0.9 ± 0.2 N), which were not significantly different from each other. Brushing forces before and after instruction in the brushing technique were not significantly different. The manual toothbrush caused the highest abrasion on sound and eroded dentin, but the lowest on sound enamel. No significant differences were detected on eroded enamel. Brushing forces of manual and sonic toothbrushes are different and affect their abrasive capacity. Patients with severe tooth wear and exposed and/or eroded dentin surfaces should use sonic toothbrushes to reduce abrasion, while patients without tooth wear, or with erosive lesions confined to enamel, do not benefit from sonic toothbrushes with regard to abrasion.

  15. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses

    PubMed Central

    Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295
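
    A stripped-down version of trial-based resampling for network uncertainty might look like this; correlation stands in for the coupling measure and the data are simulated placeholders.

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, n_sensors, n_time = 100, 8, 200
        trials = rng.normal(0, 1, (n_trials, n_sensors, n_time))  # trial x sensor x time

        def network_density(idx, thresh=0.2):
            """Fraction of sensor pairs whose trial-averaged |correlation| exceeds thresh."""
            corr = np.mean([np.corrcoef(trials[i]) for i in idx], axis=0)
            edges = np.abs(corr[np.triu_indices(n_sensors, k=1)]) > thresh
            return edges.mean()

        # Bootstrap over trials to attach a confidence interval to the aggregate measure
        boot = [network_density(rng.integers(0, n_trials, n_trials)) for _ in range(500)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(network_density(np.arange(n_trials)), (lo, hi))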

  16. Functional constraints on tooth morphology in carnivorous mammals

    PubMed Central

    2012-01-01

    Background The range of potential morphologies resulting from evolution is limited by complex interacting processes, ranging from development to function. Quantifying these interactions is important for understanding adaptation and convergent evolution. Using three-dimensional reconstructions of carnivoran and dasyuromorph tooth rows, we compared statistical models of the relationship between tooth row shape and the opposing tooth row, a static feature, as well as measures of mandibular motion during chewing (occlusion), which are kinetic features. This is a new approach to quantifying functional integration because we use measures of movement and displacement, such as the amount the mandible translates laterally during occlusion, as opposed to conventional morphological measures, such as mandible length and geometric landmarks. By sampling two distantly related groups of ecologically similar mammals, we study carnivorous mammals in general rather than a specific group of mammals. Results Statistical model comparisons demonstrate that the best performing models always include some measure of mandibular motion, indicating that functional and statistical models of tooth shape as purely a function of the opposing tooth row are too simple and that increased model complexity provides a better understanding of tooth form. The predictors of the best performing models always included the opposing tooth row shape and a relative linear measure of mandibular motion. Conclusions Our results provide quantitative support of long-standing hypotheses of tooth row shape as being influenced by mandibular motion in addition to the opposing tooth row. Additionally, this study illustrates the utility and necessity of including kinetic features in analyses of morphological integration. PMID:22899809

  17. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
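
    As a worked illustration of the diversity-adjusted required information size (RIS), a standard two-group sample-size formula for proportions can be inflated by 1/(1 - D²); the event rates and D² below are invented numbers, not from any particular review.

        from scipy.stats import norm

        alpha, beta = 0.05, 0.10             # type I and type II error levels
        p_ctrl, rrr = 0.20, 0.20             # control event rate, assumed relative risk reduction
        p_exp = p_ctrl * (1 - rrr)
        p_bar = (p_ctrl + p_exp) / 2
        delta = p_ctrl - p_exp

        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
        ris = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2

        d2 = 0.25                            # diversity estimated from the meta-analysis
        print(ris, ris / (1 - d2))           # unadjusted vs diversity-adjusted RIS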

  18. A Space–Time Permutation Scan Statistic for Disease Outbreak Detection

    PubMed Central

    Kulldorff, Martin; Heffernan, Richard; Hartman, Jessica; Assunção, Renato; Mostashari, Farzad

    2005-01-01

    Background The ability to detect disease outbreaks early is important in order to minimize morbidity and mortality through timely implementation of disease prevention and control measures. Many national, state, and local health departments are launching disease surveillance systems with daily analyses of hospital emergency department visits, ambulance dispatch calls, or pharmacy sales for which population-at-risk information is unavailable or irrelevant. Methods and Findings We propose a prospective space–time permutation scan statistic for the early detection of disease outbreaks that uses only case numbers, with no need for population-at-risk data. It makes minimal assumptions about the time, geographical location, or size of the outbreak, and it adjusts for natural purely spatial and purely temporal variation. The new method was evaluated using daily analyses of hospital emergency department visits in New York City. Four of the five strongest signals were likely local precursors to citywide outbreaks due to rotavirus, norovirus, and influenza. The number of false signals was at most modest. Conclusion If such results hold up over longer study times and in other locations, the space–time permutation scan statistic will be an important tool for local and national health departments that are setting up early disease detection surveillance systems. PMID:15719066
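
    The core of the method is a Poisson generalized likelihood ratio over space-time cylinders, with expected counts derived from the row and column margins alone; a hedged sketch on a toy zones-by-days count matrix:

        import numpy as np

        counts = np.random.default_rng(4).poisson(3, (20, 30))  # zones x days
        C = counts.sum()

        def glr(zones, days):
            """Log generalized likelihood ratio for one space-time cylinder."""
            c = counts[np.ix_(zones, days)].sum()
            mu = counts[zones, :].sum() * counts[:, days].sum() / C
            if c <= mu:
                return 0.0
            return c * np.log(c / mu) + (C - c) * np.log((C - c) / (C - mu))

        # One candidate cylinder: zones 3-5 during the last 7 days
        print(glr(np.arange(3, 6), np.arange(23, 30)))
        # In practice the maximum GLR over all cylinders is ranked against maxima
        # from Monte Carlo permutations of the case timestamps.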

  19. Statistical analysis of environmental monitoring data: does a worst case time for monitoring clean rooms exist?

    PubMed

    Cundell, A M; Bean, R; Massimore, L; Maier, C

    1998-01-01

    To determine the relationship between the sampling time of environmental monitoring (i.e., viable counts) in aseptic filling areas and the microbial count and frequency of alerts for air, surface and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Except for one floor location, which had a significantly higher number of counts (but no alert- or action-level samples) in the first two hours of operation, statistical analyses showed no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at that floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst case time for environmental monitoring at that facility, and that sampling at any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during set-up and aseptic filling.

  1. Reliability, precision, and measurement in the context of data from ability tests, surveys, and assessments

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.

    2010-07-01

    Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.

  2. A statistical examination of Nimbus 7 SMMR data and remote sensing of sea surface temperature, liquid water content in the atmosphere and surface wind speed

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Wang, I.; Chang, A. T. C.; Gloersen, P.

    1982-01-01

    Nimbus 7 Scanning Multichannel Microwave Radiometer (SMMR) brightness temperature measurements over the global oceans have been examined with the help of statistical and empirical techniques. Such analyses show that zonal averages of brightness temperature measured by SMMR over the oceans are, on a large scale, primarily influenced by the water vapor in the atmosphere. Liquid water in clouds and rain, which has a much smaller spatial and temporal scale, contributes substantially to the variability of the SMMR measurements within the latitudinal zones. The surface wind not only increases the surface emissivity but, through its interactions with the atmosphere, produces correlations in the SMMR brightness temperature data that have significant meteorological implications. It is found that a simple meteorological model can explain the general characteristics of the SMMR data. With the help of this model, methods are developed to infer the sea surface temperature, the liquid water content of the atmosphere, and the surface wind speed over the global oceans. Monthly mean estimates of the sea surface temperature and surface winds are compared with ship measurements. Estimates of the liquid water content of the atmosphere are consistent with earlier satellite measurements.

  3. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
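
    A small sketch of the key requirement stated above (a similarity kernel must yield a positive semidefinite matrix), using a linear, genomic-relationship-style genotype kernel; the genotype matrix is simulated.

        import numpy as np

        rng = np.random.default_rng(5)
        G = rng.integers(0, 3, size=(40, 500)).astype(float)  # subjects x SNPs, 0/1/2 coding

        Gc = G - G.mean(axis=0)                # centre each SNP
        K = Gc @ Gc.T / G.shape[1]             # linear (GRM-style) similarity kernel

        eigvals = np.linalg.eigvalsh(K)
        print(K.shape, eigvals.min() >= -1e-10)  # PSD up to numerical round-off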

  4. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    PubMed

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that was beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
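
    The SPC idea translates into a simple individuals control chart; the stimulation-index values below are simulated stand-ins for BeBLPT control data, with a deliberate upward shift in the last ten points.

        import numpy as np

        rng = np.random.default_rng(6)
        si = np.concatenate([rng.normal(2.0, 0.3, 60), rng.normal(2.9, 0.3, 10)])

        centre = si.mean()
        mr_bar = np.mean(np.abs(np.diff(si)))  # average moving range
        sigma = mr_bar / 1.128                 # d2 constant for subgroups of size 2
        ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

        out = np.where((si > ucl) | (si < lcl))[0]
        print(f"UCL={ucl:.2f} LCL={lcl:.2f}, out-of-control points: {out}")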

  5. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  6. Statistical Analysis of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  7. Statistical Analysis Of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  8. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  9. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism.

    PubMed

    Vesterinen, Hanna M; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich

    2011-04-01

    Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication.

  10. Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)

    PubMed Central

    Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa

    2012-01-01

    Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients and digitized so that they could be measured using the Digital Method. Likewise, CBCTs of those same patients were taken using the Dental Picasso Master 3D®, and the images obtained were analysed using the InVivoDental programme. Results: The regression lines for both measurement methods, together with the differences between their values, show the two methods to be comparable, even though the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words: Tooth-size, digital models, Bolton Index, CBCT. PMID:22549690

  11. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study

    PubMed Central

    2013-01-01

    Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) and small (<100 patients per arm) according to their sample sizes. A ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. ROR<1 indicated a larger beneficial effect in small trials. Small and large trials were compared in methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs <1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality in small trials. Caution should be practiced in the interpretation of meta-analyses involving small trials. PMID:23302257
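
    The ROR computation reduces to a difference of pooled log odds ratios; a minimal sketch with invented 2x2 tables (events/non-events for treatment and control arms), not data from the study:

        import numpy as np

        def pooled_log_or(tables):
            """Fixed-effect (inverse-variance) pooled log odds ratio and its variance."""
            lors = np.array([np.log(a * d / (b * c)) for a, b, c, d in tables])
            ws = np.array([1 / (1/a + 1/b + 1/c + 1/d) for a, b, c, d in tables])
            return np.sum(ws * lors) / ws.sum(), 1 / ws.sum()

        small = [(8, 42, 15, 35), (5, 45, 11, 39)]          # (a, b, c, d) per small trial
        large = [(60, 440, 70, 430), (55, 445, 62, 438)]    # per large trial
        (es, vs), (el, vl) = pooled_log_or(small), pooled_log_or(large)

        se = np.sqrt(vs + vl)
        print(np.exp(es - el),                              # ROR < 1: small trials look better
              np.exp(es - el - 1.96 * se), np.exp(es - el + 1.96 * se))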

  12. Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions.

    PubMed

    Coronado-Montoya, Stephanie; Levis, Alexander W; Kwakkenbos, Linda; Steele, Russell J; Turner, Erick H; Thombs, Brett D

    2016-01-01

    A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power. The objective of the present study was to characterize the reporting of "positive" results in randomized controlled trials of mindfulness-based therapy. We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that might be expected if mindfulness-based therapy were similarly effective compared to individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS were also searched for mindfulness-based therapy systematic reviews and meta-analyses. 108 (87%) of 124 published trials reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective; this is 1.6 times the number of positive trials expected if mindfulness-based therapy were similarly effective to individual therapy for depression (effect size d = 0.55, expected number of positive trials = 65.7). Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion. No trial registration adequately specified a single primary outcome measure with time of assessment. None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases. The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.

  13. A risk-based statistical investigation of the quantification of polymorphic purity of a pharmaceutical candidate by solid-state 19F NMR.

    PubMed

    Barry, Samantha J; Pham, Tran N; Borman, Phil J; Edwards, Andrew J; Watson, Simon A

    2012-01-27

    The DMAIC (Define, Measure, Analyse, Improve and Control) framework and associated statistical tools have been applied to both identify and reduce variability observed in a quantitative (19)F solid-state NMR (SSNMR) analytical method. The method had been developed to quantify levels of an additional polymorph (Form 3) in batches of an active pharmaceutical ingredient (API), where Form 1 is the predominant polymorph. In order to validate analyses of the polymorphic form, a single batch of API was used as a standard each time the method was used. The level of Form 3 in this standard was observed to gradually increase over time, the effect not being immediately apparent due to method variability. In order to determine the cause of this unexpected increase and to reduce method variability, a risk-based statistical investigation was performed to identify potential factors which could be responsible for these effects. Factors identified by the risk assessment were investigated using a series of designed experiments to gain a greater understanding of the method. The increase of the level of Form 3 in the standard was primarily found to correlate with the number of repeat analyses, an effect not previously reported in the SSNMR literature. Differences in data processing (phasing and linewidth) were found to be responsible for the variability in the method. After implementing corrective actions, the variability was reduced such that the level of Form 3 was within an acceptable range of ±1% w/w in fresh samples of API. Copyright © 2011. Published by Elsevier B.V.

  14. Perception that "everything requires a lot of effort": transcultural SCL-25 item validation.

    PubMed

    Moreau, Nicolas; Hassan, Ghayda; Rousseau, Cécile; Chenguiti, Khalid

    2009-09-01

    This brief report illustrates how the migration context can affect the validity of specific items in mental health measures. The SCL-25 was administered to 432 recently settled immigrants (220 Haitian and 212 Arab). We performed descriptive analyses, as well as infit and outfit statistics, using the WINSTEPS Rasch measurement software based on item response theory. The participants' comments about the SCL-25 item You feel everything requires a lot of effort were also qualitatively analyzed. Results revealed that the item You feel everything requires a lot of effort is an outlier and does not fit with its cluster items in an expected and valid fashion, as it is over-endorsed by healthy Haitian and Arab participants. Our study thus shows that, in transcultural mental health research, the cultural and migratory contexts may interact and significantly influence the meaning of some symptom items and, consequently, the validity of symptom scales.

  15. Rainfall: State of the Science

    NASA Astrophysics Data System (ADS)

    Testik, Firat Y.; Gebremichael, Mekonnen

    Rainfall: State of the Science offers the most up-to-date knowledge on the fundamental and practical aspects of rainfall. Each chapter, self-contained and written by prominent scientists in their respective fields, provides three forms of information: fundamental principles, detailed overview of current knowledge and description of existing methods, and emerging techniques and future research directions. The book discusses • Rainfall microphysics: raindrop morphodynamics, interactions, size distribution, and evolution • Rainfall measurement and estimation: ground-based direct measurement (disdrometer and rain gauge), weather radar rainfall estimation, polarimetric radar rainfall estimation, and satellite rainfall estimation • Statistical analyses: intensity-duration-frequency curves, frequency analysis of extreme events, spatial analyses, simulation and disaggregation, ensemble approach for radar rainfall uncertainty, and uncertainty analysis of satellite rainfall products The book is tailored to be an indispensable reference for researchers, practitioners, and graduate students who study any aspect of rainfall or utilize rainfall information in various science and engineering disciplines.

  16. A lower bound on the number of cosmic ray events required to measure source catalogue correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolci, Marco; Romero-Wolf, Andrew; Wissel, Stephanie, E-mail: marco.dolci@polito.it, E-mail: Andrew.Romero-Wolf@jpl.nasa.gov, E-mail: swissel@calpoly.edu

    2016-10-01

    Recent analyses of cosmic ray arrival directions have resulted in evidence for a positive correlation with active galactic nuclei positions that has weak significance against an isotropic source distribution. In this paper, we explore the sample size needed to measure a highly statistically significant correlation to a parent source catalogue. We compare several scenarios for the directional scattering of ultra-high energy cosmic rays given our current knowledge of the galactic and intergalactic magnetic fields. We find significant correlations are possible for a sample of >1000 cosmic ray protons with energies above 60 EeV.

  18. Statistical Exposé of a Multiple-Compartment Anaerobic Reactor Treating Domestic Wastewater.

    PubMed

    Pfluger, Andrew R; Hahn, Martha J; Hering, Amanda S; Munakata-Marr, Junko; Figueroa, Linda

    2018-06-01

    Mainstream anaerobic treatment of domestic wastewater is a promising energy-generating treatment strategy; however, such reactors operated in colder regions are not well characterized. Performance data from a pilot-scale, multiple-compartment anaerobic reactor taken over 786 days were subjected to comprehensive statistical analyses. Results suggest that chemical oxygen demand (COD) was a poor proxy for organics in anaerobic systems as oxygen demand from dissolved inorganic material, dissolved methane, and colloidal material influence dissolved and particulate COD measurements. Additionally, univariate and functional boxplots were useful in visualizing variability in contaminant concentrations and identifying statistical outliers. Further, significantly different dissolved organic removal and methane production was observed between operational years, suggesting that anaerobic reactor systems may not achieve steady-state performance within one year. Last, modeling multiple-compartment reactor systems will require data collected over at least two years to capture seasonal variations of the major anaerobic microbial functions occurring within each reactor compartment.

  19. A phylogenetic transform enhances analysis of compositional microbiota data.

    PubMed

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-02-15

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
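
    As a rough illustration of the log-ratio idea underlying the PhILR approach, the sketch below implements a generic isometric log-ratio (ILR) transform with a Helmert-type orthonormal basis. It is not the published PhILR software, which derives its basis from a phylogenetic tree; the function names and example abundances are invented.

```python
# Minimal ILR sketch for compositional data. NOT the PhILR package:
# PhILR builds its orthonormal basis from a phylogeny, whereas this
# uses a generic Helmert-style basis purely for illustration.
import numpy as np

def clr(x):
    """Centered log-ratio transform of strictly positive compositions."""
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

def helmert_basis(d):
    """Orthonormal basis of the (d-1)-dim subspace orthogonal to ones."""
    h = np.zeros((d - 1, d))
    for i in range(1, d):
        h[i - 1, :i] = 1.0 / i
        h[i - 1, i] = -1.0
        h[i - 1] /= np.linalg.norm(h[i - 1])
    return h

def ilr(x, basis=None):
    """Isometric log-ratio transform: project clr(x) onto the basis."""
    d = x.shape[-1]
    if basis is None:
        basis = helmert_basis(d)
    return clr(x) @ basis.T

# Example: relative abundances of 4 taxa in 2 samples (rows sum to 1).
abund = np.array([[0.40, 0.30, 0.20, 0.10],
                  [0.25, 0.25, 0.25, 0.25]])
print(ilr(abund))  # (2, 3) array of unconstrained coordinates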

  20. OTD Observations of Continental US Ground and Cloud Flashes

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2007-01-01

    Lightning optical flash parameters (e.g., radiance, area, duration, number of optical groups, and number of optical events) derived from almost five years of Optical Transient Detector (OTD) data are analyzed. Hundreds of thousands of OTD flashes occurring over the continental US are categorized according to flash type (ground or cloud flash) using US National Lightning Detection Network™ (NLDN) data. The statistics of the optical characteristics of the ground and cloud flashes are inter-compared on an overall basis, and as a function of ground flash polarity. A standard two-distribution hypothesis test is used to inter-compare the population means of a given lightning parameter for the two flash types. Given the differences in the statistics of the optical characteristics, it is suggested that statistical analyses (e.g., Bayesian inference) of the space-based optical measurements might make it possible to successfully discriminate ground and cloud flashes a reasonable percentage of the time.

  1. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China in the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportion in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: The error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion of study design also decreased (χ² = 21.22, p<0.001), 50.9% (680/1,335) compared to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), 92.7% (945/1,019) compared to 78.2% (1,023/1,309), and interpretation (χ² = 27.26, p<0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), although some serious ones persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824
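
    The year-to-year decreases reported above are chi-square comparisons of two proportions. As a worked illustration, the sketch below applies SciPy's chi-square test to the self-consistent study-design counts quoted in the abstract (680/1,335 in 1998 vs 669/1,578 in 2008); without continuity correction this reproduces a statistic close to the reported 21.22.

```python
# Chi-square test comparing a defect proportion between two years,
# using the study-design counts quoted in the abstract above.
from scipy.stats import chi2_contingency
import numpy as np

#                 with defects, without defects
table = np.array([[680, 655],    # 1998 articles (680/1,335 = 50.9%)
                  [669, 909]])   # 2008 articles (669/1,578 = 42.4%)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")  # ~21.2, p < 0.001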

  2. Rasch analysis for psychometric improvement of science attitude rating scales

    NASA Astrophysics Data System (ADS)

    Oon, Pey-Tee; Fan, Xitao

    2017-04-01

    Students' attitude towards science (SAS) is often a subject of investigation in science education research. Rating-scale surveys are commonly used in the study of SAS. The present study illustrates how Rasch analysis can be used to provide psychometric information about SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an existing dataset of the Trends in International Mathematics and Science Study (TIMSS) (2011). Data for all the eighth-grade participants from Hong Kong and Singapore (N = 9942) were retrieved for analyses. Additional insights from Rasch analysis that are not commonly available from conventional test and item analyses were discussed, such as invariance measurement of SAS, unidimensionality of the SAS construct, optimum utilization of SAS rating categories, and item difficulty hierarchy in the SAS scale. Recommendations on how TIMSS items on the measurement of SAS can be better designed were discussed. The study also highlights the importance of using Rasch estimates for the statistical parametric tests (e.g. ANOVA, t-test) that are common in science education research for group comparisons.
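
    The attitude items analysed above are polytomous and would be fitted with a rating-scale extension of the Rasch model; as a minimal illustration of the underlying idea, the sketch below simulates responses under the simpler dichotomous Rasch model, in which endorsement probability depends only on the difference between person measure theta and item difficulty b. All values are simulated.

```python
# Dichotomous Rasch model sketch (illustrative only; the study above
# would use a polytomous rating-scale model).
import numpy as np

def rasch_prob(theta, b):
    """P(endorse) for person measure theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

rng = np.random.default_rng(0)
thetas = rng.normal(0, 1, size=500)        # simulated person measures
items = np.linspace(-2, 2, 20)             # item difficulty hierarchy
probs = rasch_prob(thetas[:, None], items[None, :])
responses = rng.binomial(1, probs)         # 500 x 20 response matrix

# In the Rasch model the raw score is a sufficient statistic for theta:
print(np.corrcoef(responses.sum(axis=1), thetas)[0, 1])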

  3. Does Anxiety Modify the Risk for, or Severity of, Conduct Problems Among Children With Co-Occurring ADHD: Categorical and Dimensional Analyses.

    PubMed

    Danforth, Jeffrey S; Doerfler, Leonard A; Connor, Daniel F

    2017-08-01

    The goal was to examine whether anxiety modifies the risk for, or severity of, conduct problems in children with ADHD. Assessment included both categorical and dimensional measures of ADHD, anxiety, and conduct problems. Analyses compared conduct problems between children with ADHD features alone versus children with co-occurring ADHD and anxiety features. When assessed by dimensional rating scales, results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety are at risk for more intense conduct problems. When assessment included a Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnosis via the Schedule for Affective Disorders and Schizophrenia for School Age Children-Epidemiologic Version (K-SADS), results showed that compared with children with ADHD alone, those children with ADHD co-occurring with anxiety neither had more intense conduct problems nor were they more likely to be diagnosed with oppositional defiant disorder or conduct disorder. Different methodological measures of ADHD, anxiety, and conduct problem features influenced the outcome of the analyses.

  4. A new principle for the standardization of long paragraphs for reading speed analysis.

    PubMed

    Radner, Wolfgang; Radner, Stephan; Diendorfer, Gabriela

    2016-01-01

    To investigate the reliability, validity, and statistical comparability of long paragraphs that were developed to be equivalent in construction and difficulty. Seven long paragraphs were developed that were equal in syntax, morphology, and number and position of words (111), with the same number of syllables (179) and number of characters (660). For validity analyses, the paragraphs were compared with the mean reading speed of a set of seven sentence optotypes of the RADNER Reading Charts (mean of 7 × 14 = 98 words read). Reliability analyses were performed by calculating the Cronbach's alpha value and the corrected total item correlation. Sixty participants (aged 20-77 years) read the paragraphs and the sentences (distance 40 cm; font: Times New Roman 12 pt). Test items were presented randomly; reading time was measured with a stopwatch. Reliability analysis yielded a Cronbach's alpha value of 0.988. When the long paragraphs were compared in pairwise fashion, significant differences were found in 13 of the 21 pairs (p < 0.05). In two sequences of three paragraphs each and in eight pairs of paragraphs, the paragraphs did not differ significantly, and these paragraph combinations are therefore suitable for comparative research studies. The mean reading speed was 173.34 ± 24.01 words per minute (wpm) for the long paragraphs and 198.26 ± 28.60 wpm for the sentence optotypes. The maximum difference in reading speed was 5.55 % for the long paragraphs and 2.95 % for the short sentence optotypes. The correlation between long paragraphs and sentence optotypes was high (r = 0.9243). Despite good reliability and equivalence in construction and degree of difficulty, a statistically significant difference in reading speed can occur between long paragraphs. Since statistical significance should be dependent only on the persons tested, either standardizing long paragraphs for statistical equality of reading speed measurements or increasing the number of presented paragraphs is recommended for comparative investigations.
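
    Reliability in the study above is summarized by Cronbach's alpha. The sketch below shows one conventional way to compute it from a persons-by-paragraphs matrix of reading speeds; the data are simulated, not the study's measurements.

```python
# Cronbach's alpha from a persons-by-items score matrix (simulated
# reading speeds in wpm, loosely matching the magnitudes above).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2D array, rows = persons, columns = items/paragraphs."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
person_speed = rng.normal(173, 24, size=(60, 1))   # stable person effect
noise = rng.normal(0, 8, size=(60, 7))             # paragraph-level noise
speeds = person_speed + noise                      # 60 readers x 7 paragraphs
print(round(cronbach_alpha(speeds), 3))            # high alpha expected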

  5. Evaluation of the Validity and Response Burden of Patient Self-Report Measures of the Pain Assessment Screening Tool and Outcomes Registry (PASTOR).

    PubMed

    Cook, Karon F; Kallen, Michael A; Buckenmaier, Chester; Flynn, Diane M; Hanling, Steven R; Collins, Teresa S; Joltes, Kristin; Kwon, Kyung; Medina-Torne, Sheila; Nahavandi, Parisa; Suen, Joshua; Gershon, Richard

    2017-07-01

    In 2009, the Army Pain Management Task Force was chartered. On the basis of their findings, the Department of Defense recommended a comprehensive pain management strategy that included development of a standardized pain assessment system that would collect patient-reported outcomes data to inform the patient-provider clinical encounter. The result was the Pain Assessment Screening Tool and Outcomes Registry (PASTOR). The purpose of this study was to assess the validity and response burden of the patient-reported outcome measures in PASTOR. Data for analyses were collected from 681 individuals who completed PASTOR at baseline and follow-up as part of their routine clinical care. The survey tool included self-report measures of pain severity and pain interference (measured using the National Institutes of Health Patient-Reported Outcome Measurement Information System [PROMIS] and the Defense and Veterans Pain Rating scale). PROMIS measures of pain correlates also were administered. Validation analyses included estimation of score associations among measures, comparison of scores of known groups, responsiveness, ceiling and floor effects, and response burden. Results of psychometric testing provided substantial evidence for the validity of PASTOR self-report measures in this population. Expected associations among scores largely supported the concurrent validity of the measures. Scores effectively distinguished among respondents on the basis of their self-reported impressions of general health. PROMIS measures were administered using computer adaptive testing and each, on average, required less than 1 minute to administer. Statistical and graphical analyses demonstrated the responsiveness of PASTOR measures over time. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  6. @neurIST complex information processing toolchain for the integrated management of cerebral aneurysms

    PubMed Central

    Villa-Uriol, M. C.; Berti, G.; Hose, D. R.; Marzo, A.; Chiarini, A.; Penrose, J.; Pozo, J.; Schmidt, J. G.; Singh, P.; Lycett, R.; Larrabide, I.; Frangi, A. F.

    2011-01-01

    Cerebral aneurysms are a multi-factorial disease with severe consequences. A core part of the European project @neurIST was the physical characterization of aneurysms to find candidate risk factors associated with aneurysm rupture. The project investigated measures based on morphological, haemodynamic and aneurysm wall structure analyses for more than 300 cases of ruptured and unruptured aneurysms, extracting descriptors suitable for statistical studies. This paper deals with the unique challenges associated with this task, and the implemented solutions. The consistency of results required by the subsequent statistical analyses, given the heterogeneous image data sources and multiple human operators, was met by a highly automated toolchain combined with training. A testimonial of the successful automation is the positive evaluation of the toolchain by over 260 clinicians during various hands-on workshops. The specification of the analyses required thorough investigations of modelling and processing choices, discussed in a detailed analysis protocol. Finally, an abstract data model governing the management of the simulation-related data provides a framework for data provenance and supports future use of data and toolchain. This is achieved by enabling the easy modification of the modelling approaches and solution details through abstract problem descriptions, removing the need of repetition of manual processing work. PMID:22670202

  7. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  8. SPSS and SAS programs for generalizability theory analyses.

    PubMed

    Mushquash, Christopher; O'Connor, Brian P

    2006-08-01

    The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example, using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors and generalizability coefficients, coefficients for D studies, and graphs of D study results.
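
    As a hedged sketch of what a one-facet generalizability analysis computes, the code below estimates variance components from ANOVA mean squares for a fully crossed persons-by-items design and forms a generalizability coefficient. It is a simplified stand-in for the SPSS/SAS/MATLAB programs described in the article, run on simulated data.

```python
# One-facet (persons x items) G-study sketch: variance components from
# ANOVA mean squares plus a relative generalizability coefficient.
import numpy as np

def g_study(x):
    """x: persons-by-items score matrix (fully crossed, no missing data)."""
    n_p, n_i = x.shape
    grand = x.mean()
    ms_p = n_i * ((x.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((x.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
    resid = (x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand)
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person variance component
    var_i = max((ms_i - ms_res) / n_p, 0.0)   # item variance component
    var_res = ms_res                          # person-x-item + error
    g_coef = var_p / (var_p + var_res / n_i)  # relative G coefficient
    return var_p, var_i, var_res, g_coef

rng = np.random.default_rng(2)
# Simulated 10-item scale for N = 329 (matching the example size above)
scores = rng.normal(3, 1, (329, 1)) + rng.normal(0, 0.5, (329, 10))
print(g_study(scores))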

  9. Chasing the peak: optimal statistics for weak shear analyses

    NASA Astrophysics Data System (ADS)

    Smit, Merijn; Kuijken, Konrad

    2018-01-01

    Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims: We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information of the whole field of view, instead of locally sparse and poorly constrained estimates. Methods: We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results: We find an improved performance of all estimators, when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than 30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias least affected by noise. We find the least absolute deviations estimator to be the most efficient estimator in almost all cases, except in the Gaussian case, where it is still competitive (0.83 < η < 5.1) and therefore robust. These results hold when fitting Fourier modes, where amplitudes of variation in ellipticity are determined to the order of 10^-3. Conclusions: The peak of the ellipticity distribution is a direct tracer of the underlying shear and unaffected by noise, and we have shown that estimators that are sensitive to a central cusp perform more efficiently, potentially reducing uncertainties and significantly decreasing noise bias. These results become increasingly important as survey sizes increase and systematic issues in shape measurements decrease.
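
    The efficiency comparisons above can be illustrated with a small Monte Carlo experiment. The sketch below compares the sample mean with the median (the least-absolute-deviations location estimator) on Gaussian and heavy-tailed samples; the distributions, sample sizes, and shear value are invented, and the biweight and convex-hull-peeling estimators are omitted for brevity.

```python
# Monte Carlo relative efficiency of the median vs the mean on a
# shear-like location problem. Illustrative distributions only.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_gal, true_shear = 2000, 1000, 0.02

def rel_efficiency(sampler):
    means, medians = [], []
    for _ in range(n_trials):
        e = true_shear + sampler()
        means.append(e.mean())
        medians.append(np.median(e))
    # variance ratio: > 1 means the median is the more efficient estimator
    return np.var(means) / np.var(medians)

gauss = lambda: rng.normal(0, 0.25, n_gal)
heavy = lambda: rng.standard_t(3, n_gal) * 0.15   # heavier tails

print("Gaussian sample:    ", round(rel_efficiency(gauss), 2))  # < 1
print("Heavy-tailed sample:", round(rel_efficiency(heavy), 2))  # > 1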

  10. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  11. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...

  12. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  13. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry measures all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and modifying garment sizes, and for building online clothing stores. In this paper, several groups of measured data are collected, and data errors are analysed by examining error frequencies and applying the analysis-of-variance method from mathematical statistics. The paper also addresses the accuracy of the measured data, the difficulty of measuring particular parts of the human body, the causes of data errors, and the key points for minimizing errors. By analysing the measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.

  14. Exploration of time-course combinations of outcome scales for use in a global test of stroke recovery.

    PubMed

    Goldie, Fraser C; Fulton, Rachael L; Dawson, Jesse; Bluhmki, Erich; Lees, Kennedy R

    2014-08-01

    Clinical trials for acute ischemic stroke treatment require large numbers of participants and are expensive to conduct. Methods that enhance statistical power are therefore desirable. We explored whether this can be achieved by a measure incorporating both early and late measures of outcome (e.g. seven-day NIH Stroke Scale combined with 90-day modified Rankin scale). We analyzed sensitivity to treatment effect, using proportional odds logistic regression for ordinal scales and the generalized estimating equation method for global outcomes, with all analyses adjusted for baseline severity and age. We ran simulations to assess relations between sample size and power for ordinal scales and corresponding global outcomes. We used R version 2.12.1 (R Development Core Team. R Foundation for Statistical Computing, Vienna, Austria) for simulations and SAS 9.2 (SAS Institute Inc., Cary, NC, USA) for all other analyses. Each scale considered for combination was sensitive to treatment effect in isolation. The mRS90 and NIHSS90 had adjusted odds ratios of 1.56 and 1.62, respectively. Adjusted odds ratios for global outcomes of the combination of mRS90 with NIHSS7 and NIHSS90 with NIHSS7 were 1.69 and 1.73, respectively. The smallest sample sizes required to generate statistical power ≥80% for mRS90, NIHSS7, and global outcomes of mRS90 and NIHSS7 combined and NIHSS90 and NIHSS7 combined were 500, 490, 400, and 380, respectively. When data concerning both early and late outcomes are combined into a global measure, there is increased sensitivity to treatment effect compared with solitary ordinal scales. This delivers a 20% reduction in required sample size at 80% power. Combining early with late outcomes merits further consideration. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
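
    A simulation along the lines described above can show the power advantage of analysing an ordinal outcome directly rather than dichotomizing it. In the hedged sketch below, the category probabilities, sample size, and cut-point are invented, and a Mann-Whitney test stands in for proportional odds regression as the ordinal analysis.

```python
# Power simulation: ordinal analysis vs dichotomized analysis of a
# 5-category outcome. All effect sizes and probabilities are invented.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(4)
p_ctrl = np.array([0.15, 0.20, 0.25, 0.25, 0.15])  # control arm
p_trt  = np.array([0.10, 0.17, 0.24, 0.28, 0.21])  # mild improvement
n, sims, cut = 250, 1000, 3                        # dichotomize at >= 3

hits_ord = hits_bin = 0
for _ in range(sims):
    a = rng.choice(5, n, p=p_ctrl)
    b = rng.choice(5, n, p=p_trt)
    if mannwhitneyu(a, b).pvalue < 0.05:
        hits_ord += 1
    table = [[(a >= cut).sum(), (a < cut).sum()],
             [(b >= cut).sum(), (b < cut).sum()]]
    if chi2_contingency(table)[1] < 0.05:
        hits_bin += 1

print("power (ordinal):     ", hits_ord / sims)
print("power (dichotomized):", hits_bin / sims)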

  15. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

    Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary. PMID:29089821
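
    For the weighted-kappa analyses discussed above, scikit-learn provides a ready implementation. The sketch below computes unweighted and linearly weighted kappa for two hypothetical readers scoring the same ordinal ratings; the ratings are fabricated.

```python
# Linearly weighted kappa for ordinal ratings from two readers.
from sklearn.metrics import cohen_kappa_score

reader1 = [0, 1, 1, 2, 2, 3, 3, 3, 1, 0, 2, 3]
reader2 = [0, 1, 2, 2, 1, 3, 3, 2, 1, 0, 2, 3]

print("unweighted kappa:", round(cohen_kappa_score(reader1, reader2), 3))
print("linear-weighted :", round(cohen_kappa_score(reader1, reader2,
                                                   weights="linear"), 3))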

  16. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    PubMed

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
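
    An a priori power calculation of the kind the authors' tool supports can be sketched with statsmodels' generic two-sample t-test solver; this is a generic illustration, not the FreeSurfer-specific tool released with the paper.

```python
# A priori power and sensitivity analysis for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Subjects per group needed to detect a medium effect (Cohen's d = 0.5)
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"required n per group: {n:.1f}")

# Sensitivity: smallest detectable effect with 40 subjects per group
d = analysis.solve_power(nobs1=40, power=0.8, alpha=0.05)
print(f"detectable effect size: {d:.2f}")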

  17. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    PubMed Central

    Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-01-01

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695
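
    The direct-versus-indirect inconsistency measured above follows the Bucher approach: the indirect log odds ratio for A versus B is the difference of the A-versus-C and B-versus-C estimates, with variances adding. The sketch below computes the inconsistency z-test from invented summary statistics.

```python
# Bucher-style indirect comparison and an inconsistency z-test.
# All log odds ratios and standard errors are invented for illustration.
import numpy as np
from scipy.stats import norm

# Trials: A vs C and B vs C (common comparator C), plus a direct A vs B.
log_or_ac, se_ac = 0.40, 0.15
log_or_bc, se_bc = 0.10, 0.18
log_or_ab_direct, se_direct = 0.45, 0.20

# Indirect estimate of A vs B through C; variances add:
log_or_ab_indirect = log_or_ac - log_or_bc
se_indirect = np.sqrt(se_ac**2 + se_bc**2)

# Inconsistency = difference between direct and indirect log odds ratios
diff = log_or_ab_direct - log_or_ab_indirect
se_diff = np.sqrt(se_direct**2 + se_indirect**2)
z = diff / se_diff
p = 2 * norm.sf(abs(z))
print(f"indirect logOR = {log_or_ab_indirect:.2f}, "
      f"inconsistency = {diff:.2f} (p = {p:.2f})")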

  18. Analysis of data collected from right and left limbs: Accounting for dependence and improving statistical efficiency in musculoskeletal research.

    PubMed

    Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C

    2018-01-01

    Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended in generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
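
    A minimal version of the mixed-effects approach (Method 4) can be written with statsmodels: a random intercept per participant absorbs the correlation between limbs and trials. The data frame below is simulated, and the foot-region factor from the published model is omitted for brevity.

```python
# Mixed-effects sketch for paired-limb, repeated-trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for s in range(30):                            # 30 simulated participants
    subj_effect = rng.normal(0, 10)            # induces within-person correlation
    group = s % 3                              # three participant groups
    for foot in ("left", "right"):
        for trial in range(3):
            pressure = 200 + 8 * group + subj_effect + rng.normal(0, 5)
            rows.append(dict(subject=s, group=group, foot=foot,
                             trial=trial, pressure=pressure))
df = pd.DataFrame(rows)

# Random intercept per subject; all feet and trials contribute.
model = smf.mixedlm("pressure ~ C(group) + foot", df, groups="subject")
print(model.fit().summary())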

  19. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, errors included no prior sample size calculation 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting of confidence intervals with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation, mainly conclusions unsupported by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.

  1. The added value of ordinal analysis in clinical trials: an example in traumatic brain injury.

    PubMed

    Roozenbeek, Bob; Lingsma, Hester F; Perel, Pablo; Edwards, Phil; Roberts, Ian; Murray, Gordon D; Maas, Andrew Ir; Steyerberg, Ewout W

    2011-01-01

    In clinical trials, ordinal outcome measures are often dichotomized into two categories. In traumatic brain injury (TBI) the 5-point Glasgow outcome scale (GOS) is collapsed into unfavourable versus favourable outcome. Simulation studies have shown that exploiting the ordinal nature of the GOS increases chances of detecting treatment effects. The objective of this study is to quantify the benefits of ordinal analysis in the real-life situation of a large TBI trial. We used data from the CRASH trial that investigated the efficacy of corticosteroids in TBI patients (n = 9,554). We applied two techniques for ordinal analysis: proportional odds analysis and the sliding dichotomy approach, where the GOS is dichotomized at different cut-offs according to baseline prognostic risk. These approaches were compared to dichotomous analysis. The information density in each analysis was indicated by a Wald statistic. All analyses were adjusted for baseline characteristics. Dichotomous analysis of the six-month GOS showed a non-significant treatment effect (OR = 1.09, 95% CI 0.98 to 1.21, P = 0.096). Ordinal analysis with proportional odds regression or sliding dichotomy showed highly statistically significant treatment effects (OR 1.15, 95% CI 1.06 to 1.25, P = 0.0007 and 1.19, 95% CI 1.08 to 1.30, P = 0.0002), with 2.05-fold and 2.56-fold higher information density compared to the dichotomous approach respectively. Analysis of the CRASH trial data confirmed that ordinal analysis of outcome substantially increases statistical power. We expect these results to hold for other fields of critical care medicine that use ordinal outcome measures and recommend that future trials adopt ordinal analyses. This will permit detection of smaller treatment effects.
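
    As a sketch of the proportional odds approach and its Wald statistic, the code below fits statsmodels' ordinal logistic model to simulated 5-point outcomes; the effect size and cut-points are invented, and the data are not from the CRASH trial.

```python
# Proportional odds (ordinal logistic) fit with a Wald statistic for
# the treatment effect. Simulated GOS-like data, for illustration only.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 2000
treat = rng.integers(0, 2, n)
latent = 0.15 * treat + rng.logistic(size=n)       # modest ordinal shift
gos = pd.Series(pd.Categorical(np.digitize(latent, [-2, -1, 0.5, 2]),
                               ordered=True))

model = OrderedModel(gos, pd.DataFrame({"treat": treat}), distr="logit")
res = model.fit(method="bfgs", disp=False)
coef, se = res.params["treat"], res.bse["treat"]
print(f"logOR = {coef:.3f}, Wald = {(coef / se) ** 2:.1f}, "
      f"OR = {np.exp(coef):.2f}")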

  2. Exploring patient support by breast care nurses and geographical residence as moderators of the unmet needs and self-efficacy of Australian women with breast cancer: Results from a cross-sectional, nationwide survey.

    PubMed

    Ahern, Tracey; Gardner, Anne; Courtney, Mary

    2016-08-01

    This study investigated whether use of services of a breast care nurse (BCN) at any time during treatment for breast cancer led to reduced unmet needs and increased self-efficacy among women with breast cancer. A secondary aim was to analyse comparisons between urban and rural and remote dwellers. Participants were Australian women who completed treatment for breast cancer at least 6 months before the survey date, recruited through two national databases of women diagnosed with breast cancer. The cross-sectional online survey consisted of two well validated measures, the SCNS-SF34 and the CASE-Cancer Scale. Statistical data were analysed using SPSS, with chi-square used to measure statistical significance. A total of 902 participants responded to the survey. Unmet needs in the psychological domain were most prominent. Respondents who used the services of a BCN were significantly less likely to report unmet needs regarding tiredness, anxiety; future outlook; feelings about death and dying; patient care and support from medical staff; and provision of health systems and information. Scores of self-efficacy showed women using the services of a BCN had significantly higher self-efficacy when seeking and obtaining information (p ≤ 0.001) and understanding and participating in care (p = 0.032). Urban dwellers were more likely to have choice of health care service, but overall neither unmet needs nor perceived self-efficacy varied statistically significantly by remoteness. Women with breast cancer experience a range of unmet needs; however those using BCN services demonstrated positive outcomes in terms of decreased unmet needs and increased self-efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Attachment to Life: Psychometric Analyses of the Valuation of Life Scale and Differences Among Older Adults

    PubMed Central

    Gitlin, Laura N.; Parisi, Jeanine; Huang, Jin; Winter, Laraine; Roth, David L.

    2016-01-01

    Purpose of study: Examine psychometric properties of Lawton’s Valuation of Life (VOL) scale, a measure of an older adults’ assessment of the perceived value of their lives; and whether ratings differ by race (White, Black/African American) and sex. Design and Methods: The 13-item VOL scale was administered at baseline in 2 separate randomized trials (Advancing Better Living for Elders, ABLE; Get Busy Get Better, GBGB) for a total of 527 older adults. Principal component analyses were applied to a subset of ABLE data (subsample 1) and confirmatory factor analyses were conducted on remaining data (subsample 2 and GBGB). Once the factor structure was identified and confirmed, 2 subscales were created, corresponding to optimism and engagement. Convergent validity of total and subscale scores were examined using measures of depressive symptoms, social support, control-oriented strategies, mastery, and behavioral activation. For discriminant validity, indices of health status, physical function, financial strain, cognitive status, and number of falls were examined. Results: Trial samples (ABLE vs. GBGB) differed by age, race, marital status, education, and employment. Principal component analysis on ABLE subsample 1 (n = 156) yielded two factors subsequently confirmed in confirmatory factor analyses on ABLE subsample 2 (n = 163) and GBGB sample (N = 208) separately. Adequate fit was found for the 2-factor model. Correlational analyses supported strong convergent and discriminant validity. Some statistically significant race and sex differences in subscale scores were found. Implications: VOL measures subjective appraisals of perceived value of life. Consisting of two interrelated subscales, it offers an efficient approach to ascertain personal attributions. PMID:26874189

  4. Trends and variability of cloud fraction cover in the Arctic, 1982-2009

    NASA Astrophysics Data System (ADS)

    Boccolari, Mauro; Parmiggiani, Flavio

    2018-05-01

    Climatology, trends and variability of cloud fraction cover (CFC) data over the Arctic (north of 70°N) were analysed over the 1982-2009 period. Data, available from the Climate Monitoring Satellite Application Facility (CM SAF), are derived from satellite measurements by AVHRR. Climatological means confirm permanently high CFC values over the Atlantic sector throughout the year and, during summer, over the eastern Arctic Ocean. Lower values are found in the rest of the analysed area, especially over Greenland and the Canadian Archipelago, nearly continuously during all the months. These results are confirmed by CFC trends and variability. Statistically significant trends were found during all the months over the Greenland Sea, particularly during the winter season (negative, less than -5% dec^-1) and over the Beaufort Sea in spring (positive, more than +5% dec^-1). CFC variability, investigated by means of Empirical Orthogonal Functions, shows a substantial "non-variability" in the Northern Atlantic Ocean. Statistically significant correlations between CFC principal components and both the Pacific Decadal Oscillation index and the Pacific North America pattern are found.

  5. Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.

    PubMed

    Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K

    2018-06-05

    Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.

  6. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: • A new method of clinker characterization • Combination of electron probe technique with cluster analysis • Simultaneous assessment of phase abundance, composition and bulk chemistry • Experimental validation performed on industrial clinkers.

  7. Practice-based evidence study design for comparative effectiveness research.

    PubMed

    Horn, Susan D; Gassaway, Julie

    2007-10-01

    To describe a new, rigorous, comprehensive practice-based evidence for clinical practice improvement (PBE-CPI) study methodology, and compare its features, advantages, and disadvantages to those of randomized controlled trials and sophisticated statistical methods for comparative effectiveness research. PBE-CPI incorporates natural variation within data from routine clinical practice to determine what works, for whom, when, and at what cost. It uses the knowledge of front-line caregivers, who develop study questions and define variables as part of a transdisciplinary team. Its comprehensive measurement framework provides a basis for analyses of significant bivariate and multivariate associations between treatments and outcomes, controlling for patient differences, such as severity of illness. PBE-CPI studies can uncover better practices more quickly than randomized controlled trials or sophisticated statistical methods, while achieving many of the same advantages. We present examples of actionable findings from PBE-CPI studies in postacute care settings related to comparative effectiveness of medications, nutritional support approaches, incontinence products, physical therapy activities, and other services. Outcomes improved when practices associated with better outcomes in PBE-CPI analyses were adopted in practice.

  8. New instrument for measuring student beliefs about physics and learning physics: The Colorado Learning Attitudes about Science Survey

    NASA Astrophysics Data System (ADS)

    Adams, W. K.; Perkins, K. K.; Podolefsky, N. S.; Dubson, M.; Finkelstein, N. D.; Wieman, C. E.

    2006-06-01

    The Colorado Learning Attitudes about Science Survey (CLASS) is a new instrument designed to measure student beliefs about physics and about learning physics. This instrument extends previous work by probing additional aspects of student beliefs and by using wording suitable for students in a wide variety of physics courses. The CLASS has been validated using interviews, reliability studies, and extensive statistical analyses of responses from over 5000 students. In addition, a new methodology for determining useful and statistically robust categories of student beliefs has been developed. This paper serves as the foundation for an extensive study of how student beliefs impact and are impacted by their educational experiences. For example, this survey measures the following: that most teaching practices cause substantial drops in student scores; that a student’s likelihood of becoming a physics major correlates with their “Personal Interest” score; and that, for a majority of student populations, women’s scores in some categories, including “Personal Interest” and “Real World Connections,” are significantly different from men’s scores.

  9. Nutritional and food insecurity of construction workers.

    PubMed

    de Lima Brasil, Evi Clayton; de Araújo, Lindemberg Medeiros; de Toledo Vianna, Rodrigo Pinheiro

    2016-06-27

    Construction workers have intensive contact with their workplace and are possibly susceptible to Nutritional and Food Insecurity. This paper assessed the Food Security status, diet and anthropometric measures of workers in the Construction Industry living in the city of João Pessoa, PB. This cross-sectional study included 59 workers housed at construction sites. The workers were given the Brazilian Scale for Measuring Food Insecurity and Nutrition, had anthropometric measures taken and completed the Diet Quality Index, comparing their eating at the construction site and at home. Statistical analyses described the mean, standard deviation, frequency and Pearson correlations. Food Insecurity was reported by 71.2% of the workers, and 69.5% were overweight. The mean values of the Healthy Eating Index suggested that the workers' diets were in need of modification. There were statistically significant inverse associations among the Healthy Eating Index and Body Mass Index, waist circumference, percentage of total fat and cholesterol. Values obtained using the Scale showed Food Insecurity coupled with high excess weight and dietary inadequacies, revealing that these workers are at risk for health problems.

  10. Accuracy of Physical Self-Description Among Chronic Exercisers and Non-Exercisers.

    PubMed

    Berning, Joseph M; DeBeliso, Mark; Sevene, Trish G; Adams, Kent J; Salmon, Paul; Stamford, Bryant A

    2014-11-06

    This study addressed the role of chronic exercise to enhance physical self-description as measured by self-estimated percent body fat. Accuracy of physical self-description was determined in normal-weight, regularly exercising and non-exercising males with similar body mass index (BMI)'s and females with similar BMI's (n=42 males and 45 females of which 23 males and 23 females met criteria to be considered chronic exercisers). Statistical analyses were conducted to determine the degree of agreement between self-estimated percent body fat and actual laboratory measurements (hydrostatic weighing). Three statistical techniques were employed: Pearson correlation coefficients, Bland and Altman plots, and regression analysis. Agreement between measured and self-estimated percent body fat was superior for males and females who exercised chronically, compared to non-exercisers. The clinical implications are as follows. Satisfaction with one's body can be influenced by several factors, including self-perceived body composition. Dissatisfaction can contribute to maladaptive and destructive weight management behaviors. The present study suggests that regular exercise provides a basis for more positive weight management behaviors by enhancing the accuracy of self-assessed body composition.
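
    The Bland-Altman analysis used above reduces to a bias and 95% limits of agreement computed from paired differences. The sketch below does this for fabricated self-estimated versus measured percent body fat values.

```python
# Bland-Altman agreement sketch: bias and 95% limits of agreement.
# The paired values are fabricated for illustration.
import numpy as np

measured = np.array([18.2, 22.5, 15.1, 27.8, 20.4, 24.9, 17.3, 21.0])
estimated = np.array([17.0, 23.1, 16.2, 25.5, 20.0, 26.3, 18.1, 20.2])

diff = estimated - measured
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)       # half-width of the limits of agreement
print(f"bias = {bias:.2f} %BF, limits of agreement = "
      f"({bias - loa:.2f}, {bias + loa:.2f})")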

  11. Angiogenesis and lymphangiogenesis as prognostic factors after therapy in patients with cervical cancer

    PubMed Central

    Makarewicz, Roman; Kopczyńska, Ewa; Marszałek, Andrzej; Goralewska, Alina; Kardymowicz, Hanna

    2012-01-01

    Aim of the study This retrospective study attempts to evaluate the influence of serum vascular endothelial growth factor C (VEGF-C), microvessel density (MVD) and lymphatic vessel density (LMVD) on the result of tumour treatment in women with cervical cancer. Material and methods The research was carried out in a group of 58 patients scheduled for brachytherapy for cervical cancer. All women were patients of the Department and University Hospital of Oncology and Brachytherapy, Collegium Medicum in Bydgoszcz of Nicolaus Copernicus University in Toruń. VEGF-C was determined by means of a quantitative sandwich enzyme immunoassay, using a human VEGF-C ELISA kit produced by Bender MedSystem that detects the activity of human VEGF-C in body fluids. The measure for the intensity of angiogenesis and lymphangiogenesis in immunohistochemical reactions is the number of blood vessels within the tumour. Statistical analysis was done using Statistica 6.0 software (StatSoft, Inc. 2001). The Cox proportional hazards model was used for univariate and multivariate analyses. Univariate analysis of overall survival was performed as outlined by Kaplan and Meier. In all statistical analyses p < 0.05 was taken as significant. Results In 51 patients who showed up for follow-up examination, the influence of the factors of angiogenesis, lymphangiogenesis, patients' age and the level of haemoglobin at the end of treatment was assessed. Selected variables, such as patients' age, lymph vessel density (LMVD), microvessel density (MVD) and the level of haemoglobin (Hb) before treatment, were analysed by means of logistic regression as potential prognostic factors for lymph node invasion. The observed differences were statistically significant for haemoglobin level before treatment and the platelet number after treatment. The study revealed the following prognostic factors: lymph node status, FIGO stage, and kind of treatment. No statistically significant influence of angiogenic and lymphangiogenic factors on the prognosis was found. Conclusion Angiogenic and lymphangiogenic factors have no value in predicting response to radiotherapy in cervical cancer patients. PMID:23788848

  12. High precision mass measurements for wine metabolomics

    PubMed Central

    Roullier-Gall, Chloé; Witting, Michael; Gougeon, Régis D.; Schmitt-Kopplin, Philippe

    2014-01-01

    An overview of the critical steps for the non-targeted Ultra-High Performance Liquid Chromatography coupled with Quadrupole Time-of-Flight Mass Spectrometry (UPLC-Q-ToF-MS) analysis of wine chemistry is given, ranging from the study design, data preprocessing and statistical analyses, to marker identification. UPLC-Q-ToF-MS data were enhanced by the alignment of exact mass data from FTICR-MS, and marker peaks were identified using UPLC-Q-ToF-MS². In combination with multivariate statistical tools and the annotation of peaks with metabolites from relevant databases, this analytical process provides a fine description of the chemical complexity of wines, as exemplified in the case of red (Pinot noir) and white (Chardonnay) wines from various geographic origins in Burgundy. PMID:25431760
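
    Abstracts like this one lean on multivariate statistics to separate wine groups; a minimal sketch of that step, assuming a simulated (samples × features) intensity matrix in place of aligned UPLC-Q-ToF-MS peak tables:

```python
# Unsupervised PCA of a feature-intensity matrix; groups should separate
# along the leading components if their chemistry differs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
pinot = rng.normal(0.0, 1.0, (10, 500))   # 10 hypothetical Pinot noir samples
chard = rng.normal(0.5, 1.0, (10, 500))   # 10 hypothetical Chardonnay samples
X = np.vstack([pinot, chard])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:3])  # PC1/PC2 coordinates of the first three samples
```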

  13. [Pathogenetic therapy of mastopathies in the prevention of breast cancer].

    PubMed

    Iaritsyn, S S; Sidorenko, L N

    1979-01-01

    Breast cancer morbidity among the population of the city of Leningrad was analysed, showing a trend toward an increasing number of breast cancer patients. In this respect, attention is given to the prophylactic measures carried out at the Leningrad City oncological dispensary. Statistical analysis showed that pathogenetic therapy of mastopathy is a factor contributing to a lower risk of malignant transformation. For the statistical analysis the authors used data from 132 breast cancer patients previously operated on for local fibroadenomatosis, and data from 259 control patients. It was found that among the patients with fibroadenomatosis who subsequently developed cancer of the mammary gland, the proportion of untreated patients was 2.8 times that in the control group.

  14. High precision mass measurements for wine metabolomics

    NASA Astrophysics Data System (ADS)

    Roullier-Gall, Chloé; Witting, Michael; Gougeon, Régis; Schmitt-Kopplin, Philippe

    2014-11-01

    An overview of the critical steps for the non-targeted Ultra-High Performance Liquid Chromatography coupled with Quadrupole Time-of-Flight Mass Spectrometry (UPLC-Q-ToF-MS) analysis of wine chemistry is given, ranging from the study design, data preprocessing and statistical analyses, to marker identification. UPLC-Q-ToF-MS data were enhanced by the alignment of exact mass data from FTICR-MS, and marker peaks were identified using UPLC-Q-ToF-MS². In combination with multivariate statistical tools and the annotation of peaks with metabolites from relevant databases, this analytical process provides a fine description of the chemical complexity of wines, as exemplified in the case of red (Pinot noir) and white (Chardonnay) wines from various geographic origins in Burgundy.

  15. Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE

    PubMed Central

    Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

    2014-01-01

    AIM: To compare 3D black blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both the 3D black blood magnetic resonance imaging SPACE sequence and conventional 2D multi-contrast TSE sequences in a consolidated imaging approach within the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging; however, this may be because the coils were not optimized for femoral imaging. Quantitatively, higher mean total vessel area measurements were observed for the 3D SPACE technique across all three vascular beds. No significant differences in lumen area were observed between the two techniques for either the right or left carotids. Overall, a significant correlation existed between measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between the 3D SPACE and 2D TSE techniques are comparable. 3D SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923

  16. Comparison of the Complior Analyse device with Sphygmocor and Complior SP for pulse wave velocity and central pressure assessment.

    PubMed

    Stea, Francesco; Bozec, Erwan; Millasseau, Sandrine; Khettab, Hakim; Boutouyrie, Pierre; Laurent, Stéphane

    2014-04-01

    The Complior device (Alam Medical, France) was used in the epidemiological studies which established pulse wave velocity (PWV) as a cardiovascular risk marker. Central pressure is related, but complementary, to PWV and is also associated with cardiovascular outcomes. The new Complior Analyse measures both PWV and central blood pressure during the same acquisition. The aim of this study was to compare PWV values from the Complior Analyse with the previous Complior SP (PWVcs) and with the Sphygmocor (PWVscr; AtCor, Australia), and to compare central systolic pressure from the Complior Analyse and Sphygmocor. Peripheral and central pressures and PWV were measured with the three devices in 112 patients. PWV measurements from the Complior Analyse were analysed using two foot-detection algorithms (PWVca_it and PWVca_cs). Both radial (ao-SBPscr) and carotid (car-SBPscr) approaches from the Sphygmocor were compared to carotid Complior Analyse measurements (car-SBPca). The same distance and same calibrating pressures were used for all devices. PWVca_it was strongly correlated with PWVscr (R² = 0.93, P < 0.001), with a difference of 0.0 ± 0.7 m/s. PWVca_cs was also correlated with PWVcs (R² = 0.90, P < 0.001), with a difference of 0.1 ± 0.7 m/s. Central systolic pressures were strongly correlated. The difference between car-SBPca and ao-SBPscr was 3.1 ± 4.2 mmHg (P < 0.001), statistically equivalent to the difference between car-SBPscr and ao-SBPscr (3.9 ± 5.8 mmHg, P < 0.001), whilst the difference between car-SBPca and car-SBPscr was negligible (-0.7 ± 5.6 mmHg, P = NS). The new Complior Analyse device provides PWV and central pressure values equivalent to those of the Sphygmocor and Complior SP. It reaches the Association for the Advancement of Medical Instrumentation standard for central blood pressure and grades as excellent for PWV on the Artery Society criteria. It can be interchanged with existing devices.

  17. Elucidating the association between the self-harm inventory and several borderline personality measures in an inpatient psychiatric sample.

    PubMed

    Sellbom, Martin; Sansone, Randy A; Songer, Douglas A

    2017-09-01

    The current study evaluated the utility of the Self-Harm Inventory (SHI) as a proxy for and screening measure of borderline personality disorder (BPD), using several Diagnostic and Statistical Manual of Mental Disorders (DSM)-based BPD measures as criteria. We used a sample of 145 psychiatric inpatients, who completed the SHI and a series of well-validated, DSM-based self-report measures of BPD. Using a series of latent trait and latent class analyses, we found that the SHI was substantially associated with a latent construct representing BPD, and differentiated latent classes of 'high' vs. 'low' BPD with good accuracy. The SHI can serve as a proxy for, and a good screening measure of, BPD, but future research needs to replicate these findings using structured interview-based measurement of BPD.

  18. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  19. Behavioural laterality as a predictor of health in captive Caribbean flamingos (Phoenicopterus ruber): an exploratory analysis.

    PubMed

    Anderson, Matthew J; Ialeggio, Donna M

    2014-01-01

    The present study sought to explore the possibility that lateral behaviour in captive Caribbean flamingos (Phoenicopterus ruber) housed at the Philadelphia Zoo (Philadelphia, PA) could be used to predict a variety of physiological measures of health obtained via complete blood counts (CBC) and plasma biochemistry analyses performed as part of the flock's annual physical examination. Consistent with previous research, evidence of rightward lateral neck-resting preferences was obtained, no evidence was found for the existence of leg stance preferences, and neck-resting and leg stance preferences were shown to be unrelated. Both lateral neck-resting preferences and lateral support leg preference were shown to be related to a variety of measures from the CBC and plasma biochemistry analyses. While several general trends emerged with regard to the CBC variables, the relationships between the lateral behaviours and the variables generated via plasma biochemistry analyses proved to be fewer and somewhat less consistent. Birds with rightward neck-resting preferences and birds with leftward support leg preferences generally appeared to be healthier and less stressed according to the CBC measures; however, the validity of lateral leg stance preference as a predictor of health and wellbeing is questionable given the lack of statistically significant leg stance preferences.

  20. Study of the atmospheric conditions affecting infrared astronomical measurements at White Mountain, California

    NASA Technical Reports Server (NTRS)

    Field, G. B.

    1974-01-01

    Measurements of atmospheric conditions affecting astronomical observations at White Mountain, California, are described. Measurements were made more than 1400 times, spaced over more than 170 days, at the Summit Laboratory, and on a small number of days at the Barcroft Laboratory. The recorded quantities were ten-micron sky noise and precipitable water vapor, plus wet and dry bulb temperatures, wind speed and direction, brightness of the sky near the sun, fisheye-lens photographs of the sky, descriptions of cloud cover and other observable parameters, color photographs of air pollution, astronomical seeing, and occasional determinations of the visible-light brightness of the night sky. Measurements of some of these parameters have been made for over twenty years at the Barcroft and Crooked Creek Laboratories, and statistical analyses were made of them. These results and interpretations are given. The bulk of the collected data are statistically analyzed, and the disposition of the detailed data is described. Most of the data are available in machine-readable form. A detailed discussion of the techniques proposed for operation at White Mountain is given, showing how to cope with the mountain and climatic problems.

  1. Reliability and validity of MicroScribe-3DXL system in comparison with radiographic cephalometric system: Angular measurements.

    PubMed

    Barmou, Maher M; Hussain, Saba F; Abu Hassan, Mohamed I

    2018-06-01

    The aim of the study was to assess the reliability and validity of cephalometric variables from the MicroScribe-3DXL. Seven cephalometric variables (facial angle, ANB, maxillary depth, U1/FH, FMA, IMPA, FMIA) were measured by a dentist in 60 Malay subjects (30 males and 30 females) with class I occlusion and balanced faces. Two standard images were taken for each subject with conventional cephalometric radiography and the MicroScribe-3DXL. All the images were traced and analysed. SPSS version 2.0 was used for statistical analysis, with significance set at P < 0.05. The results revealed a statistically significant difference in four measurements (U1/FH, FMA, IMPA, FMIA), with P-values ranging from 0.00 to 0.03. The difference in the measurements was considered clinically acceptable. The overall reliability of the MicroScribe-3DXL was 92.7% and its validity was 91.8%. The MicroScribe-3DXL is reliable and valid for most of the cephalometric variables, with the advantages of saving time and cost. This is a promising device to assist in diverse areas of dental practice and research. Copyright © 2018. Published by Elsevier Masson SAS.

  2. [Clinical research XXIII. From clinical judgment to meta-analyses].

    PubMed

    Rivas-Ruiz, Rodolfo; Castelán-Martínez, Osvaldo D; Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Noyola-Castillo, Maura E; Talavera, Juan O

    2014-01-01

    Systematic reviews (SR) are studies designed to answer clinical questions based on original articles. A meta-analysis (MTA) is the mathematical analysis of an SR. These analyses are divided into two groups: those which evaluate the measured results of quantitative variables (for example, the body mass index, BMI) and those which evaluate qualitative variables (for example, whether a patient is alive or dead, or cured or not). Quantitative variables are generally analysed with the mean difference, and qualitative variables can be analysed using several calculations: odds ratio (OR), relative risk (RR), absolute risk reduction (ARR) and hazard ratio (HR). These analyses are represented through forest plots, which allow the evaluation of each individual study, as well as the heterogeneity between studies and the overall effect of the intervention. These analyses are mainly based on Student's t test and chi-squared. To make appropriate decisions based on an MTA, it is important to understand the characteristics of the statistical methods in order to avoid misinterpretations.
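
    A minimal worked example of the qualitative-outcome measures listed above (OR, RR, ARR), computed from a hypothetical 2×2 table; the counts are invented for illustration:

```python
# Effect measures for a binary outcome in treated vs control groups.
a, b = 20, 80   # treated: events, non-events (hypothetical counts)
c, d = 40, 60   # control: events, non-events

risk_t = a / (a + b)
risk_c = c / (c + d)

odds_ratio = (a * d) / (b * c)
relative_risk = risk_t / risk_c
arr = risk_c - risk_t            # absolute risk reduction
nnt = 1 / arr                    # number needed to treat, a common companion

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}, "
      f"ARR = {arr:.2%}, NNT = {nnt:.1f}")
```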

  3. Cross-sectional associations between air pollution and chronic bronchitis: an ESCAPE meta-analysis across five cohorts.

    PubMed

    Cai, Yutong; Schikowski, Tamara; Adam, Martin; Buschka, Anna; Carsin, Anne-Elie; Jacquemin, Benedicte; Marcon, Alessandro; Sanchez, Margaux; Vierkötter, Andrea; Al-Kanaani, Zaina; Beelen, Rob; Birk, Matthias; Brunekreef, Bert; Cirach, Marta; Clavel-Chapelon, Françoise; Declercq, Christophe; de Hoogh, Kees; de Nazelle, Audrey; Ducret-Stich, Regina E; Valeria Ferretti, Virginia; Forsberg, Bertil; Gerbase, Margaret W; Hardy, Rebecca; Heinrich, Joachim; Hoek, Gerard; Jarvis, Debbie; Keidel, Dirk; Kuh, Diana; Nieuwenhuijsen, Mark J; Ragettli, Martina S; Ranzi, Andrea; Rochat, Thierry; Schindler, Christian; Sugiri, Dorothea; Temam, Sofia; Tsai, Ming-Yi; Varraso, Raphaëlle; Kauffmann, Francine; Krämer, Ursula; Sunyer, Jordi; Künzli, Nino; Probst-Hensch, Nicole; Hansell, Anna L

    2014-11-01

    This study aimed to assess associations of outdoor air pollution with the prevalence of chronic bronchitis symptoms in adults in five cohort studies (Asthma-E3N, ECRHS, NSHD, SALIA, SAPALDIA) participating in the European Study of Cohorts for Air Pollution Effects (ESCAPE) project. Annual average particulate matter (PM10, PM2.5, PMabsorbance, PMcoarse), NO2, nitrogen oxides (NOx) and road traffic measures modelled from ESCAPE measurement campaigns 2008-2011 were assigned to home address at the most recent assessments (1998-2011). Symptoms examined were chronic bronchitis (cough and phlegm for ≥3 months of the year for ≥2 years), chronic cough (with/without phlegm) and chronic phlegm (with/without cough). Cohort-specific cross-sectional multivariable logistic regression analyses were conducted using common confounder sets (age, sex, smoking, interview season, education), followed by meta-analysis. 15 279 and 10 537 participants, respectively, were included in the main NO2 and PM analyses at assessments in 1998-2011. Overall, there were no statistically significant associations with any air pollutant or traffic exposure. Sensitivity analyses restricted to asthmatics only or females only, or using back-extrapolated NO2 and PM10 for assessments in 1985-2002 (ECRHS, NSHD, SALIA, SAPALDIA), did not alter the conclusions. In never-smokers, all associations were positive, but reached statistical significance only for chronic phlegm with PMcoarse (OR 1.31 (1.05 to 1.64) per 5 µg/m³ increase) and PM10 with a similar effect size. Sensitivity analyses of older cohorts showed an increased risk of chronic cough with PM2.5abs (black carbon) exposures. Results do not show consistent associations between chronic bronchitis symptoms and current traffic-related air pollution in adult European populations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. Statistical Analyses of Marine Mammal Occurrence, Habitat Associations and Interactions with Ocean Dynamic Features

    DTIC Science & Technology

    2006-03-30

    Sirena campaigns) have been successfully conducted in the northwestern Mediterranean Sea since 1999. Six sea trials have been conducted in the...trials (Sirena campaigns) have been successfully conducted in the northwestern Mediterranean Sea since 1999. Six sea trials have been conducted in the...oceanographic measurements in the canyon region. 3-13 Aug, 22 Aug-6 Sep, 23 Sep-7 Oct. [Figure 1: Sirena Sea Trials: yearly multi-platform at-sea]

  5. Psychological Analyses of Courageous Performance in Military Personnel

    DTIC Science & Technology

    1986-11-01

    schedule HR heart rate IBI inter-beat interval N number of subjects NS not statistically significant P probability PCA principal components analysis RAQ...tones in the range of 400 to 600 Hz, set at a level of 60 dB, transmitted for 1 sec binaurally through earphones from a commercial oscillator. The...because of interference on the recording trace. Cardiac activity was measured in terms of heart rate (HR). The number of beats/minute was estimated by

  6. Aerosols Observations with a new lidar station in Punta Arenas, Chile

    NASA Astrophysics Data System (ADS)

    Barja, Boris; Zamorano, Felix; Ristori, Pablo; Otero, Lidia; Quel, Eduardo; Sugimoto, Nobuo; Shimizu, Atsushi; Santana, Jorge

    2018-04-01

    A tropospheric lidar system was installed in Punta Arenas, Chile (53.13°S, 70.88°W) in September 2016 under the collaboration project SAVERNET (Chile, Japan and Argentina) to monitor the atmosphere. Statistical analyses of cloud and aerosol behaviour, together with several cases of dust detected with the lidar at this high-southern-latitude, cold-environment site during three months (austral spring), are discussed using information from satellite observations, modelling, and ground-based solar radiation measurements.

  7. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information

    PubMed Central

    van Heuven, Walter J. B.; Pitchford, Nicola J.; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties of any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as for combining several metrics to investigate linguistic properties of the Greek language, is highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word, accurate syllabification and orthographic information predictive of stress, and several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/. PMID:28231303

  8. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.

    PubMed

    Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim

    2015-06-15

    Structural kinetic modelling (SKM) is a framework for analysing whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge of individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady-state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that screens out such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main sources of potential instability are mutations in the enzyme alpha-ketoglutarate dehydrogenase. © The Author 2015. Published by Oxford University Press.
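
    A minimal sketch of the Monte Carlo stability screening that SKM rests on, using a toy autocatalytic two-metabolite pathway (an illustrative assumption, not the paper's neuronal TCA cycle): sample normalized elasticities, assemble the scaled Jacobian, and count stable models.

```python
# Monte Carlo sampling of elasticities for a toy SKM-style stability screen.
import numpy as np

rng = np.random.default_rng(3)
n_models, stable = 20000, 0

for _ in range(n_models):
    e21 = rng.uniform(0, 1)   # elasticity of v2 w.r.t. substrate S1
    e22 = rng.uniform(0, 2)   # elasticity of v2 w.r.t. S2 (autocatalytic feedback)
    e32 = rng.uniform(0, 1)   # elasticity of the consuming reaction v3 w.r.t. S2
    # Scaled Jacobian for dS1/dt = v1 - v2(S1,S2), dS2/dt = v2 - v3(S2)
    # at a normalized steady state with equal fluxes:
    J = np.array([[-e21,       -e22],
                  [ e21,  e22 - e32]])
    if np.all(np.linalg.eigvals(J).real < 0):
        stable += 1

print(f"fraction of sampled SKMs that are locally stable: {stable / n_models:.3f}")
```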

  9. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    PubMed

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties of any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, the need for alternative procedures for calculating syllabic measurements and stress information, as well as for combining several metrics to investigate linguistic properties of the Greek language, is highlighted. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word, accurate syllabification and orthographic information predictive of stress, and several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.

  10. The mammalian bony labyrinth reconsidered, introducing a comprehensive geometric morphometric approach

    PubMed Central

    Gunz, Philipp; Ramsier, Marissa; Kuhrig, Melanie; Hublin, Jean-Jacques; Spoor, Fred

    2012-01-01

    The bony labyrinth in the temporal bone houses the sensory systems of balance and hearing. While the overall structure of the semicircular canals and cochlea is similar across therian mammals, their detailed morphology varies even among closely related groups. As such, the shape of the labyrinth carries valuable functional and phylogenetic information. Here we introduce a new, semilandmark-based three-dimensional geometric morphometric approach to shape analysis of the labyrinth, as a major improvement upon previous metric studies based on linear measurements and angles. We first provide a detailed, step-by-step description of the measurement protocol. Subsequently, we test our approach using a geographically diverse sample of 50 recent modern humans and 30 chimpanzee specimens belonging to Pan troglodytes troglodytes and P. t. verus. Our measurement protocol can be applied to CT scans of different spatial resolutions because it primarily quantifies the midline skeleton of the bony labyrinth. Accurately locating the lumen centre of the semicircular canals and the cochlea is not affected by the partial volume and thresholding effects that can make the comparison of the outer border problematic. After virtually extracting the bony labyrinth from CT scans of the temporal bone, we computed its midline skeleton by thinning the encased volume. On the resulting medial axes of the semicircular canals and cochlea we placed a sequence of semilandmarks. After Procrustes superimposition, the shape coordinates were analysed using multivariate statistics. We found statistically significant shape differences between humans and chimpanzees which corroborate previous analyses of the labyrinth based on traditional measurements. As the geometric relationship among the semilandmark coordinates was preserved throughout the analysis, we were able to quantify and visualize even small-scale shape differences. Notably, our approach made it possible to detect and visualize subtle, yet statistically significant (P = 0.009), differences between two chimpanzee subspecies in the shape of their semicircular canals. The ability to discriminate labyrinth shape at the subspecies level demonstrates that the approach presented here has great potential in future taxonomic studies of fossil specimens. PMID:22404255
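
    A minimal sketch of the Procrustes superimposition step described above, assuming random stand-in landmark configurations rather than digitized labyrinth midlines:

```python
# Procrustes superimposition of two 3D semilandmark configurations:
# translation, scaling and rotation are removed before comparing shape.
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(4)
human = rng.normal(size=(60, 3))              # 60 hypothetical 3D semilandmarks
chimp = human + rng.normal(0, 0.05, (60, 3))  # a slightly deformed copy

mtx1, mtx2, disparity = procrustes(human, chimp)  # aligned configurations
print(f"Procrustes disparity (sum of squared residuals): {disparity:.4f}")
```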

  11. Local Geographic Variation of Public Services Inequality: Does the Neighborhood Scale Matter?

    PubMed Central

    Wei, Chunzhu; Cabrera-Barona, Pablo; Blaschke, Thomas

    2016-01-01

    This study aims to explore the effect of the neighborhood scale when estimating public services inequality based on the aggregation of social, environmental, and health-related indicators. Inequality analyses were carried out at three neighborhood scales: the original census blocks and two aggregated neighborhood units generated by the spatial “k”luster analysis by tree edge removal (SKATER) algorithm and the self-organizing map (SOM) algorithm. Then, we combined a set of health-related public services indicators with geographically weighted principal components analysis (GWPCA) and principal components analysis (PCA) to measure the public services inequality across all multi-scale neighborhood units. Finally, a statistical test was applied to evaluate the scale effects in inequality measurements by combining all available field survey data. We chose Quito as the case study area. All of the aggregated neighborhood units performed better than the original census blocks in terms of the social indicators extracted from a field survey. The SKATER and SOM algorithms can help to define the neighborhoods in inequality analyses. Moreover, GWPCA performs better than PCA in multivariate spatial inequality estimation. Understanding the scale effects is essential to sustain a social neighborhood organization, which, in turn, positively affects social determinants of public health and public quality of life. PMID:27706072

  12. Associations between DSM-5 section III personality traits and the Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scales in a psychiatric patient sample.

    PubMed

    Anderson, Jaime L; Sellbom, Martin; Ayearst, Lindsay; Quilty, Lena C; Chmielewski, Michael; Bagby, R Michael

    2015-09-01

    Our aim in the current study was to evaluate the convergence between Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) Section III dimensional personality traits, as operationalized via the Personality Inventory for DSM-5 (PID-5), and Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scale scores in a psychiatric patient sample. We used a sample of 346 (171 men, 175 women) patients who were recruited through a university-affiliated psychiatric facility in Toronto, Canada. We estimated zero-order correlations between the PID-5 and MMPI-2-RF substantive scale scores, as well as a series of exploratory structural equation modeling (ESEM) analyses to examine how these scales converged in multivariate latent space. Results generally showed empirical convergence between the scales of these two measures that were thematically meaningful and in accordance with conceptual expectations. Correlation analyses showed significant associations between conceptually expected scales, and the highest associations tended to be between scales that were theoretically related. ESEM analyses generated evidence for distinct internalizing, externalizing, and psychoticism factors across all analyses. These findings indicate convergence between these two measures and help further elucidate the associations between dysfunctional personality traits and general psychopathology. (c) 2015 APA, all rights reserved.

  13. A simple technique investigating baseline heterogeneity helped to eliminate potential bias in meta-analyses.

    PubMed

    Hicks, Amy; Fairhurst, Caroline; Torgerson, David J

    2018-03-01

    To perform a worked example of an approach that can be used to identify and remove potentially biased trials from meta-analyses via the analysis of baseline variables. True randomisation produces treatment groups that differ only by chance; therefore, a meta-analysis of a baseline measurement should produce no overall difference and zero heterogeneity. A meta-analysis from the British Medical Journal, known to contain significant heterogeneity and imbalance in baseline age, was chosen. Meta-analyses of baseline variables were performed and trials were systematically removed, starting with those with the largest t-statistic, until the I² measure of heterogeneity became 0%; the outcome meta-analysis was then repeated with only the remaining trials as a sensitivity check. We argue that heterogeneity in a meta-analysis of baseline variables should not exist, and therefore removing trials which contribute to heterogeneity from a meta-analysis will produce a more valid result. In our example none of the overall outcomes changed when studies contributing to heterogeneity were removed. We recommend routine use of this technique, using age and a second baseline variable predictive of outcome for the particular study chosen, to help eliminate potential bias in meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
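
    A minimal sketch of the screening technique described above, assuming hypothetical study-level mean differences and standard errors for baseline age: compute I² from Cochran's Q and drop the trial with the largest t-statistic until I² reaches 0%.

```python
# Iterative removal of trials contributing to baseline heterogeneity.
import numpy as np

md = np.array([0.2, -0.1, 3.5, 0.4, -0.3, 0.1])   # baseline mean differences
se = np.array([0.5,  0.4, 0.6, 0.5,  0.4, 0.3])   # their standard errors

def i_squared(md, se):
    w = 1 / se**2
    pooled = np.sum(w * md) / np.sum(w)    # fixed-effect pooled difference
    q = np.sum(w * (md - pooled)**2)       # Cochran's Q
    df = len(md) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

while i_squared(md, se) > 0:
    worst = np.argmax(np.abs(md / se))     # trial with the largest t-statistic
    md, se = np.delete(md, worst), np.delete(se, worst)

print(f"{len(md)} trials remain once baseline I² = 0%")
```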

  14. Spatial analyses of benthic habitats to define coral reef ecosystem regions and potential biogeographic boundaries along a latitudinal gradient.

    PubMed

    Walker, Brian K

    2012-01-01

    Marine organism diversity typically attenuates latitudinally from tropical to colder climate regimes. Since the distribution of many marine species relates to certain habitats and depth regimes, mapping data provide valuable information in the absence of detailed ecological data that can be used to identify and spatially quantify smaller-scale (tens of km) coral reef ecosystem regions and potential physical biogeographic barriers. This study focused on the southeast Florida coast due to a recognized, but understudied, tropical to subtropical biogeographic gradient. GIS spatial analyses were conducted on recent, accurate, shallow-water (0-30 m) benthic habitat maps to identify and quantify specific regions along the coast that were statistically distinct in the number and amount of major benthic habitat types. Habitat type and width were measured for 209 evenly-spaced cross-shelf transects. Evaluation of groupings from a cluster analysis at 75% similarity yielded five distinct regions. The number of benthic habitats and their area, width, distance from shore, distance from each other, and LIDAR depths were calculated in GIS and examined to determine regional statistical differences. The number of benthic habitats decreased with increasing latitude from 9 in the south to 4 in the north, and many of the habitat metrics statistically differed between regions. Three potential biogeographic barriers were found at the Boca, Hillsboro, and Biscayne boundaries, where specific shallow-water habitats were absent further north: Middle Reef, Inner Reef, and oceanic seagrass beds, respectively. The Bahamas Fault Zone boundary was also noted, where changes in coastal morphologies occurred that could relate to subtle ecological changes. The analyses defined regions on a smaller scale more appropriate to regional management decisions, hence strengthening marine conservation planning with an objective, scientific foundation for decision making. They provide a framework for similar regional analyses elsewhere.

  15. Spatial Analyses of Benthic Habitats to Define Coral Reef Ecosystem Regions and Potential Biogeographic Boundaries along a Latitudinal Gradient

    PubMed Central

    Walker, Brian K.

    2012-01-01

    Marine organism diversity typically attenuates latitudinally from tropical to colder climate regimes. Since the distribution of many marine species relates to certain habitats and depth regimes, mapping data provide valuable information in the absence of detailed ecological data that can be used to identify and spatially quantify smaller-scale (tens of km) coral reef ecosystem regions and potential physical biogeographic barriers. This study focused on the southeast Florida coast due to a recognized, but understudied, tropical to subtropical biogeographic gradient. GIS spatial analyses were conducted on recent, accurate, shallow-water (0–30 m) benthic habitat maps to identify and quantify specific regions along the coast that were statistically distinct in the number and amount of major benthic habitat types. Habitat type and width were measured for 209 evenly-spaced cross-shelf transects. Evaluation of groupings from a cluster analysis at 75% similarity yielded five distinct regions. The number of benthic habitats and their area, width, distance from shore, distance from each other, and LIDAR depths were calculated in GIS and examined to determine regional statistical differences. The number of benthic habitats decreased with increasing latitude from 9 in the south to 4 in the north, and many of the habitat metrics statistically differed between regions. Three potential biogeographic barriers were found at the Boca, Hillsboro, and Biscayne boundaries, where specific shallow-water habitats were absent further north: Middle Reef, Inner Reef, and oceanic seagrass beds, respectively. The Bahamas Fault Zone boundary was also noted, where changes in coastal morphologies occurred that could relate to subtle ecological changes. The analyses defined regions on a smaller scale more appropriate to regional management decisions, hence strengthening marine conservation planning with an objective, scientific foundation for decision making. They provide a framework for similar regional analyses elsewhere. PMID:22276204

  16. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.
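
    For readers who want the same sample-comparison tests outside the SOCR applets, a minimal scipy.stats sketch on simulated data, pairing the parametric t-test with its non-parametric counterparts:

```python
# Parametric and non-parametric sample-comparison tests on simulated groups;
# the group means are arbitrary illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
g1, g2, g3 = (rng.normal(mu, 1.0, 30) for mu in (0.0, 0.3, 0.6))

print(stats.ttest_ind(g1, g2))              # two-sample t-test (parametric)
print(stats.ranksums(g1, g2))               # Wilcoxon rank-sum test
print(stats.kruskal(g1, g2, g3))            # Kruskal-Wallis test
print(stats.friedmanchisquare(g1, g2, g3))  # Friedman test (related samples)
```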

  17. Single awakening salivary measurements provide reliable estimates of morning cortisol levels in pregnant women.

    PubMed

    Vlenterie, Richelle; Roeleveld, Nel; van Gelder, Marleen M H J

    2016-12-01

    Mood disorders during pregnancy have been associated with adverse effects on maternal as well as fetal health. Since mood, anxiety, and stress disorders are associated with elevated cortisol levels, salivary cortisol may be a useful biomarker. Although multiple samples are generally recommended, a single measurement of awakening salivary cortisol could be a simpler and more cost-effective method to determine whether women have elevated morning cortisol levels during a specific period of pregnancy. Therefore, the aim of this validation study among 177 women in the PRIDE Study was to examine whether one awakening salivary cortisol measurement will suffice to classify pregnant women as having normal or elevated cortisol levels compared to awakening salivary cortisol measurements on three consecutive working days. We calculated intraclass correlation coefficients (ICC) and Cohen's kappa statistics (κ) overall as well as in sub-analyses within strata based on maternal age, level of education, net household income, pre-pregnancy BMI, parity, complications during pregnancy, caffeine consumption, gestational week of sampling, and awakening time. The mean cortisol concentrations were 8.98 ng/ml (SD 5.32) for day one, 8.62 ng/ml (SD 4.55) for day two, and 8.39 ng/ml (SD 4.58) for day three. The overall ICC was 0.86 (95% CI 0.82-0.89) while the κ was 0.75 (95% CI 0.64-0.86). For the ICCs calculated within sub-analyses, a maximum difference of 0.11 was observed between the strata. For the κ statistics, most strata did not differ by more than 0.12, except for pre-pregnancy BMI, severe nausea, and extreme fatigue, with differences up to 0.22. In conclusion, one awakening salivary cortisol measurement is as reliable for the classification of pregnant women into normal and elevated morning cortisol levels as salivary cortisol measurements on three consecutive working days. Copyright © 2016 Elsevier Ltd. All rights reserved.
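
    A minimal sketch of a single-measurement, two-way random-effects ICC computed from mean squares on a simulated subjects × days cortisol matrix; whether the Shrout-Fleiss ICC(2,1) form used here matches the authors' exact ICC variant is an assumption.

```python
# ICC(2,1) from the two-way ANOVA mean squares of an n x k repeated-measures matrix.
import numpy as np

rng = np.random.default_rng(6)
n, k = 177, 3
subject = rng.normal(9.0, 4.0, (n, 1))         # stable between-woman levels
x = subject + rng.normal(0.0, 1.5, (n, k))     # day-to-day fluctuation

grand = x.mean()
ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # days
sse = np.sum((x - x.mean(axis=1, keepdims=True)
                - x.mean(axis=0, keepdims=True) + grand) ** 2)
ms_err = sse / ((n - 1) * (k - 1))

icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                              + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc21:.2f}")
```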

  18. Data precision of X-ray fluorescence (XRF) scanning of discrete samples with the ITRAX XRF core-scanner exemplified on loess-paleosol samples

    NASA Astrophysics Data System (ADS)

    Profe, Jörn; Ohlendorf, Christian

    2017-04-01

    XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and documents a new application of the XRF core-scanner technology at the same time. Reliable interpretation of XRF results requires evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was determined from ten-fold measurement of each sample. The precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the lowest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
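
    A minimal sketch of the Poisson expectation the precision analysis tests against: for a counting measurement, the relative standard deviation should approach 1/√N. The count rate below is hypothetical.

```python
# Observed vs theoretical (Poisson) relative standard deviation for
# ten simulated replicate scans of one element.
import numpy as np

rng = np.random.default_rng(7)
mean_counts = 40_000                        # hypothetical peak counts in 30 s
replicates = rng.poisson(mean_counts, 10)   # ten-fold measurement of one sample

observed_rsd = replicates.std(ddof=1) / replicates.mean()
poisson_rsd = 1 / np.sqrt(mean_counts)      # counting-statistics limit

print(f"observed RSD = {observed_rsd:.3%}, Poisson limit = {poisson_rsd:.3%}")
```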

  19. Learning physics concepts as a function of colloquial language usage

    NASA Astrophysics Data System (ADS)

    Maier, Steven J.

    Data from two sections of college introductory, algebra-based physics courses (n1 = 139, n2 = 91) were collected using three separate instruments to investigate the relationships between reasoning ability, conceptual gain and colloquial language usage. To obtain a measure of reasoning ability, Lawson's Classroom Test of Scientific Reasoning Ability (TSR) was administered once near mid-term for each sample. The Force Concept Inventory (FCI) was administered at the beginning and at the end of the term for pre- and post-test measures. Pre- and post-test data from the Mechanics Language Usage (MLU) instrument were also collected in conjunction with FCI data collection at the beginning and end of the term. The MLU was developed specifically for this study prior to data collection, and the results of a pilot test to establish validity and reliability are reported. T-tests were performed on the data collected to compare the means from each sample. In addition, correlations among the measures were investigated for the samples separately and combined. Results from these investigations served as justification for combining the samples into a single sample of 230 for further statistical analyses. The primary objective of this study was to determine whether scientific reasoning ability (a function of developmental stage) and conceptual gains in Newtonian mechanics predict students' usage of "force" as measured by the MLU. Regression analyses were performed to evaluate the mediated relationships, with TSR and FCI performance as predictors of MLU performance. Statistically significant correlations and relationships existed among several of the measures, which are discussed at length in the body of the narrative. The findings of this research are that although there exists a discernible relationship between reasoning ability and conceptual change, more work needs to be done to establish improved quantitative measures of the role language usage plays in developing understanding of course content.

  20. Statistical analysis plan for the family-led rehabilitation after stroke in India (ATTEND) trial: A multicenter randomized controlled trial of a new model of stroke rehabilitation compared to usual care.

    PubMed

    Billot, Laurent; Lindley, Richard I; Harvey, Lisa A; Maulik, Pallab K; Hackett, Maree L; Murthy, Gudlavalleti Vs; Anderson, Craig S; Shamanna, Bindiganavale R; Jan, Stephen; Walker, Marion; Forster, Anne; Langhorne, Peter; Verma, Shweta J; Felix, Cynthia; Alim, Mohammed; Gandhi, Dorcas Bc; Pandian, Jeyaraj Durai

    2017-02-01

    Background In low- and middle-income countries, few patients receive organized rehabilitation after stroke, yet the burden of chronic diseases such as stroke is increasing in these countries. Affordable models of effective rehabilitation could have a major impact. The ATTEND trial is evaluating a family-led caregiver delivered rehabilitation program after stroke. Objective To publish the detailed statistical analysis plan for the ATTEND trial prior to trial unblinding. Methods Based upon the published registration and protocol, the blinded steering committee and management team, led by the trial statistician, have developed a statistical analysis plan. The plan has been informed by the chosen outcome measures, the data collection forms and knowledge of key baseline data. Results The resulting statistical analysis plan is consistent with best practice and will allow open and transparent reporting. Conclusions Publication of the trial statistical analysis plan reduces potential bias in trial reporting, and clearly outlines pre-specified analyses. Clinical Trial Registrations India CTRI/2013/04/003557; Australian New Zealand Clinical Trials Registry ACTRN1261000078752; Universal Trial Number U1111-1138-6707.

  1. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
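
    A minimal sketch of the verification workflow described above, with synthetic stand-ins for the ARPS-derived predictors: fit a logistic model, dichotomize its probabilities at the two critical thresholds, and score PC and HKD (hit rate minus false alarm rate).

```python
# Logistic model of a synthetic "persistent contrail" outcome, scored with
# percent correct (PC) and the Hanssen-Kuipers discriminant (HKD).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 5000
X = np.column_stack([rng.normal(-55, 5, n),   # temperature (deg C)
                     rng.uniform(0, 130, n),  # RH with respect to ice (%)
                     rng.normal(0, 3, n)])    # vertical velocity (cm/s)
y = (X[:, 1] + rng.normal(0, 15, n) > 100).astype(int)  # synthetic occurrence

prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for threshold in (0.5, y.mean()):             # 0.5 vs climatological frequency
    pred = prob >= threshold
    hits = np.mean(pred[y == 1])              # hit rate
    false_alarms = np.mean(pred[y == 0])      # false alarm rate
    pc = np.mean(pred == y.astype(bool))
    print(f"threshold {threshold:.2f}: PC = {pc:.2f}, HKD = {hits - false_alarms:.2f}")
```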

  2. Analysis of elemental concentration censored distributions in breast malignant and breast benign neoplasm tissues

    NASA Astrophysics Data System (ADS)

    Kubala-Kukuś, A.; Banaś, D.; Braziewicz, J.; Góźdź, S.; Majewska, U.; Pajek, M.

    2007-07-01

    The total reflection X-ray fluorescence method was applied to study trace element concentrations in human breast malignant and breast benign neoplasm tissues taken from women who were patients of the Holycross Cancer Centre in Kielce (Poland). These investigations were mainly focused on the development of new possibilities for cancer diagnosis and therapy monitoring. This systematic comparative study was based on a relatively large study population (~100), namely 26 samples of breast malignant and 68 samples of breast benign neoplasm tissues. The concentrations, ranging from a few ppb to 0.1%, were determined for thirteen elements (from P to Pb). The results were carefully analysed to investigate the concentration distribution of trace elements in the studied samples. The measurement of trace element concentrations by total reflection X-ray fluorescence was limited, however, by the detection limit of the method. It was observed that for more than 50% of the elements determined, concentrations were not measured in all samples. These incomplete measurements were treated within the statistical concept of left-random censoring, and the Kaplan-Meier estimator was used to estimate the mean and median of the censored concentration distributions. For comparison of concentrations in two populations, the log-rank test was applied, which allows censored total reflection X-ray fluorescence data to be compared. The statistically significant differences found are discussed in more detail. It is noted that the described data-analysis procedures should be a standard tool for analyzing censored concentrations of trace elements measured by X-ray fluorescence methods.
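
    A minimal sketch of the Kaplan-Meier treatment of left-censored concentrations (the "flipping" construction; the data and detection limit are hypothetical): negate the values so left censoring becomes right censoring, fit a standard Kaplan-Meier curve, and read the concentration distribution back off it.

```python
# Kaplan-Meier estimation for left-censored concentrations via the flip trick.
import numpy as np

rng = np.random.default_rng(9)
conc = rng.lognormal(0.0, 1.0, 100)      # true concentrations (ppm)
dl = 0.8                                  # detection limit
observed = conc >= dl                     # False -> "<DL", left-censored
values = np.where(observed, conc, dl)

# Flip: left-censored at dl  ->  right-censored at -dl.
t, event = -values, observed
order = np.argsort(t)
t, event = t[order], event[order]

n_at_risk = np.arange(len(t), 0, -1)
surv = np.cumprod(np.where(event, 1 - 1 / n_at_risk, 1.0))  # KM survival

# Back-transform: P(conc <= c) = S_flipped(-c); report the estimated median.
cdf_support = -t[::-1]
cdf = surv[::-1]                          # estimated CDF on ascending support
median = cdf_support[np.searchsorted(cdf, 0.5)]
print(f"Kaplan-Meier estimate of the median concentration: {median:.2f} ppm")
```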

  3. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    PubMed

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect is underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We demonstrate our proposed approach for a two-sample summary data MR analysis to estimate the causal effect of low-density lipoprotein on heart disease risk. A high value of I²GX close to 1 indicates that dilution does not materially affect the standard MR-Egger analyses for these data. Care must be taken to assess the NOME assumption via the I²GX statistic before implementing standard MR-Egger regression in the two-sample summary data context. If I²GX is sufficiently low (less than 90%), inferences from the method should be interpreted with caution and adjustment methods considered. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.
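
    A minimal sketch of MR-Egger regression next to the IVW estimate on simulated summary data (not the paper's LDL example): a weighted regression of SNP-outcome on SNP-exposure associations, where the intercept tests directional pleiotropy.

```python
# IVW (no intercept) and MR-Egger (with intercept) fits on simulated
# per-SNP summary statistics; the true causal effect is set to 0.5.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
k = 30
beta_x = rng.uniform(0.05, 0.2, k)            # SNP-exposure associations
se_y = rng.uniform(0.01, 0.03, k)             # SNP-outcome standard errors
beta_y = 0.5 * beta_x + rng.normal(0, se_y)   # SNP-outcome associations
w = 1 / se_y**2

ivw = sm.WLS(beta_y, beta_x, weights=w).fit()                  # IVW slope
egger = sm.WLS(beta_y, sm.add_constant(beta_x), weights=w).fit()

print(f"IVW slope = {ivw.params[0]:.3f}")
print(f"MR-Egger slope = {egger.params[1]:.3f}, "
      f"intercept p (pleiotropy test) = {egger.pvalues[0]:.3f}")
```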

  4. Hydrologic Data from the Study of Acidic Contamination in the Miami Wash-Pinal Creek Area, Arizona, Water Years 1997-2004

    USGS Publications Warehouse

    Konieczki, A.D.; Brown, J.G.; Parker, J.T.C.

    2008-01-01

    Since 1984, hydrologic data have been collected as part of a U.S. Geological Survey study of the occurrence and movement of acidic contamination in the aquifer and streams of the Pinal Creek drainage basin near Globe, Arizona. Ground-water data from that study are presented for water years 1997 through 2004 and include location, construction information, site plans, water levels, chemical and physical field measurements, and selected chemical analyses of water samples for 31 project wells. Hydrographs of depth to ground water are also included. Surface-water data for four sites are also presented and include selected chemical analyses of water samples. Monthly precipitation data and long-term precipitation statistics are presented for two sites. Chemical analyses of samples collected from the stream and shallow ground water in the perennial reach of Pinal Creek are also included.

  5. Measurement of change in health status with Rasch models.

    PubMed

    Anselmi, Pasquale; Vidotto, Giulio; Bettinardi, Ornella; Bertolotti, Giorgio

    2015-02-07

    The traditional approach to the measurement of change presents important drawbacks (no information at the individual level, ordinal scores, variance of the measurement instrument across time points), which Rasch models overcome. The article aims to illustrate the features of the measurement of change with Rasch models. To illustrate the measurement of change using Rasch models, the quantitative data of a longitudinal study of heart-surgery patients (N = 98) were used. The scale "Perception of Positive Change" was used as an example of a measurement instrument. All patients underwent cardiac rehabilitation, individual psychological intervention, and educational intervention. Nineteen patients also attended progressive muscle relaxation group trainings. The scale was administered before and after the interventions. Three Rasch approaches were used. Two separate analyses were run on the data from the two time points to test the invariance of the instrument. An analysis was run on the stacked data from both time points to measure change in a common frame of reference. Results of the latter analysis were compared with those of an analysis that removed the influence of local dependency on patient measures. The t, χ² and F statistics were used for comparing the patient and item measures estimated in the Rasch analyses (a priori α = .05). Infit, Outfit, R and item Strata were used for investigating Rasch model fit, reliability, and validity of the instrument. Data from all 98 patients were included in the analyses. The instrument was reliable, valid, and substantively unidimensional (Infit, Outfit < 2 for all items, R = .84, item Strata range = 3.93-6.07). Changes in the functioning of the instrument occurred across the two time points, which prevented the use of the two separate analyses to unambiguously measure change. Local dependency had a negligible effect on patient measures (p ≥ .8674). Thirteen patients improved, whereas 3 worsened. The patients who attended the relaxation group trainings did not report greater improvement than those who did not (p = .1007). Rasch models represent a valid framework for the measurement of change and a useful complement to traditional approaches.
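
    A minimal sketch of the dichotomous Rasch model underlying such analyses, assuming anchored (fixed) item difficulties and one hypothetical response pattern; the person measure is the maximum-likelihood θ found by Newton-Raphson, and change would be the difference between pre- and post-intervention measures estimated this way.

```python
# ML estimation of a person measure under the dichotomous Rasch model,
# with item difficulties held fixed (anchored) across time points.
import numpy as np

b = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])   # anchored item difficulties (logits)
responses = np.array([1, 1, 1, 0, 1])       # one patient's scored answers

theta = 0.0
for _ in range(25):                          # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-(theta - b)))      # Rasch success probabilities
    grad = responses.sum() - p.sum()        # observed minus expected score
    info = np.sum(p * (1 - p))              # test information at theta
    theta += grad / info

print(f"person measure: {theta:.2f} logits")
```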

  6. Association between market concentration of hospitals and patient health gain following hip replacement surgery.

    PubMed

    Feng, Yan; Pistollato, Michele; Charlesworth, Anita; Devlin, Nancy; Propper, Carol; Sussex, Jon

    2015-01-01

    To assess the association between market concentration of hospitals (as a proxy for competition) and patient-reported health gains after elective primary hip replacement surgery. Patient Reported Outcome Measures data linked to NHS Hospital Episode Statistics in England in 2011/12 were used to analyse the association between hospital market concentration, measured by the Herfindahl-Hirschman Index (HHI), and health gains for 337 hospitals. The association between market concentration and patient gain in health status, measured by the change in Oxford Hip Score (OHS) after primary hip replacement surgery, was not statistically significant at the 5% level, either for the average patient or for those with more than average severity of hip disease (OHS worse than average). For 12,583 (49.1%) patients with an OHS before hip replacement surgery better than the mean, a one standard deviation increase in the HHI, equivalent to a reduction of about one hospital in the local market, was associated with a 0.104 decrease in patients' self-reported improvement in OHS after surgery, but this was not statistically significant at the 5% level. Hospital market concentration (as a proxy for competition) appears to have no significant influence (at the 5% level) on the outcome of elective primary hip replacement. The generalizability of this finding needs to be investigated. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
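
    The HHI named above is simple to compute; a short sketch, with made-up hospital volumes standing in for a local market:

    ```python
    import numpy as np

    def hhi(volumes):
        """HHI = sum of squared market shares (0..1 scale; 1 = monopoly)."""
        shares = np.asarray(volumes, dtype=float)
        shares = shares / shares.sum()
        return float(np.sum(shares**2))

    # Example: four hospitals with annual hip-replacement volumes.
    print(hhi([400, 300, 200, 100]))   # 0.30 (3000 on the 0-10,000 scale)
    ```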

  7. Human movement stochastic variability leads to diagnostic biomarkers In Autism Spectrum Disorders (ASD)

    NASA Astrophysics Data System (ADS)

    Wu, Di; Torres, Elizabeth B.; Jose, Jorge V.

    2015-03-01

    ASD is a spectrum of neurodevelopmental disorders. The high heterogeneity of the symptoms associated with the disorder impedes efficient diagnoses based on human observations. Recent advances in high-resolution MEMS wearable sensors enable accurate movement measurements that may escape the naked eye. This calls for objective metrics to extract physiologically relevant information from the rapidly accumulating data. In this talk we'll discuss the statistical analysis of movement data continuously collected with high-resolution sensors at 240 Hz. We calculated statistical properties of speed fluctuations within the millisecond time range that closely correlate with the subjects' cognitive abilities. We computed the periodicity and synchronicity of the speed fluctuations from their power spectrum and ensemble-averaged two-point cross-correlation function. We built a two-parameter phase space from the temporal statistical analyses of the nearest-neighbor fluctuations that provided a quantitative biomarker distinguishing ASD from normal adult subjects and further classified ASD severity. We also found age-related developmental statistical signatures and potential ASD parental links in our movement dynamical studies. Our results may have direct clinical applications.
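
    As an illustration of the kind of spectral computation described, a sketch that estimates the power spectrum of speed fluctuations from a synthetic 240 Hz trace; the signal and constants are invented, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 240.0                                  # sensor sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)                # one minute of data
    speed = 1.0 + 0.1 * np.sin(2 * np.pi * 4 * t) + rng.normal(0, 0.05, t.size)

    fluct = speed - speed.mean()                # speed fluctuations
    psd = np.abs(np.fft.rfft(fluct))**2 / fluct.size
    freqs = np.fft.rfftfreq(fluct.size, d=1 / fs)
    peak = freqs[np.argmax(psd[1:]) + 1]        # dominant periodicity (~4 Hz here)
    ```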

  8. New Measures Assessing Predictors of Academic Persistence for Historically Underrepresented Racial/Ethnic Undergraduates in Science

    PubMed Central

    Byars-Winston, Angela; Rogers, Jenna; Branchaw, Janet; Pribbenow, Christine; Hanke, Ryan; Pfund, Christine

    2016-01-01

    An important step in broadening participation of historically underrepresented (HU) racial/ethnic groups in the sciences is the creation of measures validated with these groups that will allow for greater confidence in the results of investigations into factors that predict their persistence. This study introduces new measures of theoretically derived factors emanating from social cognitive and social identity theories associated with persistence for HU racial/ethnic groups in science disciplines. The purpose of this study was to investigate: 1) the internal reliability and factor analyses for measures of research-related self-efficacy beliefs, sources of self-efficacy, outcome expectations, and science identity; and 2) potential group differences in responses to the measures, examining the main and interaction effects of gender and race/ethnicity. Survey data came from a national sample of 688 undergraduate students in science majors who were primarily black/African American and Hispanic/Latino/a with a 2:1 ratio of females to males. Analyses yielded acceptable validity statistics and race × gender group differences were observed in mean responses to several measures. Implications for broadening participation of HU groups in the sciences are discussed regarding future tests of predictive models of student persistence and training programs to consider cultural diversity factors in their design. PMID:27521235

  9. The classification of secondary colorectal liver cancer in human biopsy samples using angular dispersive x-ray diffraction and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Theodorakou, Chrysoula; Farquharson, Michael J.

    2009-08-01

    The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data were analysed for 56 samples using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitudes, areas, and amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analysis. The statistical analysis showed that there are significant differences in all the peak-fitting parameters and ratios between the normal and the diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases, resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different, and that x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.

  10. diffHic: a Bioconductor package to detect differential genomic interactions in Hi-C data.

    PubMed

    Lun, Aaron T L; Smyth, Gordon K

    2015-08-19

    Chromatin conformation capture with high-throughput sequencing (Hi-C) is a technique that measures the in vivo intensity of interactions between all pairs of loci in the genome. Most conventional analyses of Hi-C data focus on the detection of statistically significant interactions. However, an alternative strategy involves identifying significant changes in the interaction intensity (i.e., differential interactions) between two or more biological conditions. This is more statistically rigorous and may provide more biologically relevant results. Here, we present the diffHic software package for the detection of differential interactions from Hi-C data. diffHic provides methods for read pair alignment and processing, counting into bin pairs, filtering out low-abundance events and normalization of trended or CNV-driven biases. It uses the statistical framework of the edgeR package to model biological variability and to test for significant differences between conditions. Several options for the visualization of results are also included. The use of diffHic is demonstrated with real Hi-C data sets. Performance against existing methods is also evaluated with simulated data. On real data, diffHic is able to successfully detect interactions with significant differences in intensity between biological conditions. It also compares favourably to existing software tools on simulated data sets. These results suggest that diffHic is a viable approach for differential analyses of Hi-C data.

  11. Influence of family environment on language outcomes in children with myelomeningocele.

    PubMed

    Vachha, B; Adams, R

    2005-09-01

    Previously, our studies demonstrated language differences impacting academic performance among children with myelomeningocele and shunted hydrocephalus (MMSH). This follow-up study considers the environmental facilitators within families (achievement orientation, intellectual-cultural orientation, active recreational orientation, independence) among a cohort of children with MMSH and their relationship to language performance. Fifty-eight monolingual, English-speaking children (36 females; mean age: 10.1 years; age range: 7-16 years) with MMSH were evaluated. Exclusionary criteria were prior shunt infection; seizure or shunt malfunction within the previous 3 months; uncorrected visual or auditory impairments; prior diagnoses of mental retardation or attention deficit disorder. The Comprehensive Assessment of Spoken Language (CASL) and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered individually to all participants. The CASL measures four subsystems: lexical, syntactic, supralinguistic, and pragmatic. Parents completed the Family Environment Scale (FES) questionnaire and provided background demographic information. Spearman correlation analyses and partial correlation analyses were performed. Mean full-scale IQ for the MMSH group was 92.2 (SD = 11.9). The CASL revealed statistically significant difficulty on supralinguistic and pragmatic (or social) language tasks. FES scores fell within the average range for the group. Spearman correlation and partial correlation analyses revealed statistically significant positive relationships between the FES 'intellectual-cultural orientation' variable and performance within the four language subsystems. Socio-economic status (SES) characteristics were analyzed and did not discriminate language performance when the intellectual-cultural orientation factor was taken into account. The role of family facilitators in language skills in children with MMSH has not previously been described. The relationship between language performance and the families' value on intellectual/cultural activities seems both statistically and intuitively sound. Focused interest in the integration of family values and practices should assist developmental specialists in supporting families and children within their most natural environment.

  12. Best practices for measuring students' attitudes toward learning science.

    PubMed

    Lovelace, Matthew; Brickman, Peggy

    2013-01-01

    Science educators often characterize the degree to which tests measure different facets of college students' learning, such as knowing, applying, and problem solving. A casual survey of scholarship of teaching and learning research studies reveals that many educators also measure how students' attitudes influence their learning. Students' science attitudes refer to their positive or negative feelings and predispositions to learn science. Science educators use attitude measures, in conjunction with learning measures, to inform the conclusions they draw about the efficacy of their instructional interventions. The measurement of students' attitudes poses similar but distinct challenges as compared with measurement of learning, such as determining validity and reliability of instruments and selecting appropriate methods for conducting statistical analyses. In this review, we will describe techniques commonly used to quantify students' attitudes toward science. We will also discuss best practices for the analysis and interpretation of attitude data.

  13. Best Practices for Measuring Students’ Attitudes toward Learning Science

    PubMed Central

    Lovelace, Matthew; Brickman, Peggy

    2013-01-01

    Science educators often characterize the degree to which tests measure different facets of college students’ learning, such as knowing, applying, and problem solving. A casual survey of scholarship of teaching and learning research studies reveals that many educators also measure how students’ attitudes influence their learning. Students’ science attitudes refer to their positive or negative feelings and predispositions to learn science. Science educators use attitude measures, in conjunction with learning measures, to inform the conclusions they draw about the efficacy of their instructional interventions. The measurement of students’ attitudes poses similar but distinct challenges as compared with measurement of learning, such as determining validity and reliability of instruments and selecting appropriate methods for conducting statistical analyses. In this review, we will describe techniques commonly used to quantify students’ attitudes toward science. We will also discuss best practices for the analysis and interpretation of attitude data. PMID:24297288

  14. Maximal use of kinematic information for the extraction of the mass of the top quark in single-lepton tt̄ events at DØ

    NASA Astrophysics Data System (ADS)

    Estrada Vigil, Juan Cruz

    The mass of the top (t) quark has been measured in the lepton+jets channel of tt̄ final states studied by the DØ and CDF experiments at Fermilab using data from Run I of the Tevatron pp̄ collider. The result published by DØ is 173.3 ± 5.6 (stat) ± 5.5 (syst) GeV. We present a different method to perform this measurement using the existing data. The new technique uses all available kinematic information in an event, and provides a significantly smaller statistical uncertainty than achieved in previous analyses. The preliminary results presented in this thesis indicate a statistical uncertainty for the extracted mass of the top quark of 3.5 GeV, which represents a significant improvement over the previous value of 5.6 GeV. The method of analysis is very general, and may be particularly useful in situations where there is a small signal and a large background.

  15. Effect of censoring trace-level water-quality data on trend-detection capability

    USGS Publications Warehouse

    Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.

    1984-01-01

    Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data, even when the censored data were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analysis.
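
    The simulation logic can be sketched as follows, assuming a simple linear-trend model, a fixed detection limit, and Kendall's tau as the nonparametric trend test; all constants and the substitution convention for censored values are illustrative, not the study's exact design:

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(1)
    n_years, n_sims, detection_limit = 20, 1000, 1.0
    detected_full, detected_censored = 0, 0
    for _ in range(n_sims):
        year = np.arange(n_years)
        conc = 0.8 + 0.03 * year + rng.normal(0, 0.3, n_years)  # trend + noise
        # Censoring: values below the limit replaced by the limit itself,
        # creating ties (one simple convention; others exist).
        censored = np.maximum(conc, detection_limit)
        detected_full += kendalltau(year, conc).pvalue < 0.05
        detected_censored += kendalltau(year, censored).pvalue < 0.05
    print(detected_full / n_sims, detected_censored / n_sims)   # power comparison
    ```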

  16. A Vignette (User's Guide) for “An R Package for Statistical ...

    EPA Pesticide Factsheets

    StatCharrms is a graphical user front-end designed for ease of use in analyzing data generated from OCSPP 890.2200, Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms is capable of performing are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), a mixed-effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated-measures ANOVA, and the Dunnett test. This document provides a User's Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now publicly available on CRAN.

  17. The validation of a swimming turn wall-contact-time measurement system: a touchpad application reliability study.

    PubMed

    Brackley, Victoria; Ball, Kevin; Tor, Elaine

    2018-05-12

    The effectiveness of the swimming turn strongly influences overall performance in competitive swimming. The push-off or wall contact, within the turn phase, directly determines the speed at which the swimmer leaves the wall. It is therefore important to develop reliable methods of measuring wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App for measuring wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis, and effect sizes, while the intraclass correlation coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R² value of 0.80, and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results of these concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.

  18. [Death certificate data in France: Production process and main types of analyses].

    PubMed

    Rey, G

    2016-10-01

    Mortality data, by the unambiguity of their definition, their understanding by all stakeholders, and the completeness of registration, are a cornerstone of public health statistics in France and in most industrialized countries. This article describes the data production process and the main types of possible analyses. Data production comprises several stages: death certification by a medical doctor in paper or electronic format (via a web application), transmission of the data to Inserm, and capture and coding of the information. The coding follows the WHO recommendations of the International Classification of Diseases (ICD, 10th revision, in use since 2000). It is carried out using automatic coding software, called Iris, developed by an international consortium. The coding aims, first, at assigning an ICD code to all nosologic entities encountered on the certificate, and then at selecting the underlying cause of death. The latter is the main information used for statistical analyses. Three main types of analysis emerge in the literature: the exploitation of data from the death certificate only, ecological analyses (studies of associations between variables measured across groups), and analyses of data individually linked to other databases. Many public health issues can be addressed with these various analyses. Several developments in the production process are being implemented: the deployment of electronic certification, increased automation of the processing of death certificate information, and durable and complete record linkage with health insurance and hospitalisation data. These could soon substantially expand the scope of possible uses of causes-of-death data. Copyright © 2016 Société Nationale Française de Médecine Interne (SNFMI). Published by Elsevier SAS. All rights reserved.

  19. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  20. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspect of the statistical analyses used in Dommenget and Latif, which raises concerns about some of the reported climate modes, is demonstrated. Using simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown, and the limitations of varimax and regression analyses in capturing this climate mode are then discussed.

  1. A quality control circle process to improve implementation effect of prevention measures for high-risk patients.

    PubMed

    Feng, Haixia; Li, Guohong; Xu, Cuirong; Ju, Changping; Suo, Peiheng

    2017-12-01

    The aim of the study was to analyse the influence of prevention measures on pressure injuries in high-risk patients and to establish the most appropriate methods of implementation. Nurses assessed patients using a checklist, and factors influencing the prevention of pressure injury were determined by brainstorming. A specific series of measures was drawn up; the risk of pressure injury (estimated with the Braden Scale), nursing documentation, implementation of prevention measures for pressure sores, and awareness of the system were evaluated both before and after carrying out a quality control circle (QCC) process. The overall scores for implementation of prevention measures rose from 74.86 ± 14.24 to 87.06 ± 17.04, a statistically significant change (P < 0.0025). Braden Scale scores rose from 8.53 ± 3.21 to 13.48 ± 3.57, nursing document scores from 7.67 ± 3.98 to 10.12 ± 1.63, and prevention measure scores from 11.48 ± 4.18 to 13.96 ± 3.92; all of these differences are statistically significant (P < 0.05). Implementation of a QCC can standardise and improve prevention measures for patients who are vulnerable to pressure sores and is of practical importance to their prevention and control. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  2. Engineering studies related to geodetic and oceanographic remote sensing using short pulse techniques

    NASA Technical Reports Server (NTRS)

    Miller, L. S.; Brown, G. S.; Hayne, G. S.

    1973-01-01

    For the Skylab S-193 radar altimeter, data processing flow charts and identification of calibration requirements and problem areas for defined S-193 altimeter experiments are presented. An analysis and simulation of the relationship between one particular S-193 measurement and the parameter of interest for determining the sea surface scattering cross-section are considered. For the GEOS-C radar altimeter, results are presented for system analyses pertaining to signal-to-noise ratio, pulse compression threshold behavior, altimeter measurement variance characteristics, desirability of onboard averaging, tracker bandwidth considerations, and statistical character of the altimeter data in relation to harmonic analysis properties of the geodetic signal.

  3. Human speckle perception threshold for still images from a laser projection system.

    PubMed

    Roelandt, Stijn; Meuret, Youri; Jacobs, An; Willaert, Koen; Janssens, Peter; Thienpont, Hugo; Verschaffelt, Guy

    2014-10-06

    We study the perception of speckle by human observers in a laser projector, based on a survey of 40 persons. The speckle contrast is first objectively measured using a well-defined speckle measurement method. We statistically analyse the user quality scores, revealing that speckle perception is influenced not only by the speckle contrast settings of the projector but also, strongly, by the type of image shown. Based on the survey, we derive a speckle contrast threshold at which speckle can be seen, and separately we investigate a speckle disturbance limit that is tolerated by the majority of test persons.

  4. The Marburg-Münster Affective Disorders Cohort Study (MACS): A quality assurance protocol for MR neuroimaging data.

    PubMed

    Vogelbacher, Christoph; Möbius, Thomas W D; Sommer, Jens; Schuster, Verena; Dannlowski, Udo; Kircher, Tilo; Dempfle, Astrid; Jansen, Andreas; Bopp, Miriam H A

    2018-05-15

    Large, longitudinal, multi-center MR neuroimaging studies require comprehensive quality assurance (QA) protocols for assessing the general quality of the compiled data, indicating potential malfunctions in the scanning equipment, and evaluating inter-site differences that need to be accounted for in subsequent analyses. We describe the implementation of a QA protocol for functional magnetic resonance imaging (fMRI) data based on the regular measurement of an MRI phantom and an extensive variety of currently published QA statistics. The protocol is implemented in the MACS (Marburg-Münster Affective Disorders Cohort Study, http://for2107.de/), a two-center research consortium studying the neurobiological foundations of affective disorders. Between February 2015 and October 2016, 1214 phantom measurements were acquired using a standard fMRI protocol. Using 444 healthy control subjects who were measured between 2014 and 2016 in the cohort, we investigate the extent of between-site differences in contrast to the dependence on subject-specific covariates (age and sex) for structural MRI, fMRI, and diffusion tensor imaging (DTI) data. We show that most of the presented QA statistics differ markedly not only between the two scanners used for the cohort but also between experimental settings (e.g. hardware and software changes), demonstrate that some of these statistics depend on external variables (e.g. time of day, temperature), highlight their strong dependence on proper handling of the MRI phantom, and show how the use of a phantom holder may balance this dependence. Site effects, however, do not only exist for the phantom data, but also for human MRI data. Using T1-weighted structural images, we show that total intracranial (TIV), grey matter (GMV), and white matter (WMV) volumes differ significantly between the MR scanners, with large effect sizes. Voxel-based morphometry (VBM) analyses show that these structural differences observed between scanners are most pronounced in the bilateral basal ganglia, thalamus, and posterior regions. Using DTI data, we also show that fractional anisotropy (FA) differs between sites in almost all regions assessed. When pooling data from multiple centers, our data show that it is necessary to account not only for inter-site differences but also for hardware and software changes of the scanning equipment. Also, the strong dependence of the QA statistics on reliable placement of the MRI phantom shows that the use of a phantom holder is recommended to reduce the variance of the QA statistics and thus to increase the probability of detecting potential scanner malfunctions. Copyright © 2018 Elsevier Inc. All rights reserved.
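
    As an example of the kind of QA statistic referred to (not necessarily those used in the MACS protocol), a sketch computing temporal SNR and linear signal drift from a phantom ROI time series; the function and array names are hypothetical:

    ```python
    import numpy as np

    def qa_stats(roi_timeseries):
        """roi_timeseries: 1-D array, mean ROI signal per volume."""
        s = np.asarray(roi_timeseries, dtype=float)
        tsnr = s.mean() / s.std(ddof=1)                 # temporal SNR
        t = np.arange(s.size)
        slope, intercept = np.polyfit(t, s, 1)          # linear drift fit
        drift_pct = 100.0 * slope * s.size / intercept  # drift over the run (%)
        return tsnr, drift_pct
    ```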

  5. Consensus building for interlaboratory studies, key comparisons, and meta-analysis

    NASA Astrophysics Data System (ADS)

    Koepke, Amanda; Lafarge, Thomas; Possolo, Antonio; Toman, Blaza

    2017-06-01

    Interlaboratory studies in measurement science, including key comparisons, and meta-analyses in several fields, including medicine, serve to intercompare measurement results obtained independently, and typically produce a consensus value for the common measurand that blends the values measured by the participants. Since interlaboratory studies and meta-analyses reveal and quantify differences between measured values, regardless of the underlying causes for such differences, they also provide so-called ‘top-down’ evaluations of measurement uncertainty. Measured values are often substantially over-dispersed by comparison with their individual, stated uncertainties, thus suggesting the existence of yet unrecognized sources of uncertainty (dark uncertainty). We contrast two different approaches to take dark uncertainty into account both in the computation of consensus values and in the evaluation of the associated uncertainty, which have traditionally been preferred by different scientific communities. One inflates the stated uncertainties by a multiplicative factor. The other adds laboratory-specific ‘effects’ to the value of the measurand. After distinguishing what we call recipe-based and model-based approaches to data reductions in interlaboratory studies, we state six guiding principles that should inform such reductions. These principles favor model-based approaches that expose and facilitate the critical assessment of validating assumptions, and give preeminence to substantive criteria to determine which measurement results to include, and which to exclude, as opposed to purely statistical considerations, and also how to weigh them. Following an overview of maximum likelihood methods, three general purpose procedures for data reduction are described in detail, including explanations of how the consensus value and degrees of equivalence are computed, and the associated uncertainty evaluated: the DerSimonian-Laird procedure; a hierarchical Bayesian procedure; and the Linear Pool. These three procedures have been implemented and made widely accessible in a Web-based application (NIST Consensus Builder). We illustrate principles, statistical models, and data reduction procedures in four examples: (i) the measurement of the Newtonian constant of gravitation; (ii) the measurement of the half-lives of radioactive isotopes of caesium and strontium; (iii) the comparison of two alternative treatments for carotid artery stenosis; and (iv) a key comparison where the measurand was the calibration factor of a radio-frequency power sensor.
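
    Of the three procedures, the DerSimonian-Laird estimator is the easiest to sketch. Assuming measured values x_i with stated standard uncertainties u_i, a minimal implementation of this additive approach, which estimates a dark-uncertainty variance tau² and inflates the weights before forming the consensus value:

    ```python
    import numpy as np

    def dersimonian_laird(x, u):
        x, u = np.asarray(x, float), np.asarray(u, float)
        w = 1.0 / u**2
        xbar = np.sum(w * x) / np.sum(w)                # fixed-effects mean
        Q = np.sum(w * (x - xbar)**2)                   # Cochran's Q
        k = len(x)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)              # dark uncertainty
        w_star = 1.0 / (u**2 + tau2)                    # inflated weights
        consensus = np.sum(w_star * x) / np.sum(w_star)
        u_consensus = np.sqrt(1.0 / np.sum(w_star))
        return consensus, u_consensus, tau2
    ```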

  6. Trends in selected streamflow statistics at 19 long-term streamflow-gaging stations indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico, 1922-2009

    USGS Publications Warehouse

    Barbie, Dana L.; Wehmeyer, Loren L.

    2012-01-01

    Trends in selected streamflow statistics during 1922-2009 were evaluated at 19 long-term streamflow-gaging stations considered indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico. The U.S. Geological Survey, in cooperation with the Texas Water Development Board, evaluated streamflow data from streamflow-gaging stations with more than 50 years of record that were active as of 2009. The outflows into Arkansas and Louisiana were represented by 3 streamflow-gaging stations, and outflows into the Gulf of Mexico, including Galveston Bay, were represented by 16 streamflow-gaging stations. Monotonic trend analyses were done using the following three streamflow statistics generated from daily mean values of streamflow: (1) annual mean daily discharge, (2) annual maximum daily discharge, and (3) annual minimum daily discharge. The trend analyses were based on the nonparametric Kendall's Tau test, which is useful for the detection of monotonic upward or downward trends with time. A total of 69 trend analyses by Kendall's Tau were computed - 19 periods of streamflow multiplied by the 3 streamflow statistics plus 12 additional trend analyses because the periods of record for 2 streamflow-gaging stations were divided into periods representing pre- and post-reservoir impoundment. Unless otherwise described, each trend analysis used the entire period of record for each streamflow-gaging station. The monotonic trend analysis detected 11 statistically significant downward trends, 37 instances of no trend, and 21 statistically significant upward trends. One general region studied, which seemingly has relatively more upward trends for many of the streamflow statistics analyzed, includes the rivers and associated creeks and bayous to Galveston Bay in the Houston metropolitan area. Lastly, the most western river basins considered (the Nueces and Rio Grande) had statistically significant downward trends for many of the streamflow statistics analyzed.
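
    The core computation is compact; a sketch for a single station and statistic, with synthetic data standing in for the gage record:

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(42)
    years = np.arange(1960, 2010)                     # period of record
    annual_mean = 100 + 0.4 * (years - 1960) + rng.normal(0, 10, years.size)

    tau, p = kendalltau(years, annual_mean)           # monotonic trend test
    if p < 0.05:
        print(f"{'upward' if tau > 0 else 'downward'} trend: tau={tau:.2f}, p={p:.3f}")
    else:
        print(f"no significant trend (p={p:.3f})")
    ```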

  7. Assessment of sampling stability in ecological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.

    1988-01-01

    A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.
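
    A stripped-down version of such a simulation, using Fisher's two-group discriminant direction w = S⁻¹(m̄₁ − m̄₂) and identity-covariance populations (far simpler than the paper's factorial design), illustrates how loading stability improves with the sample-size-to-variables ratio:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    p = 4                                          # variables measured
    mu1, mu2 = np.zeros(p), np.full(p, 1.0)
    for n in (3 * p, 10 * p):                      # group sizes: 3x and 10x p
        dirs = []
        for _ in range(500):
            x1 = rng.multivariate_normal(mu1, np.eye(p), n)
            x2 = rng.multivariate_normal(mu2, np.eye(p), n)
            S = (np.cov(x1.T) * (n - 1) + np.cov(x2.T) * (n - 1)) / (2 * n - 2)
            w = np.linalg.solve(S, x1.mean(0) - x2.mean(0))
            dirs.append(w / np.linalg.norm(w))     # normalized loadings
        spread = np.std(dirs, axis=0).mean()       # loading instability
        print(f"n = {n:3d} per group: mean loading SD = {spread:.3f}")
    ```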

  8. Vertical force and torque analysis during mechanical preparation of extracted teeth using hand ProTaper instruments.

    PubMed

    Glavičić, Snježana; Anić, Ivica; Braut, Alen; Miletić, Ivana; Borčić, Josipa

    2011-08-01

    The purpose was to measure and analyse the vertical force and torque developed in wider and narrower root canals during hand ProTaper instrumentation. Twenty human incisors were divided into two groups: upper incisors served as the experimental model for wide root canals, lower incisors for narrow root canals. Force and torque were measured with a device constructed for this purpose. Differences between the groups were statistically analysed by the Mann-Whitney U-test with the significance level set at P < 0.05. Vertical force in the upper incisors ranged 0.25-2.58 N, while in the lower incisors it ranged 0.38-6.94 N. Measured torque in the upper incisors ranged 0.53-12.03 Nmm, while in the lower incisors it ranged 0.94-10.0 Nmm. Vertical force and torque were higher in root canals of smaller diameter. An increase in the contact surface results in an increase of the vertical force and torque in both narrower and wider root canals. © 2010 The Authors. Australian Endodontic Journal © 2010 Australian Society of Endodontology.

  9. Meta-epidemiologic study showed frequent time trends in summary estimates from meta-analyses of diagnostic accuracy studies.

    PubMed

    Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M

    2016-09-01

    To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
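
    A sketch of the two steps described, cumulative meta-analysis followed by a weighted regression of the summary estimate on publication rank, using a fixed-effect cumulative summary and cumulative precision as regression weights for brevity (the review's exact weighting may differ):

    ```python
    import numpy as np

    def cumulative_trend(estimates, variances):
        """estimates: per-study logit sensitivities, ordered by publication date."""
        est, var = np.asarray(estimates, float), np.asarray(variances, float)
        w = 1.0 / var
        cum = np.cumsum(w * est) / np.cumsum(w)      # cumulative summary at each rank
        rank = np.arange(1, len(est) + 1, dtype=float)
        W = np.cumsum(w)                             # precision of each summary
        X = np.column_stack([np.ones_like(rank), rank])
        # Weighted least squares: (X'WX) beta = X'W y
        beta = np.linalg.solve(X.T @ (X * W[:, None]), (X * W[:, None]).T @ cum)
        return cum, beta[1]                          # summaries and time-trend slope
    ```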

  10. Violent video game effects on aggression, empathy, and prosocial behavior in eastern and western countries: a meta-analytic review.

    PubMed

    Anderson, Craig A; Shibuya, Akiko; Ihori, Nobuko; Swing, Edward L; Bushman, Brad J; Sakamoto, Akira; Rothstein, Hannah R; Saleem, Muniba

    2010-03-01

    Meta-analytic procedures were used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior. Unique features of this meta-analytic review include (a) more restrictive methodological quality inclusion criteria than in past meta-analyses; (b) cross-cultural comparisons; (c) longitudinal studies for all outcomes except physiological arousal; (d) conservative statistical controls; (e) multiple moderator analyses; and (f) sensitivity analyses. Social-cognitive models and cultural differences between Japan and Western countries were used to generate theory-based predictions. Meta-analyses yielded significant effects for all 6 outcome variables. The pattern of results for different outcomes and research designs (experimental, cross-sectional, longitudinal) fit theoretical predictions well. The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior. Moderator analyses revealed significant research design effects, weak evidence of cultural differences in susceptibility and type of measurement effects, and no evidence of sex differences in susceptibility. Results of various sensitivity analyses revealed these effects to be robust, with little evidence of selection (publication) bias.

  11. Electronic trigger for capacitive touchscreen and extension of ISO 15781 standard time lag measurements to smartphones

    NASA Astrophysics Data System (ADS)

    Bucher, François-Xavier; Cao, Frédéric; Viard, Clément; Guichard, Frédéric

    2014-03-01

    We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag measurement beyond the standard by including negative shutter lag, a phenomenon that is more and more commonly found in smartphones. The device is computer-controlled, and this feature, combined with measurement algorithms, makes it possible to automate a large series of captures so as to provide more refined statistical analyses when, for example, the shutter lag of "zero shutter lag" devices is limited by the frame time, as our measurements confirm.

  12. Health research needs more comprehensive accessibility measures: integrating time and transport modes from open data.

    PubMed

    Tenkanen, Henrikki; Saarsalmi, Perttu; Järv, Olle; Salonen, Maria; Toivonen, Tuuli

    2016-07-28

    In this paper, we demonstrate why and how both temporality and multimodality should be integrated in health-related studies that include an accessibility perspective, in this case healthy food accessibility. We provide evidence regarding the importance of using multimodal spatio-temporal accessibility measures when conducting research in urban contexts and propose a methodological approach for integrating different travel modes and temporality into spatial accessibility analyses. We use the Helsinki metropolitan area (Finland) as our case study region to demonstrate the effects of temporality and modality on the results. Spatial analyses were carried out on 250 m statistical grid squares. We measured travel times between the home location of inhabitants and open grocery stores providing healthy food at 5 p.m., 10 p.m., and 1 a.m. using public transportation and private cars. We applied the so-called door-to-door approach for the travel time measurements to obtain more realistic and comparable results between travel modes. The analyses are based on open access data and publicly available open-source tools, thus similar analyses can be conducted in urban regions worldwide. Our results show that both time and mode of transport have a prominent impact on the outcome of the analyses; thus, the understanding of the realities of accessibility in a city may be very different according to the setting of the analysis used. In terms of travel time, there is clear variation in the results at different times of the day. In terms of travel mode, our results show that when analyzed in a comparable manner, public transport can be an even faster mode than a private car to access healthy food, especially in central areas of the city where the service network is dense and the public transportation system is effective. This study demonstrates that time and transport modes are essential components when modeling health-related accessibility in urban environments. Neglecting them in spatial analyses may lead to overly simplified or even erroneous images of the realities of accessibility. Hence, there is a risk that health-related planning and decisions based on simplistic accessibility measures might cause unwanted outcomes in terms of inequality among different groups of people.

  13. Decomposition Analyses Applied to a Complex Ultradian Biorhythm: The Oscillating NADH Oxidase Activity of Plasma Membranes Having a Potential Time-Keeping (Clock) Function

    PubMed Central

    Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James

    2003-01-01

    Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three fit-accuracy measures, the mean absolute percentage error (MAPE), the mean absolute deviation (MAD) of the data from the fitted values, and the mean squared deviation (MSD) from the fitted values, plus R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data and then, if necessary, detrending the data by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
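
    A sketch of the workflow on a synthetic 24-min oscillation sampled once per minute, with per-phase averages standing in for the paper's moving-average smoothing, and MAPE, MAD, and MSD computed from the residuals; all constants are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(240.0)                                  # minutes
    y = 10 + 0.01 * t + np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.2, t.size)

    trend = np.polyval(np.polyfit(t, y, 1), t)            # fitted trend line
    detrended = y - trend                                 # detrend

    period = 24                                           # from Fourier analysis
    phase = t.astype(int) % period
    cyclic = np.array([detrended[phase == k].mean() for k in range(period)])
    fitted = trend + cyclic[phase]                        # trend + cyclic component

    err = y - fitted
    mape = 100 * np.mean(np.abs(err / y))                 # mean abs. percentage error
    mad = np.mean(np.abs(err))                            # mean absolute deviation
    msd = np.mean(err**2)                                 # mean squared deviation
    ```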

  14. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the proportion of significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P > 0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Arthroscopy for treating temporomandibular joint disorders.

    PubMed

    Currie, Roger

    2011-01-01

    The Cochrane Oral Health Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, LILACS, the Allied and Complementary Medicine Database (AMED) and CINAHL databases were searched. In addition, the reference lists of the included articles were checked and 14 journals were hand-searched. Randomised controlled clinical trials (RCTs) of arthroscopy for treating TMDs were included. There were no restrictions regarding the language or date of publication. Two review authors independently extracted data, and three review authors independently assessed the risk of bias of included trials. The authors of the selected articles were contacted for additional information. Pooling of trials was only attempted if at least two trials with comparable protocols, the same conditions and similar outcome measurements were available. Statistical analysis was performed in accordance with the Cochrane Collaboration guidelines. Seven RCTs (n = 349) met the inclusion criteria. All the studies were either at high or unclear risk of bias. Pain was evaluated after six months in two studies. No statistically significant differences were found between the arthroscopy and nonsurgical groups (standardised mean difference (SMD) = 0.004; 95% confidence interval (CI) -0.46 to 0.55, P = 0.81). Two studies analysed pain 12 months after surgery (arthroscopy and arthrocentesis) in 81 patients. No statistically significant differences were found (mean difference (MD) = 0.10; 95% CI -1.46 to 1.66, P = 0.90). Three studies analysed the same outcome in patients who had undergone arthroscopic surgery or open surgery, and a statistically significant difference was found after 12 months (SMD = 0.45; 95% CI 0.01 to 0.89, P = 0.05) in favour of open surgery. Two studies compared six clinical outcomes (interincisal opening over 35 mm; maximum protrusion over 5 mm; click; crepitation; tenderness on palpation in the TMJ and the jaw muscles) 12 months after arthroscopy and open surgery. These outcome measures did not present statistically significant differences (odds ratio (OR) = 1.00; 95% CI 0.45 to 2.21, P = 1.00). Two studies compared the maximum interincisal opening after 12 months of postsurgical follow-up. A statistically significant difference in favour of the arthroscopy group was observed (MD = 5.28; 95% CI 3.46 to 7.10, P < 0.0001). Two studies compared mandibular function after 12 months of follow-up, with 40 patients evaluated; the outcome measure was mandibular functionality (MFIQ). The difference was not statistically significant (MD = 1.58; 95% CI -0.78 to 3.94, P = 0.19). Both arthroscopy and nonsurgical treatments reduced pain after six months. When compared with arthroscopy, open surgery was more effective at reducing pain after 12 months. Nevertheless, there were no differences in mandibular functionality or in other outcomes in clinical evaluations. Arthroscopy led to greater improvement in maximum interincisal opening after 12 months than arthrocentesis; however, there was no difference in pain.

  16. Effects of Whole and Partial Body Exposure to Dry Heat on Certain Performance Measures.

    DTIC Science & Technology

    1981-05-01

    Robert Bachert who assisted with the statistical analyses; and he acknowledges the support of the Lite Mr. George C. Frost for his advice on the...of the study and interpretation of data: Dr. Arthur L Dudycha, Dr. Barry H. Kantowitz, Dr. N. M. Downie, Dr. Ernest J. McCormick, and Dr. Robert D...D., Summers, W. C., & Smedley , D. C., November 1974, Evaluation of a water- cooled helmet liner (AMRL-TR-74-135). Aerospace Medical Research

  17. Emotion Regulation Training for Treating Warfighters with Combat-Related PTSD Using Real-Time fMRI and EEG-Assisted Neurofeedback

    DTIC Science & Technology

    2017-12-01

    response integration . J Abnorm Psychol 92, 276-306. Misaki, M., Phillips, R., Zotev, V., Wong, C.K., Wurfel, B.E., Krueger, F., Feldner, M., Bodurka, J...illustrated schematically in Fig. A1A. The visits were typically scheduled one week apart. Each visit involved a psychological evaluation by a...from multiple tests. Partial correlation analyses were conducted using MATLAB Statistics toolbox. A3. Results A3.1 Psychological measures 11

  18. Biochemical and Genetic Markers in Aggressiveness and Recurrence of Prostate Cancer: Race-Specific Links to Inflammation and Insulin Resistance

    DTIC Science & Technology

    2013-07-01

    as a statistical graphic, and Pearson product moment correlation coefficients as measures of the strength of linear association; 4) performing SNP ...determine if there are differences in single nucleotide polymorphisms ( SNPs ) in selected candidate genes implicated in metabolic syndrome, obesity, chronic...samples for the serum and SNP analyses. We have reached a target of 500 patients at the end of year 2; however, some of the patients turned out to be

  19. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar

    2010-10-26

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly because canonical correlation is a special case of canonical correspondence analysis in which a linear relationship between the two groups of variables is assumed instead of a Gaussian relationship.

  20. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    NASA Astrophysics Data System (ADS)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis

    2010-10-01

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly because canonical correlation is a special case of canonical correspondence analysis in which a linear relationship between the two groups of variables is assumed instead of a Gaussian relationship.

  1. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose: To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods: The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results: The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion: The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
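
    A minimal sketch of such a permutation test for the difference in mean component areas between two groups, resampling labels without replacement as described; the function and variable names are hypothetical:

    ```python
    import numpy as np

    def permutation_test(a, b, n_perm=10000, rng=None):
        """Two-sided permutation test for a difference in means."""
        if rng is None:
            rng = np.random.default_rng(0)
        a, b = np.asarray(a, float), np.asarray(b, float)
        observed = a.mean() - b.mean()
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)          # relabel without replacement
            diff = perm[:a.size].mean() - perm[a.size:].mean()
            count += abs(diff) >= abs(observed)
        return observed, (count + 1) / (n_perm + 1) # p value with small-sample correction
    ```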

  2. Statistical analysis of nonmonotonic dose-response relationships: research design and analysis of nasal cell proliferation in rats exposed to formaldehyde.

    PubMed

    Gaylor, David W; Lutz, Werner K; Conolly, Rory B

    2004-01-01

    Statistical analyses of nonmonotonic dose-response curves are proposed, experimental designs to detect low-dose effects of J-shaped curves are suggested, and sample sizes are provided. For quantal data such as cancer incidence rates, much larger numbers of animals are required than for continuous data such as biomarker measurements. For example, 155 animals per dose group are required to have at least an 80% chance of detecting a decrease from a 20% incidence in controls to an incidence of 10% at a low dose. For a continuous measurement, only 14 animals per group are required to have at least an 80% chance of detecting a change of the mean by one standard deviation of the control group. Experimental designs based on three dose groups plus controls are discussed to detect nonmonotonicity or to estimate the zero equivalent dose (ZED), i.e., the dose that produces a response equal to the average response in the controls. Cell proliferation data in the nasal respiratory epithelium of rats exposed to formaldehyde by inhalation are used to illustrate the statistical procedures. Statistically significant departures from a monotonic dose response were obtained for time-weighted average labeling indices, with an estimated ZED at a formaldehyde dose of 5.4 ppm and a lower 95% confidence limit of 2.7 ppm. It is concluded that demonstration of a statistically significant biphasic dose-response curve, together with estimation of the resulting ZED, could serve as a point of departure in establishing a reference dose for low-dose risk assessment.
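
    The quoted sample sizes are consistent with the standard normal-approximation formulas below, assuming one-sided testing at α = 0.05 and power 1 − β = 0.8 (an assumption on our part; the paper's exact conventions may differ):

    ```latex
    % Continuous endpoint: detecting a mean shift of \Delta in units of \sigma
    n_{\mathrm{cont}} = \frac{2\,(z_{1-\alpha} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}}
                      = 2\,(1.645 + 0.842)^{2} \approx 13 \quad \text{per group for } \Delta = \sigma.

    % Quantal endpoint: detecting p_{1} = 0.20 vs p_{2} = 0.10
    n_{\mathrm{quant}} = \frac{\bigl(z_{1-\alpha}\sqrt{2\bar{p}\bar{q}}
                         + z_{1-\beta}\sqrt{p_{1}q_{1} + p_{2}q_{2}}\bigr)^{2}}{(p_{1} - p_{2})^{2}}
                       \approx 156.
    ```

    These round to roughly the 14 and 155 animals per group quoted above; the small discrepancies plausibly reflect continuity or t-distribution corrections.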

  3. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  4. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.

  5. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  6. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on how the integration time is distributed between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain an approximate closed-form solution for the optimal distribution of integration time by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
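
    A toy numerical version of this optimization is easy to set up. Assuming shot-noise-limited (Poisson) intensity measurements, an assumption made here for illustration rather than taken from the paper, the Delta-method variance of the DOLP estimator can be minimized over the time split and checked against the Lagrange-multiplier closed form for the same model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def dolp_variance(t1, T, I1, I2):
            """Delta-method variance of (I1-I2)/(I1+I2), with Var(I_i hat) = I_i/t_i."""
            t2 = T - t1
            return 4.0 * (I2**2 * I1 / t1 + I1**2 * I2 / t2) / (I1 + I2)**4

        T, I1, I2 = 1.0, 1000.0, 600.0        # total time and mean intensities
        res = minimize_scalar(dolp_variance, bounds=(1e-6, T - 1e-6),
                              args=(T, I1, I2), method="bounded")
        print(f"numerical optimum t1/T = {res.x / T:.3f}")

        # Lagrange-multiplier closed form for the same model: t1/t2 = sqrt(I2/I1)
        r = np.sqrt(I2 / I1)
        print(f"closed-form t1/T = {r / (1 + r):.3f}")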

  7. Introduction to the Special Series: Current Directions for Measuring Parenting Constructs to Inform Prevention Science.

    PubMed

    Lindhiem, Oliver; Shaffer, Anne

    2017-04-01

    Parenting behaviors are multifaceted and dynamic and therefore challenging to quantify. Measurement methods have critical implications for study results, particularly for prevention trials designed to modify parenting behaviors. Although multiple approaches can complement one another and contribute to a more complete understanding of prevention trials, the assumptions and implications of each approach are not always clearly addressed. Greater attention to the measurement of complex constructs such as parenting is needed to advance the field of prevention science. This series examines the challenges of measuring changes in parenting behaviors in the context of prevention trials. All manuscripts in the special series address measurement issues and make practical recommendations for prevention researchers. Manuscripts in this special series include (1) empirical studies that demonstrate novel measurement approaches, (2) re-analyses of prevention trial outcome data directly comparing and contrasting two or more methods, and (3) a statistical primer and practical guide to analyzing proportion data.

  8. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    PubMed

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.

  9. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  10. Assessing signal-to-noise in quantitative proteomics: multivariate statistical analysis in DIGE experiments.

    PubMed

    Friedman, David B

    2012-01-01

    All quantitative proteomics experiments measure variation between samples. When performing large-scale experiments that involve multiple conditions or treatments, the experimental design should include the appropriate number of individual biological replicates from each condition to enable the distinction between a relevant biological signal from technical noise. Multivariate statistical analyses, such as principal component analysis (PCA), provide a global perspective on experimental variation, thereby enabling the assessment of whether the variation describes the expected biological signal or the unanticipated technical/biological noise inherent in the system. Examples will be shown from high-resolution multivariable DIGE experiments where PCA was instrumental in demonstrating biologically significant variation as well as sample outliers, fouled samples, and overriding technical variation that would not be readily observed using standard univariate tests.
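
    A minimal PCA check of this kind is shown below using scikit-learn on a mock spot-intensity matrix (samples by features). It is purely illustrative; real DIGE analyses typically operate on standardized log abundance ratios.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.normal(size=(12, 500))        # 12 biological replicates, 500 spots
        X[:6, :50] += 1.0                     # condition A shifted on 50 features

        pca = PCA(n_components=2)
        scores = pca.fit_transform(StandardScaler().fit_transform(X))
        print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
        # Plotting scores[:, 0] against scores[:, 1] should separate the two
        # conditions along PC1; outliers, fouled samples, or technical batches
        # show up as off-axis points or unexpected clusters.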

  11. Evaluation of neutron total and capture cross sections on 99Tc in the unresolved resonance region

    NASA Astrophysics Data System (ADS)

    Iwamoto, Nobuyuki; Katabuchi, Tatsuya

    2017-09-01

    The long-lived fission product technetium-99 is one of the most important radioisotopes for nuclear transmutation. Reliable nuclear data over a wide energy range, up to a few MeV, are indispensable for developing environmental-load-reducing technology. Statistical analyses of resolved resonances were performed using the truncated Porter-Thomas distribution, a coupled-channels optical model, a nuclear level density model, and Bayes' theorem on conditional probability. The total and capture cross sections were calculated with the nuclear reaction model code CCONE. The resulting cross sections are statistically consistent between the resolved and unresolved resonance regions. The evaluated capture data reproduce those recently measured at ANNRI of J-PARC/MLF above the resolved resonance region up to 800 keV.

  12. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  13. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.
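
    The general recipe behind such model-adequacy test statistics can be sketched generically: simulate replicate data under the fitted model, recompute the test statistic on each replicate, and reject the model when the observed value is extreme relative to that null distribution. The example below applies the recipe to a deliberately simple Poisson model with a dispersion statistic; it is an analogy for the logic, not the paper's phylogenomic statistics.

        import numpy as np

        rng = np.random.default_rng(2)
        observed = rng.negative_binomial(5, 0.3, size=200)  # overdispersed "data"
        lam = observed.mean()                               # fitted Poisson rate

        def dispersion(x):
            return x.var() / x.mean()       # close to 1 under a Poisson model

        obs_stat = dispersion(observed)
        null = np.array([dispersion(rng.poisson(lam, observed.size))
                         for _ in range(2000)])
        p = (np.sum(null >= obs_stat) + 1) / (null.size + 1)
        print(f"observed dispersion = {obs_stat:.2f}, tail probability = {p:.4f}")
        # A tiny tail probability rejects the model, the same logic the paper
        # applies, with different statistics and thresholds, to substitution models.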

  14. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

    PubMed

    Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

    2013-04-01

    Economic evaluation analyses can be enhanced by employing regression methods, allowing for the identification of important sub-groups, adjusting for imperfect randomisation in clinical trials, and analysing non-randomised data. The aim was to explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed, and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with a previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the threshold used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need for incorporating econometric methods into cost-effectiveness analyses using the NB approach.

  15. Physiotherapy triage assessment of patients referred for orthopaedic consultation - Long-term follow-up of health-related quality of life, pain-related disability and sick leave.

    PubMed

    Samsson, Karin S; Larsson, Maria E H

    2015-02-01

    The literature indicates that physiotherapy triage assessment can be efficient for patients referred for orthopaedic consultation; however, long-term follow-up of patient-reported outcome measures has not been available. The aim was to report a long-term evaluation of patient-reported health-related quality of life, pain-related disability, and sick leave after physiotherapy triage assessment of patients referred for orthopaedic consultation, compared with standard practice. Patients referred for orthopaedic consultation (n = 208) were randomised to physiotherapy triage assessment or standard practice. The randomised cohort was analysed on an intention-to-treat (ITT) basis. The patient-reported outcome measures EuroQol VAS (self-reported health-state), EuroQol 5D-3L (EQ-5D), and Pain Disability Index (PDI) were assessed at baseline and after 3, 6, and 12 months. EQ VAS was analysed using a repeated-measures ANOVA. PDI and EQ-5D were analysed using a marginal logistic regression model. Sick leave was analysed for the 12 months following consultation using a Mann-Whitney U-test. The patients rated a significantly better health-state at 3 months after physiotherapy triage assessment [mean difference -5.7 (95% CI -11.1; -0.2); p = 0.04]. There were no other statistically significant differences between the groups in perceived health-related quality of life or pain-related disability at any of the follow-ups, nor in sick leave. This study reports that the long-term follow-up of the patient-reported outcome measures health-related quality of life, pain-related disability, and sick leave after physiotherapy triage assessment did not differ from standard practice, supporting the possible implementation of this model of care.

  16. Estimation versus falsification approaches in sport and exercise science.

    PubMed

    Wilkinson, Michael; Winter, Edward M

    2018-05-22

    There has been a recent resurgence in debate about methods for statistical inference in science. The debate addresses statistical concepts and their impact on the value and meaning of analytical outcomes. In contrast, the philosophical underpinnings of these approaches, and the extent to which analytical tools match the philosophical goals of the scientific method, have received less attention. This short piece considers application of the scientific method to "what is the influence of x on y?" type questions characteristic of sport and exercise science. We consider applications and interpretations of estimation-based versus falsification-based statistical approaches and their value in addressing how much x influences y, and in measurement error and method agreement settings. We compare estimation using magnitude-based inference (MBI) with falsification using null hypothesis significance testing (NHST), and highlight the limited value of both falsification and NHST for addressing problems in sport and exercise science. We recommend adopting an estimation approach, expressing the uncertainty of effects of x on y and their practical/clinical value against pre-determined effect magnitudes using MBI.

  17. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low-probability, high-consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
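
    The tail-extrapolation risk can be demonstrated in a few lines. In the synthetic sketch below, which is not the report's methodology, a normal distribution is fit to a sparse sample drawn from a heavier-tailed truth, and the extrapolated exceedance probability is compared with the true one.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        truth = stats.t(df=3)                          # heavier-tailed reality
        sample = truth.rvs(size=30, random_state=rng)  # sparse experimental data

        mu, sigma = sample.mean(), sample.std(ddof=1)
        threshold = 8.0                                # high-consequence level
        print(f"fitted normal tail: {stats.norm(mu, sigma).sf(threshold):.2e}")
        print(f"true tail:          {truth.sf(threshold):.2e}")
        # The normal fit can understate the true exceedance probability by
        # orders of magnitude, which is the extrapolation risk quantified above.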

  18. A phylogenetic transform enhances analysis of compositional microbiota data

    PubMed Central

    Silverman, Justin D; Washburne, Alex D; Mukherjee, Sayan; David, Lawrence A

    2017-01-01

    Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities. DOI: http://dx.doi.org/10.7554/eLife.21887.001 PMID:28198697
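
    The building block that PhILR couples with a phylogeny is the isometric log-ratio (ilr) transform. The sketch below implements a generic ilr with a standard orthonormal (Helmert-type) basis rather than a tree-derived one, so it illustrates the transform itself, not PhILR.

        import numpy as np

        def ilr(x):
            """ilr coordinates of a composition x (positive parts, any total)."""
            logx = np.log(np.asarray(x, dtype=float))
            clr = logx - logx.mean()              # centred log-ratio
            d = clr.size
            V = np.zeros((d - 1, d))              # orthonormal basis rows
            for i in range(d - 1):
                V[i, :i + 1] = 1.0 / (i + 1)
                V[i, i + 1] = -1.0
                V[i] /= np.linalg.norm(V[i])
            return V @ clr

        relative_abundances = np.array([0.60, 0.25, 0.10, 0.05])
        print(ilr(relative_abundances))
        # The ilr coordinates live in unconstrained Euclidean space, so standard
        # statistical tools can be applied without compositional artifacts.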

  19. The mediating effect of calling on the relationship between medical school students' academic burnout and empathy.

    PubMed

    Chae, Su Jin; Jeong, So Mi; Chung, Yoon-Sok

    2017-09-01

    This study aimed to identify the relationships between medical school students' academic burnout, empathy, and calling, and to determine whether calling has a mediating effect on the relationship between academic burnout and empathy. A mixed-method study was conducted. One hundred twenty-seven medical students completed a survey using scales measuring academic burnout, empathy, and calling. For statistical analysis, descriptive statistics, correlation analyses, and hierarchical multiple regression analyses were conducted. For the qualitative approach, eight medical students participated in a focus group interview. The study found that empathy has a statistically significant negative correlation with academic burnout and a significant positive correlation with calling. Sense of calling proved to be an effective mediator of the relationship between academic burnout and empathy. This result demonstrates that calling is a key variable mediating the relationship between medical students' academic burnout and empathy. As such, this study provides baseline data for education that could improve medical students' empathy skills.

  20. Wildfire cluster detection using space-time scan statistics

    NASA Astrophysics Data System (ADS)

    Tonini, M.; Tuia, D.; Ratle, F.; Kanevski, M.

    2009-04-01

    The aim of the present study is to identify spatio-temporal clusters in fire sequences using space-time scan statistics, statistical methods specifically designed to detect clusters and assess their significance. Basically, scan statistics work by comparing the set of events occurring inside a scanning window (a space-time cylinder for spatio-temporal data) with those that lie outside. Windows of increasing size scan the zone across space and time: the likelihood ratio is calculated for each window (comparing the ratio of observed to expected cases inside and outside), and the window with the maximum value is taken as the most probable cluster, and so on. Under the null hypothesis of spatial and temporal randomness, these events are distributed according to a known discrete-state random process (Poisson or Bernoulli) whose parameters can be estimated, so it is possible to test whether or not the null hypothesis holds in a specific area. To deal with the fire data, the space-time permutation scan statistic was applied, since it does not require the explicit specification of the population at risk in each cylinder. The case study is daily fire detection in Florida using the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product during the period 2003-2006. As a result, statistically significant clusters have been identified. Performing the analyses over the entire period, three of the five most likely clusters were identified in the forest areas in the north of the state; the other two clusters cover a large zone in the south, corresponding to agricultural land and the prairies in the Everglades. Furthermore, the analyses were performed separately for the four years to assess whether the wildfires recur each year during the same period. It emerges that clusters of forest fires are more frequent in hot seasons (spring and summer), while in the southern areas they are present throughout the whole year. Analysing the distribution of fires to evaluate whether they are statistically more frequent in certain areas and/or periods of the year can be useful to support fire management and to focus prevention measures.
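
    A stripped-down version of the scan idea can be sketched as follows: for each space-time cylinder, compare the observed count with the expectation implied by the marginals, rank cylinders by a Poisson likelihood ratio, and assess significance by Monte Carlo permutation. The grid, fixed window, and permutation scheme below are simplifications for illustration; production analyses use variable-size windows and the full space-time permutation null.

        import numpy as np

        rng = np.random.default_rng(4)
        cells, days, window = 25, 60, 5
        counts = rng.poisson(1.0, size=(cells, days))
        counts[3, 20:25] += rng.poisson(6.0, size=5)      # planted hot spot

        def llr(obs, exp, total):
            """Poisson likelihood ratio for one cylinder (0 unless obs > exp)."""
            if obs <= exp:
                return 0.0
            rest_o, rest_e = total - obs, total - exp
            rest = rest_o * np.log(rest_o / rest_e) if rest_o > 0 else 0.0
            return obs * np.log(obs / exp) + rest

        def max_llr(c):
            total, per_cell = c.sum(), c.sum(axis=1)
            best = 0.0
            for t in range(c.shape[1] - window + 1):
                in_win = c[:, t:t + window].sum(axis=1)
                exp = per_cell * (in_win.sum() / total)   # marginal expectation
                best = max(best, max(llr(o, e, total) for o, e in zip(in_win, exp)))
            return best

        obs_llr = max_llr(counts)
        # Crude null: shuffle each cell's counts over time (a simplified
        # stand-in for the space-time permutation null).
        null = [max_llr(rng.permuted(counts, axis=1)) for _ in range(99)]
        p = (1 + sum(n >= obs_llr for n in null)) / 100
        print(f"max LLR = {obs_llr:.1f}, Monte Carlo p = {p:.2f}")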

  1. Hydrometeorological and statistical analyses of heavy rainfall in Midwestern USA

    NASA Astrophysics Data System (ADS)

    Thorndahl, S.; Smith, J. A.; Krajewski, W. F.

    2012-04-01

    During the last two decades, the midwestern states of the United States have repeatedly been afflicted by heavy, flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raises the question of whether it is possible to derive general characteristics of the space-time structures of these heavy storms. This is important for understanding hydrometeorological features, e.g., how storms evolve and with what frequency we can expect extreme storms to occur. In the literature, most studies of extreme rainfall are based on point measurements (rain gauges). However, with high-resolution, high-quality radar observation periods now exceeding two decades, it is possible to perform long-term spatio-temporal statistical analyses of extremes. This makes it possible to link return periods to distributed rainfall estimates and to study the precipitation structures that cause floods. Statistical frequency analyses based on radar observations, however, introduce challenges in converting radar reflectivity observations to "true" rainfall that do not arise in traditional analyses of rain gauge data. It is, for example, difficult to distinguish reflectivity from high-intensity rain from reflectivity from other hydrometeors such as hail, especially using the single-polarization radars employed in this study. Furthermore, reflectivity from the bright band (melting layer) should be discarded, and anomalous propagation should be corrected, in order to produce valid statistics of extreme radar rainfall. Other challenges include combining observations from several radars into one mosaic, bias correction against rain gauges, range correction, Z-R relationships, etc. The present study analyzes radar rainfall observations from 1996 to 2011 based on the American NEXRAD network of radars over an area covering parts of Iowa, Wisconsin, Illinois, and Lake Michigan. The radar observations are processed using Hydro-NEXRAD algorithms to produce rainfall estimates with a spatial resolution of 1 km and a temporal resolution of 15 min. The rainfall estimates are bias-corrected on a daily basis using a network of rain gauges. Besides a thorough evaluation of the challenges described above, the study includes suggestions for frequency analysis methods as well as studies of the hydrometeorological features of single events.

  2. Partners or Partners in Crime? The Relationship Between Criminal Associates and Criminogenic Thinking.

    PubMed

    Whited, William H; Wagar, Laura; Mandracchia, Jon T; Morgan, Robert D

    2017-04-01

    Meta-analyses examining the risk factors for recidivism have identified the importance of ties with criminal associates as well as thoughts and attitudes conducive to the continuance of criminal behavior (e.g., criminogenic thinking). Criminologists have theorized that a direct relationship exists between association with criminal peers and the development of criminogenic thinking. The present study empirically explored the relationship between criminal associates and criminogenic thinking in 595 adult male inmates in the United States. It was hypothesized that the proportion of free time spent with, and the number of, criminal associates would be associated with criminogenic thinking, as measured by two self-report instruments, the Measure of Offender Thinking Styles-Revised (MOTS-R) and the Psychological Inventory of Criminal Thinking Styles (PICTS). Hierarchical linear regression analyses demonstrated that the proportion of free time spent with criminal associates statistically predicted criminogenic thinking when controlling for demographic variables. The implications of these findings for correctional practice (including assessment and intervention) as well as future research are discussed.

  3. Microplate-based filter paper assay to measure total cellulase activity.

    PubMed

    Xiao, Zhizhuang; Storms, Reginald; Tsang, Adrian

    2004-12-30

    The standard filter paper assay (FPA) published by the International Union of Pure and Applied Chemistry (IUPAC) is widely used to determine total cellulase activity. However, the IUPAC method is not suitable for the parallel analyses of large sample numbers. We describe here a microplate-based method for assaying large sample numbers. To achieve this, we reduced the enzymatic reaction volume to 60 microl from the 1.5 ml used in the IUPAC method. The modified 60-microl format FPA can be carried out in 96-well assay plates. Statistical analyses showed that the cellulase activities of commercial cellulases from Trichoderma reesei and Aspergillus species determined with our 60-microl format FPA were not significantly different from the activities measured with the standard FPA. Our results also indicate that the 60-microl format FPA is quantitative and highly reproducible. Moreover, the addition of excess beta-glucosidase increased the sensitivity of the assay by up to 60%.

  4. Association between smoking status and the parameters of vascular structure and function in adults: results from the EVIDENT study.

    PubMed

    Recio-Rodriguez, Jose I; Gomez-Marcos, Manuel A; Patino Alonso, Maria C; Martin-Cantera, Carlos; Ibañez-Jalon, Elisa; Melguizo-Bejar, Amor; Garcia-Ortiz, Luis

    2013-12-01

    The present study analyses the relation between smoking status and the parameters used to assess vascular structure and function. This cross-sectional, multi-centre study involved a random sample of 1553 participants from the EVIDENT study. Smoking status, the peripheral augmentation index, and the ankle-brachial index were measured in all participants. In a small subset of the main population (265 participants), carotid intima-media thickness (IMT) and pulse wave velocity were also measured. After controlling for the effects of age, sex, and other risk factors, current smokers had higher values of carotid IMT (p = 0.011). Current smokers also had higher pulse wave velocity and lower mean ankle-brachial index values, although neither difference reached statistical significance. Among the parameters of vascular structure and function analysed, only IMT showed an association with smoking status after adjusting for confounders.

  5. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram, S.; Murthy, Pappu

    2013-01-01

    This report describes the statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain-weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties characterized, using a commercially available statistical package, are the ultimate strength, modulus, and Poisson's ratio. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.

  6. Worksite Tobacco Prevention: A Randomized, Controlled Trial of Adoption, Dissemination Strategies, and Aggregated Health-Related Outcomes across Companies.

    PubMed

    Friedrich, Verena; Brügger, Adrian; Bauer, Georg F

    2015-01-01

    Evidence-based public health requires knowledge about the successful dissemination of public health measures. This study analyses (a) the changes in worksite tobacco prevention (TP) in the Canton of Zurich, Switzerland, between 2007 and 2009; (b1) the results of a multistep versus a "brochure only" dissemination strategy; (b2) the results of a monothematic versus a comprehensive dissemination strategy, each aiming to get companies to adopt TP measures; and (c) whether worksite TP is associated with health-related outcomes. A longitudinal design with randomized control groups was applied. Data on worksite TP and health-related outcomes were gathered by a written questionnaire (baseline n = 1627; follow-up n = 1452) and analysed using descriptive statistics, nonparametric procedures, and ordinal regression models. TP measures at worksites improved slightly between 2007 and 2009. The multistep dissemination was superior to the "brochure only" condition. No significant differences between the monothematic and the comprehensive dissemination strategies were observed. However, improvements in TP measures at worksites were associated with improvements in health-related outcomes. Although dissemination was approached at a mass scale, little change in the advocated adoption of TP measures was observed, suggesting the need for even more aggressive outreach or an acceptance that these channels do not seem to be sufficiently effective.

  7. Polish adaptation of three self-report measures of job stressors: the Interpersonal Conflict at Work Scale, the Quantitative Workload Inventory and the Organizational Constraints Scale.

    PubMed

    Baka, Łukasz; Bazińska, Róża

    2016-01-01

    The objective of the present study was to test the psychometric properties, reliability and validity of three job stressor measures, namely, the Interpersonal Conflict at Work Scale, the Organizational Constraints Scale and the Quantitative Workload Inventory. The study was conducted on two samples (N = 382 and 3368) representing a wide range of occupations. The estimation of internal consistency with Cronbach's α and the test-retest method as well as both exploratory and confirmatory factor analyses were the main statistical methods. The internal consistency of the scales proved satisfactory, ranging from 0.80 to 0.90 for Cronbach's α test and from 0.72 to 0.86 for the test-retest method. The one-dimensional structure of the three measurements was confirmed. The three scales have acceptable fit to the data. The one-factor structures and other psychometric properties of the Polish version of the scales seem to be similar to those found in the US version of the scales. It was also proved that the three job stressors are positively related to all the job strain measures. The Polish versions of the three analysed scales can be used to measure the job stressors in Polish conditions.
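
    Cronbach's alpha, the internal-consistency measure used above, is simple to compute from an item-score matrix. The sketch below uses invented data standing in for one of the scales; it shows the formula's mechanics only.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = scale items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(5)
        trait = rng.normal(size=(300, 1))                      # latent stressor level
        items = trait + rng.normal(scale=0.8, size=(300, 6))   # six correlated items
        print(f"alpha = {cronbach_alpha(items):.2f}")          # roughly 0.9 here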

  8. Polish adaptation of three self-report measures of job stressors: the Interpersonal Conflict at Work Scale, the Quantitative Workload Inventory and the Organizational Constraints Scale

    PubMed Central

    Baka, Łukasz; Bazińska, Róża

    2016-01-01

    Aim. The objective of the present study was to test the psychometric properties, reliability and validity of three job stressor measures, namely, the Interpersonal Conflict at Work Scale, the Organizational Constraints Scale and the Quantitative Workload Inventory. Method. The study was conducted on two samples (N = 382 and 3368) representing a wide range of occupations. The estimation of internal consistency with Cronbach's α and the test–retest method as well as both exploratory and confirmatory factor analyses were the main statistical methods. Results. The internal consistency of the scales proved satisfactory, ranging from 0.80 to 0.90 for Cronbach's α test and from 0.72 to 0.86 for the test–retest method. The one-dimensional structure of the three measurements was confirmed. The three scales have acceptable fit to the data. The one-factor structures and other psychometric properties of the Polish version of the scales seem to be similar to those found in the US version of the scales. It was also proved that the three job stressors are positively related to all the job strain measures. Conclusions. The Polish versions of the three analysed scales can be used to measure the job stressors in Polish conditions. PMID:26652317

  9. Worksite Tobacco Prevention: A Randomized, Controlled Trial of Adoption, Dissemination Strategies, and Aggregated Health-Related Outcomes across Companies

    PubMed Central

    Friedrich, Verena; Brügger, Adrian; Bauer, Georg F.

    2015-01-01

    Evidence-based public health requires knowledge about the successful dissemination of public health measures. This study analyses (a) the changes in worksite tobacco prevention (TP) in the Canton of Zurich, Switzerland, between 2007 and 2009; (b1) the results of a multistep versus a “brochure only” dissemination strategy; (b2) the results of a monothematic versus a comprehensive dissemination strategy, each aiming to get companies to adopt TP measures; and (c) whether worksite TP is associated with health-related outcomes. A longitudinal design with randomized control groups was applied. Data on worksite TP and health-related outcomes were gathered by a written questionnaire (baseline n = 1627; follow-up n = 1452) and analysed using descriptive statistics, nonparametric procedures, and ordinal regression models. TP measures at worksites improved slightly between 2007 and 2009. The multistep dissemination was superior to the “brochure only” condition. No significant differences between the monothematic and the comprehensive dissemination strategies were observed. However, improvements in TP measures at worksites were associated with improvements in health-related outcomes. Although dissemination was approached at a mass scale, little change in the advocated adoption of TP measures was observed, suggesting the need for even more aggressive outreach or an acceptance that these channels do not seem to be sufficiently effective. PMID:26504778

  10. Comparison of Percentage of Syllables Stuttered With Parent-Reported Severity Ratings as a Primary Outcome Measure in Clinical Trials of Early Stuttering Treatment.

    PubMed

    Onslow, Mark; Jones, Mark; O'Brian, Sue; Packman, Ann; Menzies, Ross; Lowe, Robyn; Arnott, Simone; Bridgman, Kate; de Sonneville, Caroline; Franken, Marie-Christine

    2018-04-17

    This report investigates whether parent-reported stuttering severity ratings (SRs) provide similar estimates of effect size as percentage of syllables stuttered (%SS) for randomized trials of early stuttering treatment with preschool children. Data sets from 3 randomized controlled trials of an early stuttering intervention were selected for analyses. Analyses included median changes and 95% confidence intervals per treatment group, Bland-Altman plots, analysis of covariance, and Spearman rho correlations. Both SRs and %SS showed large effect sizes from pretreatment to follow-up, although correlations between the 2 measures were moderate at best. Absolute agreement between the 2 measures improved as percentage reduction of stuttering frequency and severity increased, probably due to innate measurement limitations for participants with low baseline severity. Analysis of covariance for the 3 trials showed consistent results. There is no statistical reason to favor %SS over parent-reported stuttering SRs as primary outcomes for clinical trials of early stuttering treatment. However, there are logistical reasons to favor parent-reported stuttering SRs. We conclude that parent-reported rating of the child's typical stuttering severity for the week or month prior to each assessment is a justifiable alternative to %SS as a primary outcome measure in clinical trials of early stuttering treatment.
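
    A Bland-Altman agreement plot of the kind used above takes only a few lines. The data below are invented; in practice the two stuttering measures would first be placed on comparable scales.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(6)
        severity = rng.uniform(1, 8, size=60)
        m1 = severity + rng.normal(scale=0.7, size=60)   # e.g., rescaled %SS
        m2 = severity + rng.normal(scale=0.7, size=60)   # e.g., parent-reported SRs

        mean, diff = (m1 + m2) / 2, m1 - m2
        bias, sd = diff.mean(), diff.std(ddof=1)

        plt.scatter(mean, diff, s=12)
        plt.axhline(bias)                                # mean difference
        for k in (-1.96, 1.96):
            plt.axhline(bias + k * sd, linestyle="--")   # 95% limits of agreement
        plt.xlabel("mean of the two measures")
        plt.ylabel("difference between measures")
        plt.title("Bland-Altman agreement plot")
        plt.show()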

  11. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) come from small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods, with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined whether rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than a true differential effect among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated at the design stage to minimize erroneous conclusions.

  12. Utility of the Personality Inventory for DSM-5-Brief Form (PID-5-BF) in the Measurement of Maladaptive Personality and Psychopathology.

    PubMed

    Anderson, Jaime L; Sellbom, Martin; Salekin, Randall T

    2018-07-01

    The Diagnostic and Statistical Manual of Mental Disorders-Fifth edition (DSM-5) Personality and Personality Disorders workgroup developed the Personality Inventory for the DSM-5 (PID-5) for the assessment of the alternative trait model for DSM-5. Along with this measure, the American Psychiatric Association published an abbreviated version, the PID-5-Brief form (PID-5-BF). Although this measure is available on the DSM-5 website for use, only two studies have evaluated its psychometric properties and validity and no studies have examined the U.S. version of this measure. The current study evaluated the reliability, factor structure, and construct validity of PID-5-BF scale scores. This included an evaluation of the scales' associations with Section II PDs, a well-validated dimensional measure of personality psychopathology, and broad externalizing and internalizing psychopathology measures. We found support for the reliability of PID-5-BF scales as well as for the factor structure of the measure. Furthermore, a series of correlation and regression analyses showed conceptually expected associations between PID-5-BF and external criterion variables. Finally, we compared the correlations with external criterion measures to those of the full-length PID-5 and PID-5-Short form. Intraclass correlation analyses revealed a comparable pattern of correlations across all three measures, thereby supporting the use of the PID-5-BF as a screening measure of dimensional maladaptive personality traits.

  13. Differences between Subjective Balanced Occlusion and Measurements Reported With T-Scan III

    PubMed Central

    Lila-Krasniqi, Zana; Shala, Kujtim; Krasniqi, Teuta Pustina; Bicaj, Teuta; Ahmedi, Enis; Dula, Linda; Dragusha, Arlinda Tmava; Guguvcevski, Ljuben

    2017-01-01

    BACKGROUND: The aetiology of temporomandibular disorder (TMD) is multifactorial, and numerous studies have suggested that occlusion may be of great importance in its pathogenesis. AIM: The aim of this study is to determine whether any direct relationship exists between balanced occlusion and TMD and to evaluate the differences between subjectively reported balanced occlusion and measurements recorded with the T-Scan III electronic system. MATERIAL AND METHODS: A total of 54 subjects were divided into three groups based on anamnesis; all responded to the Fonseca questionnaire, and clinical measurements were analysed with the T-Scan III electronic system. The first study group comprised participants with fixed dentures with prosthetic ceramic restorations. The second study group comprised symptomatic participants with TMD. The third, control group comprised healthy participants with full-arch dentition who completed a subjective questionnaire documenting the absence of jaw pain, joint noise, locking, and any history of TMD. Occlusal balance was reported subjectively through the Fonseca questionnaire and compared with occlusion analysed with the T-Scan III system. RESULTS: Attributive data were expressed as structure percentages, and differences at P < 0.05 were considered significant. Comparing the subjectively reported occlusal balance with the measurements analysed with the T-Scan III system revealed significant differences (P < 0.001) in all three groups. CONCLUSION: There were statistically significant differences in balanced occlusion in all three groups, and the subjective data did not agree with the measurements recorded by the T-Scan III device. PMID:28932311

  14. Analysis of Trace Siderophile Elements at High Spatial Resolution Using Laser Ablation ICP-MS

    NASA Astrophysics Data System (ADS)

    Campbell, A. J.; Humayun, M.

    2006-05-01

    Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is an increasingly important method of performing spatially resolved trace element analyses. Over the last several years we have applied this technique to measure siderophile element distributions at the ppm level in a variety of natural and synthetic samples, especially metallic phases in meteorites and experimental run products intended for trace element partitioning studies. These samples frequently require trace element analyses at a finer spatial resolution (25 microns or better) than is typically attained using LA-ICP-MS. In this presentation we review analytical protocols that were developed to optimize the LA-ICP-MS measurements for high spatial resolution. Particular attention is paid to the trade-offs involving sensitivity, ablation pit depth and diameter, background levels, and the number of elements measured. To maximize signal-to-background ratios and avoid difficulties associated with ablating to depths greater than the ablation pit diameter, measurement involved integration of rapidly varying, transient but well-behaved signals. The abundances of platinum group elements and other siderophile elements in ferrous metals were calibrated against well-characterized standards, including iron meteorites and NIST-certified steels. The calibrations can be set against the known abundance of an independently determined element, but normalization to 100 percent can also be employed and was more useful in many circumstances. The evaluation of uncertainties incorporated counting statistics as well as a measure of instrumental uncertainty, determined by replicate analyses of the standards. These methods have led to a number of insights into the formation and chemical processing of metal in the early solar system.

  15. Validation of Aura Microwave Limb Sounder stratospheric water vapor measurements by the NOAA frost point hygrometer.

    PubMed

    Hurst, Dale F; Lambert, Alyn; Read, William G; Davis, Sean M; Rosenlof, Karen H; Hall, Emrys G; Jordan, Allen F; Oltmans, Samuel J

    2014-02-16

    Differences between stratospheric water vapor measurements by NOAA frost point hygrometers (FPHs) and the Aura Microwave Limb Sounder (MLS) are evaluated for the period August 2004 through December 2012 at Boulder, Colorado, Hilo, Hawaii, and Lauder, New Zealand. Two groups of MLS profiles coincident with the FPH soundings at each site are identified using unique sets of spatiotemporal criteria. Before evaluating the differences between coincident FPH and MLS profiles, each FPH profile is convolved with the MLS averaging kernels for eight pressure levels from 100 to 26 hPa (~16 to 25 km) to reduce its vertical resolution to that of the MLS water vapor retrievals. The mean FPH - MLS differences at every pressure level (100 to 26 hPa) are well within the combined measurement uncertainties of the two instruments. However, the mean differences at 100 and 83 hPa are statistically significant and negative, ranging from -0.46 ± 0.22 ppmv (-10.3 ± 4.8%) to -0.10 ± 0.05 ppmv (-2.2 ± 1.2%). Mean differences at the six pressure levels from 68 to 26 hPa are on average 0.8% (0.04 ppmv), and only a few are statistically significant. The FPH - MLS differences at each site are examined for temporal trends using weighted linear regression analyses. The vast majority of trends determined here are not statistically significant, and most are smaller than the minimum trends detectable in this analysis. Except at 100 and 83 hPa, the average agreement between MLS retrievals and FPH measurements of stratospheric water vapor is better than 1%.
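
    The weighted linear regression used for the trend analysis above can be sketched with statsmodels, weighting each FPH - MLS difference by its inverse variance. The numbers below are synthetic, not the study's data; they only show the mechanics of a weighted trend fit and its standard error.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        t = np.linspace(2004.6, 2012.9, 90)              # sounding dates (years)
        sigma = rng.uniform(0.1, 0.3, size=t.size)       # per-point 1-sigma (ppmv)
        diff = 0.005 * (t - t.mean()) + rng.normal(scale=sigma)

        X = sm.add_constant(t - t.mean())
        fit = sm.WLS(diff, X, weights=1.0 / sigma**2).fit()
        print(f"trend = {fit.params[1]:.4f} +/- {fit.bse[1]:.4f} ppmv/yr")
        # A drift is deemed detectable only when the slope clearly exceeds its
        # standard error, which is why most differences above show no trend.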

  16. Constructing three emotion knowledge tests from the invariant measurement approach

    PubMed Central

    Prieto, Gerardo; Burin, Debora I.

    2017-01-01

    Background Psychological constructionist models like the Conceptual Act Theory (CAT) postulate that complex states such as emotions are composed of basic psychological ingredients that are more clearly respected by the brain than basic emotions. The objective of this study was the construction and initial validation of Emotion Knowledge (EK) measures within the CAT frame by means of an invariant measurement approach, the Rasch Model (RM). Psychological distance theory was used to inform item generation. Methods Three EK tests—emotion vocabulary (EV), close emotional situations (CES) and far emotional situations (FES)—were constructed and tested with the RM in a community sample of 100 females and 100 males (age range: 18–65), both separately and conjointly. Results It was corroborated that data-RM fit was sufficient. Then, the effect of type of test and emotion on Rasch-modelled item difficulty was tested. Significant effects of emotion on EK item difficulty were found, but the only statistically significant difference was that between “happiness” and the remaining emotions; neither type of test nor interaction effects on EK item difficulty were statistically significant. The testing of gender differences was carried out after corroborating that differential item functioning (DIF) was not a plausible alternative hypothesis for the results. No statistically significant sex-related differences were found in EV, CES, FES, or total EK. However, the sign of d indicates that female participants were consistently better than male ones, a result that will be of interest for future meta-analyses. Discussion The three EK tests are ready to be used as components of a higher-level measurement process. PMID:28929013

  17. Validation of Aura Microwave Limb Sounder stratospheric water vapor measurements by the NOAA frost point hygrometer

    NASA Astrophysics Data System (ADS)

    Hurst, Dale F.; Lambert, Alyn; Read, William G.; Davis, Sean M.; Rosenlof, Karen H.; Hall, Emrys G.; Jordan, Allen F.; Oltmans, Samuel J.

    2014-02-01

    Differences between stratospheric water vapor measurements by NOAA frost point hygrometers (FPHs) and the Aura Microwave Limb Sounder (MLS) are evaluated for the period August 2004 through December 2012 at Boulder, Colorado, Hilo, Hawaii, and Lauder, New Zealand. Two groups of MLS profiles coincident with the FPH soundings at each site are identified using unique sets of spatiotemporal criteria. Before evaluating the differences between coincident FPH and MLS profiles, each FPH profile is convolved with the MLS averaging kernels for eight pressure levels from 100 to 26 hPa (~16 to 25 km) to reduce its vertical resolution to that of the MLS water vapor retrievals. The mean FPH - MLS differences at every pressure level (100 to 26 hPa) are well within the combined measurement uncertainties of the two instruments. However, the mean differences at 100 and 83 hPa are statistically significant and negative, ranging from -0.46 ± 0.22 ppmv (-10.3 ± 4.8%) to -0.10 ± 0.05 ppmv (-2.2 ± 1.2%). Mean differences at the six pressure levels from 68 to 26 hPa are on average 0.8% (0.04 ppmv), and only a few are statistically significant. The FPH - MLS differences at each site are examined for temporal trends using weighted linear regression analyses. The vast majority of trends determined here are not statistically significant, and most are smaller than the minimum trends detectable in this analysis. Except at 100 and 83 hPa, the average agreement between MLS retrievals and FPH measurements of stratospheric water vapor is better than 1%.

  18. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298
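
    The ROC statistics and diagnostic likelihood ratios (DLRs) described above can be reproduced in outline with scikit-learn. The simulated checklist scores and the cut points 8 and 30 below merely echo the paper's examples; the data are synthetic, so the exact values will not match the published ones.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)
        mood = rng.random(500) < 0.3                       # true mood disorder
        score = np.where(mood, rng.normal(24, 9, 500), rng.normal(14, 8, 500))
        score = score.clip(0)

        print(f"AUC = {roc_auc_score(mood, score):.2f}")

        def likelihood_ratio(result_mask, truth):
            """DLR of a test result: P(result | disorder) / P(result | none)."""
            return result_mask[truth].mean() / max(result_mask[~truth].mean(), 1e-6)

        print(f"DLR of score < 8:  {likelihood_ratio(score < 8, mood):.2f}")
        print(f"DLR of score > 30: {likelihood_ratio(score > 30, mood):.1f}")
        # As in the paper, a low score argues against a mood disorder (DLR well
        # below 1) and a very high score argues for it (DLR >> 1), though the
        # exact values depend on the simulated distributions.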

  19. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.

  20. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed.
