Sample records for statistical analysis consisted

  1. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  2. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  3. Notes on numerical reliability of several statistical analysis programs

    USGS Publications Warehouse

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
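
    For illustration, here is a minimal sketch (not the USGS benchmark code) of what such a benchmark does: compute a statistic on a reference column whose theoretical value is known exactly, and report the discrepancy. The column definitions follow Wilkinson's published NASTY variables and are an assumption here.

    ```python
    # Sketch of a numerical-reliability check in the spirit of NASTY/ANASTY.
    import numpy as np

    x = np.arange(1.0, 10.0)       # X: 1..9
    big = 99999990.0 + x           # BIG: 99999991..99999999 (assumed NASTY column)

    # The sd of 1..9 is sqrt(7.5); adding a constant offset must leave it
    # unchanged, so BIG has exactly the same theoretical sd as X.
    theoretical_sd = np.sqrt(7.5)  # 2.7386127875...

    for name, col in [("X", x), ("BIG", big)]:
        sd = col.std(ddof=1)
        print(f"{name}: sd = {sd:.10f}, error = {sd - theoretical_sd:+.2e}")
    # A numerically unreliable one-pass variance formula fails on BIG;
    # a benchmark of this kind flags exactly such discrepancies.
    ```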

  4. A Frequency Domain Approach to Pretest Analysis Model Correlation and Model Updating for the Mid-Frequency Range

    DTIC Science & Technology

    2009-02-01

    range of modal analysis and the high frequency region of statistical energy analysis, is referred to as the mid-frequency range. The corresponding...predictions. The averaging process is consistent with the averaging done in statistical energy analysis for stochastic systems. The FEM will always

  5. Statistical analysis of weigh-in-motion data for bridge design in Vermont.

    DOT National Transportation Integrated Search

    2014-10-01

    This study investigates the suitability of the HL-93 live load model recommended by AASHTO LRFD Specifications for its use in the analysis and design of bridges in Vermont. The method of approach consists in performing a statistical analysis of w...

  6. An Analysis of Research Methods and Statistical Techniques Used by Doctoral Dissertation at the Education Sciences in Turkey

    ERIC Educational Resources Information Center

    Karadag, Engin

    2010-01-01

    To assess research methods and analysis of statistical techniques employed by educational researchers, this study surveyed unpublished doctoral dissertations from 2003 to 2007. Frequently used research methods consisted of experimental research, surveys, correlational studies, and case studies. Descriptive statistics, t-test, ANOVA, factor…

  7. PV System Component Fault and Failure Compilation and Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  8. Vibratory Response and Acoustical Radiation of a Water-Loaded, Turbulence-Excited Plate-Cavity System--Option 6

    DTIC Science & Technology

    1975-07-01

    Statistical Energy Analysis MAJOR ASSUMPTIONS AND LIMITATIONS. Simply supported panel is considered to be vibrating freely in a mode consisting of e...Shells: Statistical Energy Analysis. Modal Coupling and Nonresonant Transmission. Univ Houston, Dept Mech Eng Tech Report 21 (Aug 1970); also J...Oscillators. J. Acoust. Soc. Am., Vol. 34, No. 5 (May 1962). 14. Ungar, E.E., Fundamentals of Statistical Energy Analysis of Vibrating Systems, Tech

  9. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  10. Performance Analysis of Live-Virtual-Constructive and Distributed Virtual Simulations: Defining Requirements in Terms of Temporal Consistency

    DTIC Science & Technology

    2009-12-01

    events. Work associated with aperiodic tasks has the same statistical behavior and the same timing requirements. The timing deadlines are soft. • Sporadic...answers, but it is possible to calculate how precise the estimates are. Simulation-based performance analysis of a model includes a statistical...to evaluate all possible states in a timely manner. This is the principal reason for resorting to simulation and statistical analysis to evaluate

  11. A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis

    ERIC Educational Resources Information Center

    Gonzalez, Oscar; MacKinnon, David P.

    2018-01-01

    Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to…

  12. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    PubMed

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, of which 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement on the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the adjusted odds ratio (aOR) for the third group compared to the first was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and primary endpoints in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.
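
    As a hedged sketch of the review's final step, the snippet below fits a logistic regression of test concordance on grouped study size and reports adjusted odds ratios with confidence intervals. All data, group cutpoints, and variable names are hypothetical stand-ins, not the review's dataset.

    ```python
    # Sketch: adjusted odds ratios from a logistic regression (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    size_group = rng.integers(1, 4, n)   # terciles of "number of included patients"
    design = rng.integers(0, 2, n)       # 0 = OBS, 1 = RCT (hypothetical covariate)
    logit_p = -0.3 - 0.6 * (size_group == 3) + 0.1 * design
    concordant = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = pd.DataFrame({
        "grp2": (size_group == 2).astype(int),   # group 1 is the reference
        "grp3": (size_group == 3).astype(int),
        "rct": design,
    })
    fit = sm.Logit(concordant, sm.add_constant(X)).fit(disp=0)
    print(np.exp(fit.params))      # adjusted odds ratios (aOR)
    print(np.exp(fit.conf_int()))  # 95% CIs, cf. aOR Grp3 = 0.52 [0.31-0.89]
    ```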

  13. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency

    PubMed Central

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Background: Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. Methods: From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. Results: 826 studies were included in the review, of which 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement on the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the adjusted odds ratio (aOR) for the third group compared to the first was aOR Grp3 = 0.52 [0.31–0.89] (P value = 0.009). Conclusion: Variables in OBS and primary endpoints in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies. PMID:27716793

  14. Wood Products Analysis

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Structural Reliability Consultants' computer program creates graphic plots showing the statistical parameters of glue laminated timbers, or 'glulam.' The company president, Dr. Joseph Murphy, read in NASA Tech Briefs about work related to analysis of Space Shuttle surface tile strength performed for Johnson Space Center by Rockwell International Corporation. Analysis led to a theory of 'consistent tolerance bounds' for statistical distributions, applicable in industrial testing where statistical analysis can influence product development and use. Dr. Murphy then obtained the Tech Support Package that covers the subject in greater detail. The TSP became the basis for Dr. Murphy's computer program PC-DATA, which he is marketing commercially.

  15. Web-Based Statistical Sampling and Analysis

    ERIC Educational Resources Information Center

    Quinn, Anne; Larson, Karen

    2016-01-01

    Consistent with the Common Core State Standards for Mathematics (CCSSI 2010), the authors write that they have asked students to do statistics projects with real data. To obtain real data, their students use the free Web-based app, Census at School, created by the American Statistical Association (ASA) to help promote civic awareness among school…

  16. Random-Effects Meta-Analysis of Time-to-Event Data Using the Expectation-Maximisation Algorithm and Shrinkage Estimators

    ERIC Educational Resources Information Center

    Simmonds, Mark C.; Higgins, Julian P. T.; Stewart, Lesley A.

    2013-01-01

    Meta-analysis of time-to-event data has proved difficult in the past because consistent summary statistics often cannot be extracted from published results. The use of individual patient data allows for the re-analysis of each study in a consistent fashion and thus makes meta-analysis of time-to-event data feasible. Time-to-event data can be…

  17. [Statistical validity of the Mexican Food Security Scale and the Latin American and Caribbean Food Security Scale].

    PubMed

    Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo

    2014-01-01

    This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales were consistent instruments, composed of independent, properly calibrated and adequately sorted items, arranged in a continuum of severity. The following tests were developed: sorting of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; and sensitivity analysis through hypothesis tests of mean differences. The tests showed that both scales meet the required attributes and are robust statistical instruments for food security measurement. This is relevant given that the lack-of-access-to-food indicator, included in Mexico's multidimensional poverty measurement, is calculated with EMSA.
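
    A minimal sketch of one of the tests named above, Cronbach's alpha, computed from a respondents-by-items score matrix; the simulated data below are purely illustrative, not the EMSA/ELCSA responses.

    ```python
    # Cronbach's alpha: internal-consistency check for a set of scale items.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: shape (n_respondents, k_items)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))   # common severity dimension
    items = (latent + rng.normal(scale=0.8, size=(200, 6)) > 0).astype(float)
    print(f"alpha = {cronbach_alpha(items):.3f}")
    ```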

  18. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    PubMed

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated.
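
    A sketch of the kind of automation the paper describes: a small program walks a directory of researcher Excel files, checks that each uses the agreed standard layout, and pools the concentration-response data for automated analysis. The file layout and column names here are illustrative assumptions, not the ACuteTox format.

    ```python
    # Pool standardised Excel workbooks into one table for automated analysis.
    from pathlib import Path
    import pandas as pd

    REQUIRED = ["compound", "concentration", "response"]

    def collect(data_dir: str) -> pd.DataFrame:
        frames = []
        for path in sorted(Path(data_dir).glob("*.xlsx")):
            df = pd.read_excel(path)                 # requires openpyxl
            missing = [c for c in REQUIRED if c not in df.columns]
            if missing:                              # enforce the standard format
                raise ValueError(f"{path.name}: missing columns {missing}")
            df["source_file"] = path.name            # provenance for auditing
            frames.append(df[REQUIRED + ["source_file"]])
        return pd.concat(frames, ignore_index=True)

    # pooled = collect("acutetox_data/")  # then feed into the analysis scripts
    ```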

  19. MSUSTAT.

    ERIC Educational Resources Information Center

    Mauriello, David

    1984-01-01

    Reviews an interactive statistical analysis package (designed to run on 8- and 16-bit machines that utilize CP/M 80 and MS-DOS operating systems), considering its features and uses, documentation, operation, and performance. The package consists of 40 general purpose statistical procedures derived from the classic textbook "Statistical…

  20. Methods for collection and analysis of aquatic biological and microbiological samples

    USGS Publications Warehouse

    Greeson, Phillip E.; Ehlke, T.A.; Irwin, G.A.; Lium, B.W.; Slack, K.V.

    1977-01-01

    Chapter A4 contains methods used by the U.S. Geological Survey to collect, preserve, and analyze waters to determine their biological and microbiological properties. Part 1 discusses biological sampling and sampling statistics. The statistical procedures are accompanied by examples. Part 2 consists of detailed descriptions of more than 45 individual methods, including those for bacteria, phytoplankton, zooplankton, seston, periphyton, macrophytes, benthic invertebrates, fish and other vertebrates, cellular contents, productivity, and bioassays. Each method is summarized, and the application, interferences, apparatus, reagents, collection, analysis, calculations, reporting of results, precision and references are given. Part 3 consists of a glossary. Part 4 is a list of taxonomic references.

  21. Planck 2015 results. XVI. Isotropy and statistics of the CMB

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Akrami, Y.; Aluri, P. K.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Casaponsa, B.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Contreras, D.; Couchot, F.; Coulais, A.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Fergusson, J.; Fernandez-Cobos, R.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Liu, H.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mikkelsen, K.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Pant, N.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Rotti, A.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Souradeep, T.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.

    2016-09-01

    We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The "Cold Spot" is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.

  22. Planck 2015 results: XVI. Isotropy and statistics of the CMB

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Akrami, Y.; ...

    2016-09-20

    In this paper, we test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Finally, where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.

  23. Upward Flame Propagation and Wire Insulation Flammability: 2006 Round Robin Data Analysis

    NASA Technical Reports Server (NTRS)

    Hirsch, David B.

    2007-01-01

    This viewgraph document reviews test results from tests of different materials used for wire insulation with respect to flame propagation and flammability. The presentation focused on investigating data variability both within and between laboratories; evaluated the between-laboratory consistency through the consistency statistic h, which indicates how one laboratory's cell average compares with averages from other labs; evaluated the within-laboratory consistency through the consistency statistic k, which indicates how one laboratory's within-laboratory variability compares with the combined variability of the other labs; and tested extreme results to determine whether they arose by chance or from nonrandom causes (human error, instrument calibration shift, non-adherence to procedures, etc.).

  24. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
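
    To make the two checks concrete, here is a hedged sketch: an error-weighted mean of published R0 estimates and a regression test for a time trend (a bandwagon effect would show up as a significant slope). The three values below are placeholders, not Malkin's data.

    ```python
    # Weighted mean and trend test over published R0 estimates (toy inputs).
    import numpy as np
    from scipy import stats

    year = np.array([1995.0, 2005.0, 2012.0])
    r0 = np.array([8.1, 7.9, 8.3])    # kpc (hypothetical)
    err = np.array([0.4, 0.3, 0.2])   # published 1-sigma errors (hypothetical)

    w = 1.0 / err**2
    r0_mean = np.sum(w * r0) / np.sum(w)   # inverse-variance weighted mean
    r0_err = np.sqrt(1.0 / np.sum(w))
    print(f"R0 = {r0_mean:.2f} +/- {r0_err:.2f} kpc")

    slope, _, _, p_value, _ = stats.linregress(year, r0)
    print(f"trend: {slope:+.4f} kpc/yr, p = {p_value:.2f}")
    ```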

  25. A Finite-Volume "Shaving" Method for Interfacing NASA/DAO's Physical Space Statistical Analysis System to the Finite-Volume GCM with a Lagrangian Control-Volume Vertical Coordinate

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann; DaSilva, Arlindo; Atlas, Robert (Technical Monitor)

    2001-01-01

    Toward the development of a finite-volume Data Assimilation System (fvDAS), a consistent finite-volume methodology is developed for interfacing the NASA/DAO's Physical Space Statistical Analysis System (PSAS) to the joint NASA/NCAR finite volume CCM3 (fvCCM3). To take advantage of the Lagrangian control-volume vertical coordinate of the fvCCM3, a novel "shaving" method is applied to the lowest few model layers to reflect the surface pressure changes as implied by the final analysis. Analysis increments (from PSAS) to the upper air variables are then consistently put onto the Lagrangian layers as adjustments to the volume-mean quantities during the analysis cycle. This approach is demonstrated to be superior to the conventional method of using independently computed "tendency terms" for surface pressure and upper air prognostic variables.

  26. Statistically Characterizing Intra- and Inter-Individual Variability in Children with Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    King, Bradley R.; Harring, Jeffrey R.; Oliveira, Marcio A.; Clark, Jane E.

    2011-01-01

    Previous research investigating children with Developmental Coordination Disorder (DCD) has consistently reported increased intra- and inter-individual variability during motor skill performance. Statistically characterizing this variability is not only critical for the analysis and interpretation of behavioral data, but also may facilitate our…

  27. What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.

    PubMed

    Patil, Prasad; Peng, Roger D; Leek, Jeffrey T

    2016-07-01

    A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that in only 36% of the studies were the original results replicated. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match with statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals, and thus a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment.
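
    The paper's key computation can be sketched under standard normal-theory assumptions: a 95% prediction interval for the replication estimate, built from the original effect size and both standard errors. The inputs below are hypothetical.

    ```python
    # Prediction interval for a replication effect (normal-theory sketch).
    import numpy as np
    from scipy import stats

    def replication_pi(d_orig, se_orig, se_rep, level=0.95):
        """Prediction interval for the replication estimate: both the original
        and the replication estimates vary, so their variances add."""
        z = stats.norm.ppf(0.5 + level / 2.0)
        half_width = z * np.sqrt(se_orig**2 + se_rep**2)
        return d_orig - half_width, d_orig + half_width

    lo, hi = replication_pi(d_orig=0.45, se_orig=0.20, se_rep=0.15)
    print(f"95% PI: [{lo:.2f}, {hi:.2f}]")
    # An imprecise original (large se_orig) gives a wide interval: many
    # replication results are then "consistent" yet uninformative.
    ```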

  28. Navigation analysis for Viking 1979, option B

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1971-01-01

    A parametric study performed for 48 trans-Mars reference missions in support of the Viking program is reported. The launch dates cover several months in the year 1979, and each launch date has multiple arrival dates in 1980. A plot of launch versus arrival dates with case numbers designated for reference purposes is included. The analysis consists of the computation of statistical covariance matrices based on certain assumptions about the ground-based tracking systems. The error model statistics are listed in tables. Tracking systems were assumed at three sites: Goldstone, California; Canberra, Australia; and Madrid, Spain. The tracking data consisted of range and Doppler measurements taken during the tracking intervals starting at E-30(d) and ending at E-10(d) for the control data and ending at E-18(h) for the knowledge data. The control and knowledge covariance matrices were delivered to the Planetary Mission Analysis Branch for inputs into a delta V dispersion analysis.

  29. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kogalovskii, M.R.

    This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases that are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored-data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMS) satisfy SDB requirements. Some current research directions in SDB systems are considered.

  30. The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison

    PubMed Central

    Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth

    2006-01-01

    Background: Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results: The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per-microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion: The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497

  31. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Akrami, Y.

    In this paper, we test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Finally, where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.

  32. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.
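
    Below is a toy version of the merge test, assuming "statistically consistent" means agreement within combined errors; this is an illustrative reading of the description above, not BaTMAn's actual code.

    ```python
    # Toy merge criterion: combine two spatial elements if their signals agree.
    import numpy as np

    def consistent(x1, e1, x2, e2, k=1.0):
        """True if the two measurements carry the same signal within errors."""
        return abs(x1 - x2) <= k * np.hypot(e1, e2)

    def merge(x1, e1, x2, e2):
        """Inverse-variance weighted combination of two consistent elements."""
        w1, w2 = 1.0 / e1**2, 1.0 / e2**2
        x = (w1 * x1 + w2 * x2) / (w1 + w2)
        return x, np.sqrt(1.0 / (w1 + w2))

    if consistent(1.02, 0.05, 0.98, 0.04):
        print(merge(1.02, 0.05, 0.98, 0.04))  # merged signal and its error
    ```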

  33. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…

  34. Bayesian test for colocalisation between pairs of genetic association studies using summary statistics.

    PubMed

    Giambartolomei, Claudia; Vukcevic, Damjan; Schadt, Eric E; Franke, Lude; Hingorani, Aroon D; Wallace, Chris; Plagnol, Vincent

    2014-05-01

    Genetic association studies, in particular the genome-wide association study (GWAS) design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits, in particular cardiovascular diseases and lipid biomarkers. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by re-analysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our re-analysis supported 26 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, hence highlighting the value of a formal statistical test. In three cases of reported eQTL-lipid pairs (SYPL2, IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with SORT1, GCKR, and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (implemented online at http://coloc.cs.ucl.ac.uk/coloc/). Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases as well as the design of drugs to target disease pathways.
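
    The method's building block is a per-SNP approximate Bayes factor computed from summary statistics alone. Below is a hedged sketch of Wakefield's (2009) approximate Bayes factor, on which colocalisation statistics of this kind are built; the prior variance W is an assumption chosen for illustration.

    ```python
    # Wakefield-style approximate Bayes factor from single-SNP summary stats.
    import numpy as np

    def log_abf(beta, se, W=0.15**2):
        """Log approximate Bayes factor for association at one SNP.

        beta, se: effect estimate and its standard error (GWAS or eQTL);
        W: prior variance of the true effect under the alternative (assumed).
        """
        V = se**2
        z2 = (beta / se) ** 2
        r = W / (V + W)
        # BF = sqrt(1 - r) * exp(r * z^2 / 2), computed on the log scale
        return 0.5 * np.log(1.0 - r) + 0.5 * r * z2

    print(log_abf(beta=0.12, se=0.02))  # larger values favour association
    ```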

  35. Statistical analysis of CCSN/SS7 traffic data from working CCS subnetworks

    NASA Astrophysics Data System (ADS)

    Duffy, Diane E.; McIntosh, Allen A.; Rosenstein, Mark; Willinger, Walter

    1994-04-01

    In this paper, we report on an ongoing statistical analysis of actual CCSN traffic data. The data consist of approximately 170 million signaling messages collected from a variety of different working CCS subnetworks. The key findings from our analysis concern: (1) the characteristics of both the telephone call arrival process and the signaling message arrival process; (2) the tail behavior of the call holding time distribution; and (3) the observed performance of the CCSN with respect to a variety of performance and reliability measurements.

  36. Statistical analysis on experimental calibration data for flowmeters in pressure pipes

    NASA Astrophysics Data System (ADS)

    Lazzarin, Alessandro; Orsi, Enrico; Sanfilippo, Umberto

    2017-08-01

    This paper presents a statistical analysis of experimental calibration data for flowmeters (electromagnetic, ultrasonic and turbine flowmeters) in pressure pipes. The experimental calibration data set consists of the whole archive of calibration tests carried out on 246 flowmeters from January 2001 to October 2015 at the Settore Portate of the Laboratorio di Idraulica “G. Fantoli” of Politecnico di Milano, which is accredited as LAT 104 for a flow range between 3 l/s and 80 l/s, with a certified Calibration and Measurement Capability (CMC) - formerly known as Best Measurement Capability (BMC) - equal to 0.2%. The data set is split into three subsets, consisting of 94 electromagnetic, 83 ultrasonic and 69 turbine flowmeters; each subset is analysed separately, and a final comparison is then carried out. In particular, the main focus of the statistical analysis is the correction C, that is, the difference between the flow rate Q measured by the calibration facility (through the accredited procedures and the certified reference specimen) and the flow rate QM simultaneously recorded by the flowmeter under calibration, expressed as a percentage of QM.
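
    Written out, the correction defined above is C = 100 (Q - QM) / QM. A one-line check, with hypothetical readings:

    ```python
    # Correction C: reference flow minus meter reading, as a percentage of QM.
    def correction_percent(q_ref: float, q_meter: float) -> float:
        return 100.0 * (q_ref - q_meter) / q_meter

    # e.g. reference 40.00 l/s against a meter reading of 39.80 l/s (made up)
    print(f"C = {correction_percent(40.00, 39.80):+.2f} %")  # +0.50 %
    ```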

  37. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  38. The space of ultrametric phylogenetic trees.

    PubMed

    Gavryushkin, Alex; Drummond, Alexei J

    2016-08-21

    The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce, and that the choice between them requires additional properties to be considered. In particular, the summary tree minimising the squared distance to the trees in the sample might differ between parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods.

  39. Impact of Integrated Science and English Language Arts Literacy Supplemental Instructional Intervention on Science Academic Achievement of Elementary Students

    NASA Astrophysics Data System (ADS)

    Marks, Jamar Terry

    The purpose of this quasi-experimental, nonequivalent pretest-posttest control group design study was to determine whether any differences existed in upper elementary school students' science academic achievement when instructed using an 8-week integrated science and English language arts literacy supplemental instructional intervention in conjunction with traditional science classroom instruction, as compared to instruction using solely traditional science classroom instruction. The targeted sample population consisted of fourth-grade students enrolled in a public elementary school located in the southeastern region of the United States. The convenience sample consisted of 115 fourth-grade students enrolled in science classes. The pretest and posttest academic achievement data consisted of the science segment of the Spring 2015 and Spring 2016 state standardized assessments. Pretest and posttest academic achievement data were analyzed using an ANCOVA statistical procedure to test for differences, and the researcher reported the results of the statistical analysis. The results of the study show no significant difference in science academic achievement between the treatment and control groups. An interpretation of the results and recommendations for future research were provided upon completion of the statistical analysis.
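
    For concreteness, a sketch of the ANCOVA named above: posttest science scores modelled on group (treatment vs. control) with the pretest score as covariate. The data are simulated and all variable names and effect sizes are assumptions.

    ```python
    # ANCOVA sketch: posttest ~ group + pretest (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 115                                # sample size reported in the study
    df = pd.DataFrame({
        "group": rng.integers(0, 2, n),    # 1 = integrated-literacy intervention
        "pretest": rng.normal(500, 50, n),
    })
    df["posttest"] = df["pretest"] + 5 * df["group"] + rng.normal(0, 30, n)

    fit = smf.ols("posttest ~ group + pretest", data=df).fit()
    print(fit.summary().tables[1])   # the group row tests the adjusted difference
    ```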

  40. Consistent Tolerance Bounds for Statistical Distributions

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    The assumption that a sample comes from a population with a particular distribution is made with confidence C if the data lie between certain bounds. These "confidence bounds" depend on C and on assumptions about the distribution of sampling errors around the regression line. Graphical test criteria using tolerance bounds are applied in industry where statistical analysis can influence product development and use. The method has been applied to evaluate equipment life.
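
    As a hedged sketch of a related standard construction (a one-sided normal tolerance bound, not the consistent-bounds theory itself): with confidence C, at least a proportion p of the population lies below x̄ + k·s, with k from the usual noncentral-t factor. The data are simulated.

    ```python
    # One-sided normal tolerance bound via the noncentral-t tolerance factor.
    import numpy as np
    from scipy import stats

    def tolerance_factor(n, p=0.95, conf=0.95):
        """k such that x_bar + k*s bounds proportion p with confidence conf."""
        delta = stats.norm.ppf(p) * np.sqrt(n)            # noncentrality
        return stats.nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)

    rng = np.random.default_rng(2)
    sample = rng.normal(loc=100.0, scale=5.0, size=30)    # e.g. strength data
    k = tolerance_factor(len(sample))
    print(f"upper tolerance bound = {sample.mean() + k * sample.std(ddof=1):.1f}")
    ```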

  41. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  42. Photon counting statistics analysis of biophotons from hands.

    PubMed

    Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup

    2003-05-01

    The photon counting statistics of biophotons emitted from hands are studied with a view to testing their agreement with the Poisson distribution. The moments of the observed probability up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the theoretical values of the Poisson distribution, while those of the dark counts of the photomultiplier tube show large deviations. The present results are consistent with the conventional delta-value analysis of the second moment of probability.
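
    A sketch of the comparison described: sample central moments of photon counts against the theoretical central moments of a Poisson law with the same mean. The counts below are simulated stand-ins for the measured data.

    ```python
    # Compare sample central moments of counts with Poisson theory.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    counts = rng.poisson(lam=4.2, size=5000)  # simulated photon counts
    lam = counts.mean()

    # central moments of a Poisson(lam) distribution: m2 = m3 = lam,
    # m4 = lam + 3*lam**2
    theory = {2: lam, 3: lam, 4: lam + 3 * lam**2}
    for order, expected in theory.items():
        observed = stats.moment(counts, moment=order)
        print(f"m{order}: observed {observed:8.3f}  Poisson {expected:8.3f}")
    # Dark counts with non-Poisson behaviour would show large deviations here.
    ```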

  43. Development of new on-line statistical program for the Korean Society for Radiation Oncology

    PubMed Central

    Song, Si Yeol; Ahn, Seung Do; Chung, Weon Kuu; Choi, Eun Kyung; Cho, Kwan Ho

    2015-01-01

    Purpose: To develop a new on-line statistical program for the Korean Society for Radiation Oncology (KOSRO) to collect and extract medical data in radiation oncology more efficiently. Materials and Methods: The statistical program is a web-based program. The directory was placed in a sub-folder of the KOSRO homepage and its web address is http://www.kosro.or.kr/asda. The server operating system is Linux and the web server is the Apache HTTP server. MySQL is adopted as the database (DB) server and the dedicated scripting language is PHP. Each ID and password are controlled independently, and all screen pages for data input or analysis are designed to be user-friendly. Scroll-down menus are used extensively for the convenience of users and the consistency of data analysis. Results: Year of data is one of the top categories, and the main topics include human resources, equipment, clinical statistics, specialized treatment and research achievement. Each topic or category has several subcategorized topics. A real-time on-line report of the analysis is produced immediately after data entry, and the administrator is able to monitor the status of data input at each hospital. Backups of the data as spreadsheets can be accessed by the administrator and used for academic work by any members of the KOSRO. Conclusion: The new on-line statistical program was developed to collect data from nationwide departments of radiation oncology. The intuitive screens and consistent input structure are expected to promote data entry by member hospitals, and the annual statistics should be a cornerstone of advances in radiation oncology. PMID:26157684

  44. Development of new on-line statistical program for the Korean Society for Radiation Oncology.

    PubMed

    Song, Si Yeol; Ahn, Seung Do; Chung, Weon Kuu; Shin, Kyung Hwan; Choi, Eun Kyung; Cho, Kwan Ho

    2015-06-01

    To develop a new on-line statistical program for the Korean Society for Radiation Oncology (KOSRO) to collect and extract medical data in radiation oncology more efficiently. The statistical program is a web-based program. The directory was placed in a sub-folder of the KOSRO homepage and its web address is http://www.kosro.or.kr/asda. The server operating system is Linux and the web server is the Apache HTTP server. MySQL is adopted as the database (DB) server and the dedicated scripting language is PHP. Each ID and password are controlled independently, and all screen pages for data input or analysis are designed to be user-friendly. Scroll-down menus are used extensively for the convenience of users and the consistency of data analysis. Year of data is one of the top categories, and the main topics include human resources, equipment, clinical statistics, specialized treatment and research achievement. Each topic or category has several subcategorized topics. A real-time on-line report of the analysis is produced immediately after data entry, and the administrator is able to monitor the status of data input at each hospital. Backups of the data as spreadsheets can be accessed by the administrator and used for academic work by any members of the KOSRO. The new on-line statistical program was developed to collect data from nationwide departments of radiation oncology. The intuitive screens and consistent input structure are expected to promote data entry by member hospitals, and the annual statistics should be a cornerstone of advances in radiation oncology.

  45. Recent Reliability Reporting Practices in "Psychological Assessment": Recognizing the People behind the Data

    ERIC Educational Resources Information Center

    Green, Carlton E.; Chen, Cynthia E.; Helms, Janet E.; Henze, Kevin T.

    2011-01-01

    Helms, Henze, Sass, and Mifsud (2006) defined good practices for internal consistency reporting, interpretation, and analysis consistent with an alpha-as-data perspective. Their viewpoint (a) expands on previous arguments that reliability coefficients are group-level summary statistics of samples' responses rather than stable properties of scales…

  46. Rock Statistics at the Mars Pathfinder Landing Site, Roughness and Roving on Mars

    NASA Technical Reports Server (NTRS)

    Haldemann, A. F. C.; Bridges, N. T.; Anderson, R. C.; Golombek, M. P.

    1999-01-01

    Several rock counts have been carried out at the Mars Pathfinder landing site producing consistent statistics of rock coverage and size-frequency distributions. These rock statistics provide a primary element of "ground truth" for anchoring remote sensing information used to pick the Pathfinder, and future, landing sites. The observed rock population statistics should also be consistent with the emplacement and alteration processes postulated to govern the landing site landscape. The rock population databases can however be used in ways that go beyond the calculation of cumulative number and cumulative area distributions versus rock diameter and height. Since the spatial parameters measured to characterize each rock are determined with stereo image pairs, the rock database serves as a subset of the full landing site digital terrain model (DTM). Insofar as a rock count can be carried out in a speedier, albeit coarser, manner than the full DTM analysis, rock counting offers several operational and scientific products in the near term. Quantitative rock mapping adds further information to the geomorphic study of the landing site, and can also be used for rover traverse planning. Statistical analysis of the surface roughness using the rock count proxy DTM is sufficiently accurate when compared to the full DTM to compare with radar remote sensing roughness measures, and with rover traverse profiles.
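
    To illustrate the size-frequency statistics mentioned above, here is a hedged sketch that fits the exponential model commonly used for Mars rock populations, F(D) = k exp(-q D), where F is the cumulative fraction of area covered by rocks of diameter at least D. All numbers below are synthetic, not Pathfinder data.

    ```python
    # Fit an exponential rock size-frequency model to a synthetic rock count.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    diameters = rng.exponential(0.3, 400)               # rock diameters, m (toy)
    areas = np.pi * (diameters / 2.0) ** 2
    site_area = 500.0                                   # mapped area, m^2 (assumed)

    d_grid = np.linspace(0.05, 1.5, 30)
    cum_frac = np.array([areas[diameters >= d].sum() / site_area for d in d_grid])

    def model(d, k, q):
        return k * np.exp(-q * d)

    (k, q), _ = curve_fit(model, d_grid, cum_frac, p0=(0.1, 2.0))
    print(f"total rock abundance k = {k:.3f}, decay q = {q:.2f} per m")
    ```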

  47. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    PubMed

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high-performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs ran about 14.4-15.9 times faster, and Unphased jobs 1.1-18.6 times faster, than the accumulated serial computation time. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
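
    A hedged sketch of the queuing pattern described: each window haplotype analysis becomes an independent job handed to the Grid Engine scheduler, so a serial package still fills the whole cluster. The wrapper name run_fbat and the window scheme are illustrative assumptions, not the authors' scripts.

    ```python
    # Generate one Grid Engine job per haplotype window (sketch).
    from pathlib import Path
    import subprocess

    LOCI = [f"locus{i}" for i in range(1, 27)]   # 26 loci, as in the study

    def windows(loci, max_size=3):
        """Consecutive windows of 1..max_size adjacent loci."""
        for size in range(1, max_size + 1):
            for start in range(len(loci) - size + 1):
                yield loci[start:start + size]

    for win in windows(LOCI):
        name = "fbat_" + "_".join(win)
        script = Path(f"{name}.sh")
        # 'run_fbat' stands in for whatever wrapper drives FBAT in batch mode
        script.write_text(f"#!/bin/sh\nrun_fbat {' '.join(win)}\n")
        # on the cluster, hand each window to the Grid Engine queue:
        # subprocess.run(["qsub", "-cwd", "-N", name, str(script)], check=True)
    ```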

  8. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs

    PubMed Central

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-01-01

    Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use grid computing technology to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4–15.9 times faster, and Unphased jobs 1.1–18.6 times faster, than the accumulated (serial) computation time. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045

  9. Single-case research design in pediatric psychology: considerations regarding data analysis.

    PubMed

    Cohen, Lindsey L; Feinstein, Amanda; Masuda, Akihiko; Vowles, Kevin E

    2014-03-01

    Single-case research allows for an examination of behavior and can demonstrate the functional relation between intervention and outcome in pediatric psychology. This review highlights key assumptions, methodological and design considerations, and options for data analysis. Single-case methodology and guidelines are reviewed with an in-depth focus on visual and statistical analyses. Guidelines allow for the careful evaluation of design quality and visual analysis. A number of statistical techniques have been introduced to supplement visual analysis, but to date, there is no consensus on their recommended use in single-case research design. Single-case methodology is invaluable for advancing pediatric psychology science and practice, and guidelines have been introduced to enhance the consistency, validity, and reliability of these studies. Experts generally agree that visual inspection is the optimal method of analysis in single-case design; however, statistical approaches are increasingly being evaluated and used to augment data interpretation.

  10. A Matlab user interface for the statistically assisted fluid registration algorithm and tensor-based morphometry

    NASA Astrophysics Data System (ADS)

    Yepes-Calderon, Fernando; Brun, Caroline; Sant, Nishita; Thompson, Paul; Lepore, Natasha

    2015-01-01

    Tensor-Based Morphometry (TBM) is an increasingly popular method for group analysis of brain MRI data. The main steps in the analysis consist of a nonlinear registration to align each individual scan to a common space, and a subsequent statistical analysis to determine morphometric differences, or differences in fiber structure, between groups. Recently, we implemented the Statistically-Assisted Fluid Registration Algorithm, or SAFIRA [1], which is designed for tracking morphometric differences among populations. To this end, SAFIRA allows the inclusion of statistical priors extracted from the populations being studied as regularizers in the registration. This flexibility and degree of sophistication limit the tool to expert use, even more so considering that SAFIRA was initially implemented in command line mode. Here, we introduce a new, intuitive, easy-to-use, Matlab-based graphical user interface for SAFIRA's multivariate TBM. The interface also generates different choices for the TBM statistics, including both the traditional univariate statistics on the Jacobian matrix and comparison of the full deformation tensors [2]. This software will be freely disseminated to the neuroimaging research community.

  11. Statistical analysis plan for the family-led rehabilitation after stroke in India (ATTEND) trial: A multicenter randomized controlled trial of a new model of stroke rehabilitation compared to usual care.

    PubMed

    Billot, Laurent; Lindley, Richard I; Harvey, Lisa A; Maulik, Pallab K; Hackett, Maree L; Murthy, Gudlavalleti Vs; Anderson, Craig S; Shamanna, Bindiganavale R; Jan, Stephen; Walker, Marion; Forster, Anne; Langhorne, Peter; Verma, Shweta J; Felix, Cynthia; Alim, Mohammed; Gandhi, Dorcas Bc; Pandian, Jeyaraj Durai

    2017-02-01

    Background In low- and middle-income countries, few patients receive organized rehabilitation after stroke, yet the burden of chronic diseases such as stroke is increasing in these countries. Affordable models of effective rehabilitation could have a major impact. The ATTEND trial is evaluating a family-led caregiver delivered rehabilitation program after stroke. Objective To publish the detailed statistical analysis plan for the ATTEND trial prior to trial unblinding. Methods Based upon the published registration and protocol, the blinded steering committee and management team, led by the trial statistician, have developed a statistical analysis plan. The plan has been informed by the chosen outcome measures, the data collection forms and knowledge of key baseline data. Results The resulting statistical analysis plan is consistent with best practice and will allow open and transparent reporting. Conclusions Publication of the trial statistical analysis plan reduces potential bias in trial reporting, and clearly outlines pre-specified analyses. Clinical Trial Registrations India CTRI/2013/04/003557; Australian New Zealand Clinical Trials Registry ACTRN1261000078752; Universal Trial Number U1111-1138-6707.

  12. K-means cluster analysis of tourist destination in special region of Yogyakarta using spatial approach and social network analysis (a case study: post of @explorejogja instagram account in 2016)

    NASA Astrophysics Data System (ADS)

    Iswandhani, N.; Muhajir, M.

    2018-03-01

    This research was conducted in the Department of Statistics, Islamic University of Indonesia. The data used are primary data obtained from posts of the @explorejogja Instagram account from January until December 2016. The @explorejogja account features many tourist destinations that can be visited by both domestic and foreign tourists; it is therefore useful to group these destinations into clusters based on the number of likes from Instagram users, taken here as a proxy for popularity. The purpose of this research is to identify the distribution of the most popular tourist spots, to form clusters of tourist destinations, and to determine the central popularity of tourist destinations based on the @explorejogja Instagram account in 2016. The statistical analyses used are descriptive statistics, k-means clustering, and social network analysis. The results include the top 10 most popular destinations in Yogyakarta, an HTML-based map of the tourist destination distribution consisting of 121 destination points, three clusters (cluster 1 with 52 destinations, cluster 2 with 9 destinations, and cluster 3 with 60 destinations), and the central popularity of tourist destinations in the Special Region of Yogyakarta by district.
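
    A minimal sketch of the clustering step, assuming hypothetical coordinates and like counts for the 121 destinations; scikit-learn's KMeans stands in for whatever implementation the authors used:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented stand-in data: 121 destinations with coordinates and like counts.
rng = np.random.default_rng(42)
coords = rng.uniform([-8.2, 110.0], [-7.5, 110.8], size=(121, 2))
likes = rng.poisson(500, size=121)

# Standardize features so location and popularity contribute comparably,
# then cluster into k = 3 groups as in the study.
X = np.column_stack([coords, likes])
X = (X - X.mean(axis=0)) / X.std(axis=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"cluster {k}: {(labels == k).sum()} destinations")
```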

  13. Symptom Clusters in Advanced Cancer Patients: An Empirical Comparison of Statistical Methods and the Impact on Quality of Life.

    PubMed

    Dong, Skye T; Costa, Daniel S J; Butow, Phyllis N; Lovell, Melanie R; Agar, Meera; Velikova, Galina; Teckle, Paulos; Tong, Allison; Tebbutt, Niall C; Clarke, Stephen J; van der Hoek, Kim; King, Madeleine T; Fayers, Peter M

    2016-01-01

    Symptom clusters in advanced cancer can influence patient outcomes. There is large heterogeneity in the methods used to identify symptom clusters. The aims were to investigate the consistency of symptom cluster composition in advanced cancer patients using different statistical methodologies for all patients across five primary cancer sites, and to examine which clusters predict functional status, a global assessment of health, and global quality of life. Principal component analysis and exploratory factor analysis (with different rotation and factor selection methods) and hierarchical cluster analysis (with different linkage and similarity measures) were used on a data set of 1562 advanced cancer patients who completed the European Organization for the Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. Four clusters consistently formed for many of the methods and cancer sites: tense-worry-irritable-depressed (emotional cluster), fatigue-pain, nausea-vomiting, and concentration-memory (cognitive cluster). The emotional cluster was a stronger predictor of overall quality of life than the other clusters. Fatigue-pain was a stronger predictor of overall health than the other clusters. The cognitive cluster and fatigue-pain predicted physical functioning, role functioning, and social functioning. The four identified symptom clusters were consistent across statistical methods and cancer types, although there were some noteworthy differences. Statistical derivation of symptom clusters is in need of greater methodological guidance. A psychosocial pathway in the management of symptom clusters may improve quality of life. Biological mechanisms underpinning symptom clusters need to be delineated by future research. A framework for evidence-based screening, assessment, treatment, and follow-up of symptom clusters in advanced cancer is essential. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  14. OSO 8 observational limits to the acoustic coronal heating mechanism

    NASA Technical Reports Server (NTRS)

    Bruner, E. C., Jr.

    1981-01-01

    An improved analysis of time-resolved line profiles of the C IV resonance line at 1548 A has been used to test the acoustic wave hypothesis of solar coronal heating. It is shown that the observed motions and brightness fluctuations are consistent with the existence of acoustic waves. Specific account is taken of the effect of photon statistics on the observed velocities, and a test is devised to determine whether the motions represent propagating or evanescent waves. It is found that on the average about as much energy is carried upward as downward such that the net acoustic flux density is statistically consistent with zero. The statistical uncertainty in this null result is three orders of magnitude lower than the flux level needed to heat the corona.

  15. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  16. The Analysis of Organizational Diagnosis on Based Six Box Model in Universities

    ERIC Educational Resources Information Center

    Hamid, Rahimi; Siadat, Sayyed Ali; Reza, Hoveida; Arash, Shahin; Ali, Nasrabadi Hasan; Azizollah, Arbabisarjou

    2011-01-01

    Purpose: To analyze organizational diagnosis based on the six-box model at universities. Research method: The research method was a descriptive survey. The statistical population consisted of 1544 faculty members of universities, from whom 218 persons were chosen as the sample through random stratified sampling. The research instrument was an organizational…

  17. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether they are used consistently and based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012
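
    The assumption checks the survey finds missing are straightforward to run; a minimal sketch with simulated dose-group data using SciPy, with group sizes chosen to match the modal n = 6 reported above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical endpoint measured in three dose groups, n = 6 per group.
groups = [rng.normal(loc=mu, scale=1.0, size=6) for mu in (10.0, 11.0, 13.0)]

# Normality within each group (Shapiro-Wilk)...
for i, g in enumerate(groups):
    w, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p:.3f}")

# ...and homogeneity of variances (Levene) before the one-way ANOVA.
_, p_levene = stats.levene(*groups)
f_stat, p_anova = stats.f_oneway(*groups)
print(f"Levene p = {p_levene:.3f}, ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```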

  18. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether they are used consistently and based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.

  19. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.

  20. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.

    PubMed

    Gao, Yi; Bouix, Sylvain

    2016-05-01

    Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on the volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation-based methods when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results, each having its strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281

  2. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.
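
    A toy version of the run-to-run consistency check described above, using scikit-learn's FastICA on a simulated mixture rather than fMRI data:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy source-separation problem standing in for fMRI spatial maps.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t)), rng.laplace(size=t.size)]
mixed = sources @ rng.normal(size=(3, 3))

# Re-run FastICA with different initializations and check that recovered
# components agree up to sign and permutation across runs.
runs = [FastICA(n_components=3, random_state=seed, max_iter=1000).fit_transform(mixed)
        for seed in range(5)]

def normalize(c):
    c = c - c.mean(axis=0)
    return c / np.linalg.norm(c, axis=0)

ref = normalize(runs[0])
for r, est in enumerate(runs[1:], start=2):
    corr = np.abs(ref.T @ normalize(est))     # |correlation| between components
    print(f"run {r}: best |corr| per component = {corr.max(axis=1).round(3)}")
```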

  3. Exploratory statistical and geographical freight traffic data analysis

    DOT National Transportation Integrated Search

    2000-08-01

    Data from freight traffic roadside surveys in Mexican highways are analyzed in order to find consistent patterns or systematic relationships between variables characterizing this traffic. Patterns traced are validated by contrasting against new data ...

  4. Development of Consistency between Marketing and Planning.

    ERIC Educational Resources Information Center

    Williford, A. Michael

    1986-01-01

    Examined descriptive information about marketing, enrollment management, institutional planning and factors affecting them. A factor analysis of statistically appropriate variables identified factors associated with a state of symbiosis between marketing and institutional planning. (Author/BL)

  5. The Job Dimensions Underlying the Job Elements of the Position Analysis Questionnaire (PAQ) (Form B). Report No. 4.

    ERIC Educational Resources Information Center

    Marquardt, Lloyd D.; McCormick, Ernest J.

    This study was concerned with the identification of the job dimension underlying the job elements of the Position Analysis Questionnaire (PAQ), Form B. The PAQ is a structured job analysis instrument consisting of 187 worker-oriented job elements which are divided into six a priori major divisions. The statistical procedure of principal components…

  6. Consistency of performance of robot-assisted surgical tasks in virtual reality.

    PubMed

    Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N

    2009-01-01

    The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.
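
    The abstract does not specify the exact nonlinear measures beyond regularity and divergence; below is a compact, slightly simplified sample-entropy sketch of the kind often used to quantify movement regularity, applied to synthetic trajectories rather than the study's instrument-tip data:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Compact sample entropy: lower values indicate more regular movement."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def match_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)   # Chebyshev distance
        return (d <= r).sum() - len(emb)                          # drop self-matches

    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(3)
t = np.linspace(0, 20, 600)
smooth = np.sin(t) + 0.05 * rng.normal(size=t.size)   # consistent motion
erratic = np.sin(t) + 0.5 * rng.normal(size=t.size)   # irregular motion
print(f"SampEn: smooth = {sample_entropy(smooth):.3f}, erratic = {sample_entropy(erratic):.3f}")
```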

  7. Quantile regression for the statistical analysis of immunological data with many non-detects.

    PubMed

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
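
    A minimal sketch of the approach with simulated data (variable names invented): non-detects are recorded at the detection limit, and the conditional median is estimated with statsmodels' quantile regression, which is unaffected by the substitution as long as censoring stays below the modeled quantile:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated immunological measurements with a lower detection limit (LOD);
# values below the LOD ("non-detects") are recorded at the LOD itself.
rng = np.random.default_rng(11)
n = 120
dose = rng.uniform(0, 10, n)
conc = np.exp(0.8 + 0.15 * dose + rng.normal(scale=0.6, size=n))
LOD = 3.0
conc_obs = np.maximum(conc, LOD)

# Median regression on the log scale; the fitted dose trend is recovered
# despite the censored lower tail.
df = pd.DataFrame({"dose": dose, "conc": conc_obs})
fit = smf.quantreg("np.log(conc) ~ dose", df).fit(q=0.5)
print(fit.summary().tables[1])
```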

  8. A Comparison of Imputation Methods for Bayesian Factor Analysis Models

    ERIC Educational Resources Information Center

    Merkle, Edgar C.

    2011-01-01

    Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amiable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…

  9. The forest inventory and analysis database description and users manual version 1.0

    Treesearch

    Patrick D. Miles; Gary J. Brand; Carol L. Alerich; Larry F. Bednar; Sharon W. Woudenberg; Joseph F. Glover; Edward N. Ezell

    2001-01-01

    Describes the structure of the Forest Inventory and Analysis Database (FIADB) and provides information on generating estimates of forest statistics from these data. The FIADB structure provides a consistent framework for storing forest inventory data across all ownerships across the entire United States. These data are available to the public.

  10. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. Meanwhile, a guideline that uses a p-value less than 0.05 is widely relied on as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis, and eight subsamples for each statistical analysis, were analyzed. The results show that the statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  11. Coupling strength assumption in statistical energy analysis

    PubMed Central

    Lafont, T.; Totaro, N.

    2017-01-01

    This paper is a discussion of the hypothesis of weak coupling in statistical energy analysis (SEA). The examples of coupled oscillators and statistical ensembles of coupled plates excited by broadband random forces are discussed. In each case, a reference calculation is compared with the SEA calculation. First, it is shown that the main SEA relation, the coupling power proportionality, is always valid for two oscillators irrespective of the coupling strength. But the case of three subsystems, consisting of oscillators or ensembles of plates, indicates that the coupling power proportionality fails when the coupling is strong. Strong coupling leads to non-zero indirect coupling loss factors and, sometimes, even to a reversal of the energy flow direction from low to high vibrational temperature. PMID:28484335
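
    For reference, the coupling power proportionality discussed here can be written in its standard SEA form (with ω the center frequency, η12 the coupling loss factor, n1, n2 the modal densities, and E1, E2 the subsystem energies):

```latex
% Coupling power proportionality (standard SEA form): power flow between
% subsystems 1 and 2 is proportional to their modal-energy difference.
P_{12} = \omega \, \eta_{12} \, n_1 \left( \frac{E_1}{n_1} - \frac{E_2}{n_2} \right)
```

    The paper's finding is that this relation holds for two oscillators regardless of coupling strength, but fails for three or more subsystems under strong coupling, where indirect coupling loss factors become non-zero.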

  12. Text grouping in patent analysis using adaptive K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Shanie, Tiara; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Patents are a form of intellectual property. Analyzing patents is necessary to understand the development of technology in each country and in the world. This study uses patent documents about green tea obtained from the Espacenet server. Patent documents related to tea technology are widespread, making information retrieval (IR) difficult for users. It is therefore necessary to categorize the documents into groups of related terms. This study applies statistical text mining to green tea patent title data in two phases: a data preparation phase, which uses text mining methods, and a data analysis phase, which is carried out statistically. The statistical analysis in this study uses a cluster analysis algorithm, the adaptive k-means clustering algorithm. The results show that, based on the maximum silhouette value, 87 clusters were generated, associated with fifteen terms, which can be utilized for information retrieval.
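
    A minimal sketch of silhouette-guided selection of k over TF-IDF vectors of patent titles; the corpus below is invented, and scikit-learn stands in for the authors' implementation:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

# Invented stand-in corpus of patent titles.
titles = [
    "green tea extract beverage", "tea polyphenol extraction process",
    "green tea catechin capsule", "cold brew tea apparatus",
    "tea leaf fermentation control", "green tea skin cream composition",
    "matcha powder grinding mill", "tea bag packaging machine",
]

X = TfidfVectorizer(stop_words="english").fit_transform(titles)

# "Adaptive" choice of k: keep the clustering with the maximum silhouette.
best_k, best_sil = None, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil = silhouette_score(X, labels)
    if sil > best_sil:
        best_k, best_sil = k, sil
print(f"chosen k = {best_k} (silhouette = {best_sil:.3f})")
```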

  13. Measure Projection Analysis: A Probabilistic Approach to EEG Source Comparison and Multi-Subject Inference

    PubMed Central

    Bigdely-Shamlo, Nima; Mullen, Tim; Kreutz-Delgado, Kenneth; Makeig, Scott

    2013-01-01

    A crucial question for the analysis of multi-subject and/or multi-session electroencephalographic (EEG) data is how to combine information across multiple recordings from different subjects and/or sessions, each associated with its own set of source processes and scalp projections. Here we introduce a novel statistical method for characterizing the spatial consistency of EEG dynamics across a set of data records. Measure Projection Analysis (MPA) first finds voxels in a common template brain space at which a given dynamic measure is consistent across nearby source locations, then computes local-mean EEG measure values for this voxel subspace using a statistical model of source localization error and between-subject anatomical variation. Finally, clustering the mean measure voxel values in this locally consistent brain subspace finds brain spatial domains exhibiting distinguishable measure features and provides 3-D maps plus statistical significance estimates for each EEG measure of interest. Applied to sufficient high-quality data, the scalp projections of many maximally independent component (IC) processes contributing to recorded high-density EEG data closely match the projection of a single equivalent dipole located in or near brain cortex. We demonstrate the application of MPA to a multi-subject EEG study decomposed using independent component analysis (ICA), compare the results to k-means IC clustering in EEGLAB (sccn.ucsd.edu/eeglab), and use surrogate data to test MPA robustness. A Measure Projection Toolbox (MPT) plug-in for EEGLAB is available for download (sccn.ucsd.edu/wiki/MPT). Together, MPA and ICA allow use of EEG as a 3-D cortical imaging modality with near-cm scale spatial resolution. PMID:23370059

  14. Confirmatory Factor Analysis of Persian Adaptation of Multidimensional Students' Life Satisfaction Scale (MSLSS)

    ERIC Educational Resources Information Center

    Hatami, Gissou; Motamed, Niloofar; Ashrafzadeh, Mahshid

    2010-01-01

    Validity and reliability of Persian adaptation of MSLSS in the 12-18 years, middle and high school students (430 students in grades 6-12 in Bushehr port, Iran) using confirmatory factor analysis by means of LISREL statistical package were checked. Internal consistency reliability estimates (Cronbach's coefficient [alpha]) were all above the…

  15. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY for 13 years to report firm statistical laws in distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.

  16. Root Cause Analysis of Quality Defects Using HPLC-MS Fingerprint Knowledgebase for Batch-to-batch Quality Control of Herbal Drugs.

    PubMed

    Yan, Binjun; Fang, Zhonghua; Shen, Lijuan; Qu, Haibin

    2015-01-01

    The batch-to-batch quality consistency of herbal drugs has always been an important issue. The aim was to propose a methodology for batch-to-batch quality control based on HPLC-MS fingerprints and a process knowledgebase. The extraction process of Compound E-jiao Oral Liquid was taken as a case study. After establishing the HPLC-MS fingerprint analysis method, the fingerprints of the extract solutions produced under normal and abnormal operation conditions were obtained. Multivariate statistical models were built for fault detection and a discriminant analysis model was built using the probabilistic discriminant partial-least-squares method for fault diagnosis. Based on multivariate statistical analysis, process knowledge was acquired and the cause-effect relationship between process deviations and quality defects was revealed. The quality defects were detected successfully by multivariate statistical control charts and the types of process deviations were diagnosed correctly by discriminant analysis. This work has demonstrated the benefits of combining HPLC-MS fingerprints, process knowledge and multivariate analysis for the quality control of herbal drugs. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    PubMed

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  18. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    PubMed

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  19. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    PubMed

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is freely available through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. MTS dye based colorimetric CTLL-2 cell proliferation assay for product release and stability monitoring of interleukin-15: assay qualification, standardization and statistical analysis.

    PubMed

    Soman, Gopalan; Yang, Xiaoyi; Jiang, Hengguang; Giardina, Steve; Vyas, Vinay; Mitra, George; Yovandich, Jason; Creekmore, Stephen P; Waldmann, Thomas A; Quiñones, Octavio; Alvord, W Gregory

    2009-08-31

    A colorimetric cell proliferation assay using soluble tetrazolium salt [CellTiter 96® Aqueous One Solution cell proliferation reagent, containing the (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt) and an electron coupling reagent phenazine ethosulfate], was optimized and qualified for quantitative determination of IL-15 dependent CTLL-2 cell proliferation activity. An in-house recombinant Human (rHu)IL-15 reference lot was standardized (IU/mg) against an international reference standard. Specificity of the assay for IL-15 was documented by illustrating the ability of neutralizing anti-IL-15 antibodies to block the product specific CTLL-2 cell proliferation and the lack of blocking effect with anti-IL-2 antibodies. Under the defined assay conditions, the linear dose-response concentration range was between 0.04 and 0.17 ng/ml of the rHuIL-15 produced in-house and 0.5-3.0 IU/ml for the international standard. Statistical analysis of the data was performed with the use of scripts written in the R Statistical Language and Environment utilizing a four-parameter logistic regression fit analysis procedure. The overall variation in the ED50 values for the in-house reference standard from 55 independent estimates performed over the period of 1 year was 12.3% of the average. Excellent intra-plate and within-day/inter-plate consistency was observed for all four parameter estimates in the model. Different preparations of rHuIL-15 showed excellent intra-plate consistency in the parameter estimates corresponding to the lower and upper asymptotes as well as to the 'slope' factor at the mid-point. The ED50 values showed statistically significant differences for different lots and for control versus stressed samples. Three R-scripts improve data analysis capabilities allowing one to describe assay variations, to draw inferences between data sets from formal statistical tests, and to set up improved assay acceptance criteria based on comparability and consistency in the four parameters of the model. The assay is precise, accurate and robust and can be fully validated. Applications of the assay were established including process development support, release of the rHuIL-15 product for pre-clinical and clinical studies, and for monitoring storage stability.
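
    A minimal sketch of a four-parameter logistic fit with an ED50 estimate, using SciPy on invented dose-response values (the paper's own analysis used R scripts, not this code):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ed50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ed50) ** slope)

# Invented dose-response points (concentration in ng/ml vs absorbance).
conc = np.array([0.02, 0.04, 0.06, 0.09, 0.13, 0.17, 0.25])
resp = np.array([0.15, 0.35, 0.70, 1.10, 1.45, 1.60, 1.68])

p0 = [resp.min(), resp.max(), 0.08, -2.0]   # rough starting values
params, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10000)
print(f"estimated ED50 = {params[2]:.3f} ng/ml, slope = {params[3]:.2f}")
```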

  1. Insights into Corona Formation through Statistical Analyses

    NASA Technical Reports Server (NTRS)

    Glaze, L. S.; Stofan, E. R.; Smrekar, S. E.; Baloga, S. M.

    2002-01-01

    Statistical analysis of an expanded database of coronae on Venus indicates that the populations of Type 1 (with fracture annuli) and Type 2 (without fracture annuli) corona diameters are statistically indistinguishable, and therefore we have no basis for assuming different formation mechanisms. Analysis of the topography and diameters of coronae shows that coronae that are depressions, rimmed depressions, and domes tend to be significantly smaller than those that are plateaus, rimmed plateaus, or domes with surrounding rims. This is consistent with the model of Smrekar and Stofan and inconsistent with predictions of the spreading drop model of Koch and Manga. The diameter range for domes, the initial stage of corona formation, provides a broad constraint on the buoyancy of corona-forming plumes. Coronae are only slightly more likely to be topographically raised than depressions, with Type 1 coronae most frequently occurring as rimmed depressions and Type 2 coronae most frequently occurring with flat interiors and raised rims. Most Type 1 coronae are located along chasmata systems or fracture belts, while Type 2 coronae are found predominantly as isolated features in the plains. Coronae at hotspot rises tend to be significantly larger than coronae in other settings, consistent with a hotter upper mantle at hotspot rises and their active state.

  2. Prevalence of dental attrition in in vitro fertilization children of West Bengal

    PubMed Central

    Kar, Sudipta; Sarkar, Subrata; Mukherjee, Ananya

    2014-01-01

    CONTEXT: Dental attrition is one of the problems affecting tooth structure. It may affect both in vitro fertilization (IVF) and spontaneously conceived children. AIMS: This study aimed to evaluate and compare the prevalence of dental attrition in the deciduous dentition of IVF and spontaneously conceived children. SETTINGS AND DESIGN: In a cross-sectional case-control study, the dental attrition status of 3-5-year-old children was assessed. The case group consisted of term, singleton babies who were the outcome of IVF in the studied area in 2009. SUBJECTS AND METHODS: The control group consisted of term, first-child, singleton, spontaneously conceived 3-5-year-old children who were also residents of the studied area. A sample of 153 IVF and 153 spontaneously conceived children was examined according to the Hansson and Nilner classification. STATISTICAL ANALYSIS USED: Statistical analysis was carried out using Chi-square (χ2) or Z tests. RESULTS: No statistically significant difference was found between the study group (IVF children) and the control group (spontaneously conceived children). CONCLUSIONS: IVF children are comparable to spontaneously conceived children with respect to dental attrition status. PMID:24829529

  3. The Ontology of Biological and Clinical Statistics (OBCS) for standardized and reproducible statistical analysis.

    PubMed

    Zheng, Jie; Harris, Marcelline R; Masci, Anna Maria; Lin, Yu; Hero, Alfred; Smith, Barry; He, Yongqun

    2016-09-14

    Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs . The Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.

  4. Gender subordination in the vulnerability of women to domestic violence.

    PubMed

    Macedo Piosiadlo, Laura Christina; Godoy Serpa da Fonseca, Rosa Maria

    2016-06-01

    The aim was to create and validate an instrument that identifies women's vulnerability to domestic violence through gender subordination indicators in the family. An instrument consisting of 61 phrases indicating gender subordination in the family was created. After assessment by ten judges, 34 phrases were validated. The approved version was administered to 321 health service users of São José dos Pinhais (Paraná, Brazil), along with the validated Portuguese version of the Abuse Assessment Screen (AAS), used to separate the sample into groups: the "YES" group comprised women who had suffered violence and the "NO" group women who had not. Data were transferred into the Statistical Package for the Social Sciences (SPSS) software, version 22, and quantitatively analyzed using exploratory and factor analysis and tests for internal consistency. After analysis (Kaiser-Meyer-Olkin (KMO) statistics, Monte Carlo principal components analysis (PCA), and diagram segmentation), two factors were identified: F1, consisting of phrases related to home maintenance and family structure; and F2, phrases intrinsic to the couple's relationship. For the statements that reinforce gender subordination, the means of the factors were higher in the group that answered YES to one of the violence-identifying questions. The instrument created was able to identify women who were vulnerable to domestic violence using gender subordination indicators. This could be an important tool for nurses and other professionals in multidisciplinary teams to organize and plan actions to prevent violence against women.

  5. Finding the Root Causes of Statistical Inconsistency in Community Earth System Model Output

    NASA Astrophysics Data System (ADS)

    Milroy, D.; Hammerling, D.; Baker, A. H.

    2017-12-01

    Baker et al (2015) developed the Community Earth System Model Ensemble Consistency Test (CESM-ECT) to provide a metric for software quality assurance by determining statistical consistency between an ensemble of CESM outputs and new test runs. The test has proved useful for detecting statistical difference caused by compiler bugs and errors in physical modules. However, detection is only the necessary first step in finding the causes of statistical difference. The CESM is a vastly complex model comprised of millions of lines of code which is developed and maintained by a large community of software engineers and scientists. Any root cause analysis is correspondingly challenging. We propose a new capability for CESM-ECT: identifying the sections of code that cause statistical distinguishability. The first step is to discover CESM variables that cause CESM-ECT to classify new runs as statistically distinct, which we achieve via Randomized Logistic Regression. Next we use a tool developed to identify CESM components that define or compute the variables found in the first step. Finally, we employ the application Kernel GENerator (KGEN) created in Kim et al (2016) to detect fine-grained floating point differences. We demonstrate an example of the procedure and advance a plan to automate this process in our future work.
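
    The randomized-logistic-regression step can be approximated by stability selection: fit an L1-penalized logistic model on many random subsamples and keep the variables selected most often. A sketch on synthetic data (the variable layout is invented, not actual CESM output):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented layout: rows are CESM runs, columns are output variables, and y
# marks runs that CESM-ECT classified as statistically distinct.
rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = (X[:, 3] - 0.8 * X[:, 17] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Stability selection: fit an L1-penalized logistic model on many random
# subsamples and count how often each variable receives a nonzero weight.
counts = np.zeros(p)
for _ in range(100):
    idx = rng.choice(n, size=n // 2, replace=False)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X[idx], y[idx])
    counts += np.abs(clf.coef_[0]) > 1e-8

top = np.argsort(counts)[::-1][:5]
print("most stable variables:", top, "selection frequency:", counts[top] / 100)
```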

  6. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    NASA Astrophysics Data System (ADS)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  7. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception in learning statistics, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs have been identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire adopted and adapted from the reliable Survey of Attitudes towards Statistics (SATS©) instrument. The study was conducted with undergraduate engineering students at a university on the East Coast of Malaysia. The respondents consist of students who were taking the applied statistics course from different faculties. The results are analysed descriptively and contribute to a descriptive understanding of students' attitudes towards the teaching and learning process of statistics.

  8. The Job Dimensions Underlying the Job Elements of the Position Analysis Questionnaire (PAQ) (Form B).

    DTIC Science & Technology

    The study was concerned with the identification of the job dimensions underlying the job elements of the Position Analysis Questionnaire (PAQ), Form B. The PAQ is a structured job analysis instrument consisting of 187 worker-oriented job elements which are divided into six a priori major divisions. The statistical procedure of principal components analysis was used to identify the job dimensions of the PAQ. Forty-five job dimensions were…

  9. Students' Successes and Challenges Applying Data Analysis and Measurement Skills in a Fifth-Grade Integrated STEM Unit

    ERIC Educational Resources Information Center

    Glancy, Aran W.; Moore, Tamara J.; Guzey, Selcen; Smith, Karl A.

    2017-01-01

    An understanding of statistics and skills in data analysis are becoming more and more essential, yet research consistently shows that students struggle with these concepts at all levels. This case study documents some of the struggles four groups of fifth-grade students encounter as they collect, organize, and interpret data and then ultimately…

  10. How Will DSM-5 Affect Autism Diagnosis? A Systematic Literature Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Kulage, Kristine M.; Smaldone, Arlene M.; Cohn, Elizabeth G.

    2014-01-01

    We conducted a systematic review and meta-analysis to determine the effect of changes to the Diagnostic and Statistical Manual (DSM)-5 on autism spectrum disorder (ASD) and explore policy implications. We identified 418 studies; 14 met inclusion criteria. Studies consistently reported decreases in ASD diagnosis (range 7.3-68.4%) using DSM-5…

  11. Fully Bayesian Estimation of Data from Single Case Designs

    ERIC Educational Resources Information Center

    Rindskopf, David

    2013-01-01

    Single case designs (SCDs) generally consist of a small number of short time series in two or more phases. The analysis of SCDs statistically fits in the framework of a multilevel model, or hierarchical model. The usual analysis does not take into account the uncertainty in the estimation of the random effects. This not only has an effect on the…

  12. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.
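
For readers unfamiliar with the mechanics, here is a minimal sketch of two of the steps named above, internal consistency and principal components analysis; the data are placeholders, not the study's responses:

```python
# Cronbach's alpha and a PCA component count for a 15-item scale.
# `X` is an (n x 15) array of item responses; values are synthetic.
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(398, 15)).astype(float)  # placeholder Likert data
print("alpha =", cronbach_alpha(X))

# PCA on the item correlation matrix: eigenvalues > 1 suggest retained components
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
print("components with eigenvalue > 1:", (eigvals > 1).sum())
```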

  13. Comparative analysis of positive and negative attitudes toward statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested to know the perception of their students' attitudes toward statistics during the statistics course. In statistics course, positive attitude toward statistics is a vital because it will be encourage students to get interested in the statistics course and in order to master the core content of the subject matters under study. Although, students who have negative attitudes toward statistics they will feel depressed especially in the given group assignment, at risk for failure, are often highly emotional, and could not move forward. Therefore, this study investigates the students' attitude towards learning statistics. Six latent constructs have been the measurement of students' attitudes toward learning statistic such as affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validate instrument of Survey of Attitudes towards Statistics (SATS). This study is conducted among engineering undergraduate engineering students in the university Malaysia Pahang (UMP). The respondents consist of students who were taking the applied statistics course from different faculties. From the analysis, it is found that the questionnaire is acceptable and the relationships among the constructs has been proposed and investigated. In this case, students show full effort to master the statistics course, feel statistics course enjoyable, have confidence that they have intellectual capacity, and they have more positive attitudes then negative attitudes towards statistics learning. In conclusion in terms of affect, cognitive competence, value, interest and effort construct the positive attitude towards statistics was mostly exhibited. While negative attitudes mostly exhibited by difficulty construct.

  14. A new statistic for identifying batch effects in high-throughput genomic data that uses guided principal component analysis.

    PubMed

    Reese, Sarah E; Archer, Kellie J; Therneau, Terry M; Atkinson, Elizabeth J; Vachon, Celine M; de Andrade, Mariza; Kocher, Jean-Pierre A; Eckel-Passow, Jeanette E

    2013-11-15

    Batch effects are due to probe-specific systematic variation between groups of samples (batches) resulting from experimental features that are not of biological interest. Principal component analysis (PCA) is commonly used as a visual tool to determine whether batch effects exist after applying a global normalization method. However, PCA yields linear combinations of the variables that contribute maximum variance and thus will not necessarily detect batch effects if they are not the largest source of variability in the data. We present an extension of PCA to quantify the existence of batch effects, called guided PCA (gPCA). We describe a test statistic that uses gPCA to test whether a batch effect exists. We apply our proposed test statistic derived using gPCA to simulated data and to two copy number variation case studies: the first study consisted of 614 samples from a breast cancer family study using Illumina Human 660 bead-chip arrays, whereas the second case study consisted of 703 samples from a family blood pressure study that used Affymetrix SNP Array 6.0. We demonstrate that our statistic has good statistical properties and is able to identify significant batch effects in two copy number variation case studies. We developed a new statistic that uses gPCA to identify whether batch effects exist in high-throughput genomic data. Although our examples pertain to copy number data, gPCA is general and can be used on other data types as well. The gPCA R package (Available via CRAN) provides functionality and data to perform the methods in this article. reesese@vcu.edu
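
A hedged sketch of the core idea as we read it (not the gPCA R package itself): compare the variance captured along a batch-guided principal direction with that of the ordinary first principal component, and calibrate the ratio by permuting batch labels. Scaling and centering details here are assumptions.

```python
# Guided PCA (gPCA) batch-effect test, sketched: delta = variance along the
# batch-guided first principal direction / variance along the ordinary first
# PC; p-value from a permutation of batch labels.
import numpy as np

def first_pc_variance(M, data):
    # leading right singular vector of M, then variance of data projected on it
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    scores = data @ vt[0]
    return scores.var(ddof=1)

def gpca_delta(data, batch):
    data = data - data.mean(axis=0)
    groups = np.unique(batch)
    X = (batch[:, None] == groups[None, :]).astype(float)  # n x b indicator
    guided = first_pc_variance(X.T @ data, data)    # batch-guided direction
    unguided = first_pc_variance(data, data)        # ordinary first PC
    return guided / unguided

def gpca_test(data, batch, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = gpca_delta(data, batch)
    perm = [gpca_delta(data, rng.permutation(batch)) for _ in range(n_perm)]
    p = (1 + sum(d >= obs for d in perm)) / (n_perm + 1)
    return obs, p

rng = np.random.default_rng(1)
demo = rng.standard_normal((40, 100))
demo[:20] += 0.5                                    # synthetic batch shift
batch = np.repeat(["b1", "b2"], 20)
print(gpca_test(demo, batch))
```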

  15. Quality and Consistency of the NASA Ocean Color Data Record

    NASA Technical Reports Server (NTRS)

    Franz, Bryan A.

    2012-01-01

    The NASA Ocean Biology Processing Group (OBPG) recently reprocessed the multimission ocean color time-series from SeaWiFS, MODIS-Aqua, and MODIS-Terra using common algorithms and improved instrument calibration knowledge. Here we present an analysis of the quality and consistency of the resulting ocean color retrievals, including spectral water-leaving reflectance, chlorophyll a concentration, and diffuse attenuation. Statistical analysis of satellite retrievals relative to in situ measurements will be presented for each sensor, as well as an assessment of consistency in the global time-series for the overlapping periods of the missions. Results will show that the satellite retrievals are in good agreement with in situ measurements, and that the sensor ocean color data records are highly consistent over the common mission lifespan for the global deep oceans, but with degraded agreement in higher productivity, higher complexity coastal regions.

  16. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design new studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831

  17. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible. PMID:23637895

  18. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    NASA Astrophysics Data System (ADS)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
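
As an illustration of this kind of workflow (the preprocessing choices, oxide list, and cluster count here are assumptions, not the authors' exact settings):

```python
# Standardize major-element compositions, reduce with PCA, then cluster the
# samples; labels can be mapped back to sample coordinates for spatial study.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
oxides = ["SiO2", "TiO2", "Al2O3", "FeOt", "MgO", "CaO", "Na2O", "K2O"]
X = rng.random((908, len(oxides)))             # placeholder for the 908 analyses

Z = StandardScaler().fit_transform(X)          # put oxides on a common scale
scores = PCA(n_components=3).fit_transform(Z)  # main axes of compositional variation
labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(scores)
print(np.bincount(labels))                     # cluster sizes
```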

  19. Questionable assumptions hampered interpretation of a network meta-analysis of primary care depression treatments.

    PubMed

    Linde, Klaus; Rücker, Gerta; Schneider, Antonius; Kriston, Levente

    2016-03-01

    We aimed to evaluate the underlying assumptions of a network meta-analysis investigating which depression treatment works best in primary care and to highlight challenges and pitfalls of interpretation under consideration of these assumptions. We reviewed 100 randomized trials investigating pharmacologic and psychological treatments for primary care patients with depression. Network meta-analysis was carried out within a frequentist framework using response to treatment as outcome measure. Transitivity was assessed by epidemiologic judgment based on theoretical and empirical investigation of the distribution of trial characteristics across comparisons. Homogeneity and consistency were investigated by decomposing the Q statistic. There were clinically important and statistically significant differences between "pure" drug trials comparing pharmacologic substances with each other or placebo (63 trials) and trials including a psychological treatment arm (37 trials). Overall network meta-analysis produced results well comparable with separate meta-analyses of drug trials and psychological trials. Although the homogeneity and consistency assumptions were mostly met, we considered the transitivity assumption unjustifiable. An exchange of experience between reviewers and, if possible, some guidance on how reviewers addressing important clinical questions can proceed in situations where important assumptions for valid network meta-analysis are not met would be desirable. Copyright © 2016 Elsevier Inc. All rights reserved.
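
Decomposing Q in a network setting is involved; as a simpler hedged illustration of the Q statistic itself, here is Cochran's Q for a set of pairwise trial estimates (numbers invented):

```python
# Inverse-variance fixed-effect pooling and Cochran's Q for per-trial effect
# estimates (log odds ratios here; values are made up).
import numpy as np
from scipy.stats import chi2

theta = np.array([0.30, 0.15, 0.45, 0.10])  # per-trial effect estimates
se = np.array([0.12, 0.10, 0.20, 0.15])     # their standard errors

w = 1 / se**2
pooled = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - pooled) ** 2)       # ~ chi2 with k-1 df under homogeneity
p = chi2.sf(Q, df=len(theta) - 1)
print(f"Q = {Q:.2f}, p = {p:.3f}")
# Network meta-analysis decomposes an analogous Q into within-design
# (heterogeneity) and between-design (inconsistency) parts.
```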

  1. Potential Costs of Veterans’ Health Care

    DTIC Science & Technology

    2010-10-01

    coverage, there is no rigid mathematical relationship among those proportions because veterans enrolled in Part A may choose to enroll in either Part B or...assumption is consistent with the statistical analysis by an actuarial firm with which VA contracted when developing its model for projecting

  2. Likert scales, levels of measurement and the "laws" of statistics.

    PubMed

    Norman, Geoff

    2010-12-01

    Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, the use of parametric methods such as analysis of variance, regression, and correlation is frequently faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments, and show that many studies, dating back to the 1930s, consistently show that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be utilized without concern for "getting the wrong answer".
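
A small simulation in the spirit of the studies the paper reviews: apply an ordinary t-test to ordinal five-point responses under the null and check that the empirical type I error stays near the nominal level (the setup is illustrative):

```python
# t-test on 5-point Likert responses (ordinal, non-normal) under the null:
# the rejection rate should land close to the nominal 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
probs = [0.1, 0.2, 0.4, 0.2, 0.1]   # same response distribution for both groups
n_sim, rejections = 5000, 0
for _ in range(n_sim):
    a = rng.choice([1, 2, 3, 4, 5], size=15, p=probs)
    b = rng.choice([1, 2, 3, 4, 5], size=15, p=probs)
    if ttest_ind(a, b).pvalue < 0.05:
        rejections += 1
print("empirical type I error:", rejections / n_sim)
```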

  3. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We analyze the behavior of sweet sorghum bagasse characteristics before and after NaOH pretreatments by statistical analysis. These characteristics include the percentages of lignocellulosic materials and the degree of crystallinity. We use the chi-square method to obtain the values of the fitted parameters, and then deploy Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find, in the cases of hemicellulose and lignin, that their percentages after pretreatment decrease statistically. On the other hand, crystallinity does not show similar behavior, as the data indicate that all fitted parameters in this case might be consistent with zero. Our statistical result is then cross-checked against the observations from X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment might not be sufficient to change the crystallinity index of the sweet sorghum bagasse.
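
A hedged sketch of the fit-then-test logic described above, with placeholder data rather than the bagasse measurements:

```python
# Chi-square-style weighted least squares fit of a line, then a t-test of
# whether the slope is consistent with zero at the 99.73% confidence level.
import numpy as np
from scipy import optimize, stats

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # e.g. pretreatment level (assumed)
y = np.array([24.1, 21.8, 20.2, 18.9, 16.5])  # e.g. % hemicellulose (assumed)
sigma = np.full_like(y, 0.8)                  # measurement uncertainties

popt, pcov = optimize.curve_fit(lambda x, a, b: a + b * x, x, y,
                                sigma=sigma, absolute_sigma=True)
slope, slope_err = popt[1], np.sqrt(pcov[1, 1])
p = 2 * stats.t.sf(abs(slope / slope_err), df=len(x) - 2)
print(f"slope = {slope:.3f} ± {slope_err:.3f}, p = {p:.4f}")
print("significant at 99.73% C.L.:", p < 1 - 0.9973)
```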

  4. The effect of project-based learning on students' statistical literacy levels for data representation

    NASA Astrophysics Data System (ADS)

    Koparan, Timur; Güven, Bülent

    2015-07-01

    The aim of this study is to determine the effect of a project-based learning approach on 8th grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with expert views. Seventy 8th grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before and once after the intervention. All raw scores were converted into linear measures using the Winsteps 3.72 Rasch modelling program, and t-tests and an ANCOVA were carried out on the linear measures. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the intervention are shown through the obtained person-item maps.
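
A minimal sketch of the ANCOVA step (not the Rasch calibration), using statsmodels with illustrative column names and made-up scores:

```python
# ANCOVA: post-test compared across groups with the pre-test as covariate.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "post": [52, 61, 58, 49, 63, 55, 47, 60],
    "pre":  [48, 50, 55, 47, 52, 51, 45, 49],
    "group": ["exp", "exp", "exp", "exp", "ctrl", "ctrl", "ctrl", "ctrl"],
})
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # group effect adjusted for pre-test
```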

  5. The enhanced forest inventory and analysis program of the USDA forest service: historical perspective and announcements of statistical documentation

    Treesearch

    Ronald E. McRoberts; William A. Bechtold; Paul L. Patterson; Charles T. Scott; Gregory A. Reams

    2005-01-01

    The Forest Inventory and Analysis (FIA) program of the USDA Forest Service has initiated a transition from regional, periodic inventories to an enhanced national FIA program featuring annual measurement of a proportion of plots in each state, greater national consistency, and integration with the ground sampling component of the Forest Health Monitoring (FHM) program...

  6. Atmospheric pollution measurement by optical cross correlation methods - A concept

    NASA Technical Reports Server (NTRS)

    Fisher, M. J.; Krause, F. R.

    1971-01-01

    Method combines standard spectroscopy with statistical cross correlation analysis of two narrow light beams for remote sensing to detect foreign matter of given particulate size and consistency. Method is applicable in studies of generation and motion of clouds, nuclear debris, ozone, and radiation belts.

  7. ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)

    EPA Science Inventory

    Abstract

    The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...

  8. Project Sell, Title VII: Final Evaluation 1970-1971.

    ERIC Educational Resources Information Center

    Condon, Elaine C.; And Others

    This evaluative report consists of two parts. The first is a narrative report which represents a summary by the evaluation team and recommendations regarding project activities; the second part provides a statistical analysis of project achievements. Details are provided on evaluation techniques, staff, management, instructional materials,…

  9. Are well functioning civil registration and vital statistics systems associated with better health outcomes?

    PubMed

    Phillips, David E; AbouZahr, Carla; Lopez, Alan D; Mikkelsen, Lene; de Savigny, Don; Lozano, Rafael; Wilmoth, John; Setel, Philip W

    2015-10-03

    In this Series paper, we examine whether well functioning civil registration and vital statistics (CRVS) systems are associated with improved population health outcomes. We present a conceptual model connecting CRVS to wellbeing, and describe an ecological association between CRVS and health outcomes. The conceptual model posits that the legal identity that civil registration provides to individuals is key to access entitlements and services. Vital statistics produced by CRVS systems provide essential information for public health policy and prevention. These outcomes benefit individuals and societies, including improved health. We use marginal linear models and lag-lead analysis to measure ecological associations between a composite metric of CRVS performance and three health outcomes. Results are consistent with the conceptual model: improved CRVS performance coincides with improved health outcomes worldwide in a temporally consistent manner. Investment to strengthen CRVS systems is not only an important goal for individuals and societies, but also a development imperative that is good for health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Waveform classification and statistical analysis of seismic precursors to the July 2008 Vulcanian Eruption of Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Rodgers, Mel; Smith, Patrick; Pyle, David; Mather, Tamsin

    2016-04-01

    Understanding the transition between quiescence and eruption at dome-forming volcanoes, such as Soufrière Hills Volcano (SHV), Montserrat, is important for monitoring volcanic activity during long-lived eruptions. Statistical analysis of seismic events (e.g. spectral analysis and identification of multiplets via cross-correlation) can be useful for characterising seismicity patterns and can be a powerful tool for analysing temporal changes in behaviour. Waveform classification is crucial for volcano monitoring, but consistent classification, both during real-time analysis and for retrospective analysis of previous volcanic activity, remains a challenge. Automated classification allows consistent re-classification of events. We present a machine learning (random forest) approach to rapidly classify waveforms that requires minimal training data. We analyse the seismic precursors to the July 2008 Vulcanian explosion at SHV and show systematic changes in frequency content and multiplet behaviour that had not previously been recognised. These precursory patterns of seismicity may be interpreted as changes in pressure conditions within the conduit during magma ascent and could be linked to magma flow rates. Frequency analysis of the different waveform classes supports the growing consensus that LP and Hybrid events should be considered end members of a continuum of low-frequency source processes. By using both supervised and unsupervised machine-learning methods we investigate the nature of waveform classification and assess current classification schemes.
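
A hedged sketch of this kind of classifier (the spectral features and class labels here are illustrative assumptions, not the authors' exact feature set):

```python
# Summarize each waveform by the fraction of spectral power in frequency
# bands, then train a random forest on labeled events.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def spectral_features(trace, n_bands=10):
    """Fraction of spectral power in evenly spaced frequency bands."""
    power = np.abs(np.fft.rfft(trace)) ** 2
    bands = np.array_split(power, n_bands)
    energy = np.array([b.sum() for b in bands])
    return energy / energy.sum()

rng = np.random.default_rng(7)
traces = rng.standard_normal((200, 1024))         # placeholder waveforms
labels = rng.choice(["LP", "hybrid", "VT"], 200)  # placeholder classes

X = np.array([spectral_features(t) for t in traces])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```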

  11. Reconnection properties in Kelvin-Helmholtz instabilities

    NASA Astrophysics Data System (ADS)

    Vernisse, Y.; Lavraud, B.; Eriksson, S.; Gershman, D. J.; Dorelli, J.; Pollock, C. J.; Giles, B. L.; Aunai, N.; Avanov, L. A.; Burch, J.; Chandler, M. O.; Coffey, V. N.; Dargent, J.; Ergun, R.; Farrugia, C. J.; Genot, V. N.; Graham, D.; Hasegawa, H.; Jacquey, C.; Kacem, I.; Khotyaintsev, Y. V.; Li, W.; Magnes, W.; Marchaudon, A.; Moore, T. E.; Paterson, W. R.; Penou, E.; Phan, T.; Retino, A.; Schwartz, S. J.; Saito, Y.; Sauvaud, J. A.; Schiff, C.; Torbert, R. B.; Wilder, F. D.; Yokota, S.

    2017-12-01

    Kelvin-Helmholtz instabilities provide natural laboratories for studying strong guide-field reconnection processes. In particular, unlike the usual dayside magnetopause, the conditions across the magnetopause in KH vortices are quasi-symmetric, with small differences in beta and magnetic shear angle. We study these properties by means of statistical analysis of the high-resolution data of the Magnetospheric Multiscale mission. Several Kelvin-Helmholtz instability events past the terminator plane and one long-lasting dayside instability event were used to produce this statistical analysis. Early results show consistency between the data and the theory. In addition, the results emphasize the importance of the thickness of the magnetopause as a driver of magnetic reconnection in low magnetic shear events.

  12. Scout trajectory error propagation computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1982-01-01

    Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
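
A minimal sketch of the error-statistics step as described (names and numbers are placeholders):

```python
# Treat each observed flight's burnout errors as one draw, resample them
# Monte Carlo style, and form the covariance matrix of trajectory parameters.
import numpy as np

rng = np.random.default_rng(0)
# ~50 observed sets of burnout errors in (altitude m, velocity m/s, angle deg)
flight_errors = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[2500.0, 10.0, 0.1], [10.0, 4.0, 0.02], [0.1, 0.02, 0.01]],
    size=50,
)
# Monte Carlo: draw trajectories by sampling (with replacement) from history
samples = flight_errors[rng.integers(0, len(flight_errors), size=10_000)]
P = np.cov(samples, rowvar=False)  # covariance matrix to propagate in time
print(P)
```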

  13. Getting the big picture in community science: methods that capture context.

    PubMed

    Luke, Douglas A

    2005-06-01

    Community science has a rich tradition of using theories and research designs that are consistent with its core value of contextualism. However, a survey of empirical articles published in the American Journal of Community Psychology shows that community scientists utilize a narrow range of statistical tools that are not well suited to assess contextual data. Multilevel modeling, geographic information systems (GIS), social network analysis, and cluster analysis are recommended as useful tools to address contextual questions in community science. An argument for increased methodological consilience is presented, where community scientists are encouraged to adopt statistical methodology that is capable of modeling a greater proportion of the data than is typical with traditional methods.

  14. Comparative analysis of seronegative and seropositive rheumatoid arthritis regarding some epidemiological and anamnestic characteristics.

    PubMed

    Sahatçiu-Meka, Vjollca; Izairi, Remzi; Rexhepi, Sylejman; Manxhuka-Kerliu, Suzana

    2007-01-01

    Classifying patients into two subsets of the disease--seronegative RA and seropositive RA--has been the subject of many studies aiming to clarify this phenomenon, so far without any conclusive or generally accepted answer. The aim of this prospective study was to carry out a comparative analysis of seronegative and seropositive rheumatoid arthritis (RA) with respect to some epidemiological and anamnestic characteristics. The study group consisted of seronegative patients with titers lower than 1:64 as defined by the Rose-Waaler test, while the control group consisted of seropositive patients with titers of 1:64 or higher. All patients belonged to the 2nd and 3rd functional classes according to the ARA criteria, were between 25-60 years of age (mean = 49.96), with disease duration between 1-27 years (mean = 6.41). Education, residence, and economic and living conditions did not show any statistically significant difference regarding serostatus. Familial clustering of RA confirmed higher susceptibility in the seropositive group (chi2 = 7.02; p < 0.01). In both subsets, banal diseases, psychic and physical trauma, weakness, and numbness of the hands and legs dominated, without any statistical difference regarding serostatus. Some differences between the groups regarding sex were noticed but were not statistically significant, except for physical trauma, which was more present in seronegative females (chi2 = 8.05; p < 0.01).
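
For illustration, the kind of contingency-table test reported above can be reproduced as follows; the counts are invented, not the study's data:

```python
# Chi-square test of independence for a 2x2 table (e.g. family history of
# RA by serostatus), mirroring the chi2 = 7.02 style of result reported.
import numpy as np
from scipy.stats import chi2_contingency

#              family history:  yes   no
table = np.array([[12, 88],   # seronegative
                  [28, 72]])  # seropositive
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```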

  15. Simplified Approach to Predicting Rough Surface Transition

    NASA Technical Reports Server (NTRS)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  16. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

    Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.

  17. Exploratory analysis of environmental interactions in central California

    USGS Publications Warehouse

    De Cola, Lee; Falcone, Neil L.

    1996-01-01

    As part of its global change research program, the United States Geological Survey (USGS) has produced raster data that describe the land cover of the United States using a consistent format. The data consist of elevations, satellite measurements, computed vegetation indices, land cover classes, and ancillary political, topographic and hydrographic information. This open-file report uses some of these data to explore the environment of a (256-km)² region of central California. We present various visualizations of the data, multiscale correlations between topography and vegetation, a path analysis of more complex statistical interactions, and a map that portrays the influence of agriculture on the region's vegetation. An appendix contains C and Mathematica code used to generate the graphics and some of the analysis.

  18. Welfare Reform in California: Early Results from the Impact Analysis.

    ERIC Educational Resources Information Center

    Klerman, Jacob Alex; Hotz, V. Joseph; Reardon, Elaine; Cox, Amy G.; Farley, Donna O.; Haider, Steven J.; Imbens, Guido; Schoeni, Robert

    The impact of California Work Opportunity and Responsibility to Kids (CalWORKS), which was passed to increase California welfare recipients' participation in welfare-to-work (WTW) activities, was examined. The impact study consisted of a nonexperimental program evaluation that used statistical models to estimate causal effects and a simulation…

  19. Analysis of First-Term Attrition of Non-Prior Service High-Quality U.S. Army Male Recruits

    DTIC Science & Technology

    1989-12-13

    the estimators. Under broad conditions (Hanushek, 1977), the maximum likelihood estimators are: (a) consistent, (b) asymptotically efficient, and...Diseases, Vol. 24, 1971, pp. 125-158. Hanushek, Eric A., and John E. Jackson, Statistical Methods for Social Scientists, Academic Press, New York, 1977

  1. Manifestations of Namibian Boy's Underachievement in Education

    ERIC Educational Resources Information Center

    Zimba, Roderick F.

    2015-01-01

    An analysis of the 2012 grade 10 and grade 12 Namibian examination data indicates that girls received higher grades than boys across the then 13 education regions (Educational Management Information System, EMIS, 2012). University of Namibia graduation statistics for the period of 2002 to 2012 revealed that the institution consistently produced…

  2. The Myth and Reality of Aging in America.

    ERIC Educational Resources Information Center

    National Council on the Aging, Inc., Washington, DC.

    To understand and document the image and the reality of old age and older Americans, the National Council on the Aging (NCOA) commissioned the major, in-depth survey which examined public attitudes and expectations and documented older Americans' views and personal experiences. Consisting of statistical tables, textual analysis, and subjective…

  3. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…

  4. Fall 2013 International Comparisons

    ERIC Educational Resources Information Center

    Northwest Evaluation Association, 2014

    2014-01-01

    This Fall report is an aggregated statistical analysis of Measures of Academic Progress® (MAP®) data from international schools. The report provides a consistent means of comparisons of specific sub-groups by subject and grade, which allows partners to compare their MAP® results with other schools within their region or membership organization.…

  5. Contributions to Statistical Problems Related to Microarray Data

    ERIC Educational Resources Information Center

    Hong, Feng

    2009-01-01

    Microarray is a high-throughput technology to measure gene expression. Analysis of microarray data brings many interesting and challenging problems. This thesis consists of three studies related to microarray data. First, we propose a Bayesian model for microarray data and use Bayes factors to identify differentially expressed genes. Second, we…

  6. Implementation of the medical research curriculum in graduate medical school.

    PubMed

    Park, Kwi Hwa; Kim, Tae-Hee; Chung, Wook-Jin

    2011-06-01

    The purpose of this study was to analyze the effect of the medical research curriculum on students' satisfaction and research self-efficacy. The curriculum was implemented for 79 graduate medical school students who entered in 2007 and 2008. It is delivered over 3 years and consists of 5 sub-courses: Research design, Research ethics, Medical statistics, Writing medical paper, and Presentation. The effect of this program was measured with 2 self-administered student surveys: a course satisfaction survey and a self-efficacy inventory. The Research Self-Efficacy Scale consisted of 18 items from 4 categories: Research design, Research ethics, Data analysis, and Result presentation. Descriptive statistics, paired t-tests, and analysis of covariance (ANCOVA) were used. The average course satisfaction score was 2.74 out of 4, indicating that students were generally satisfied with the course. Tutoring for the research course took place 2 or 3 times on average, and each tutorial session lasted 1.5 to 2 hours. Research self-efficacy in three categories (Research design, Research ethics, and Result presentation) increased significantly (p<0.1). The self-efficacy of male students was higher than that of female students. Self-efficacy did not differ significantly by experience of research paper writing at the undergraduate level. The curriculum showed positive results in cultivating students' research self-efficacy. There is a need to improve the statistical analysis class, as students reported that it was difficult.

  7. Nonparametric functional data estimation applied to ozone data: prediction and extreme value analysis.

    PubMed

    Quintela-del-Río, Alejandro; Francisco-Fernández, Mario

    2011-02-01

    The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists in fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that nonparametric estimators work satisfactorily, outperforming the behaviour of classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application, using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
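
A minimal sketch of the classical parametric benchmark the paper compares against, fitting a GEV to block maxima and computing a return level (data are synthetic placeholders for maximum ozone values):

```python
# Fit a generalised extreme value (GEV) distribution to annual maxima by
# maximum likelihood and compute an N-year return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annual_maxima = genextreme.rvs(c=-0.1, loc=80, scale=10, size=40,
                               random_state=rng)

c, loc, scale = genextreme.fit(annual_maxima)   # maximum likelihood fit
N = 50
return_level = genextreme.ppf(1 - 1 / N, c, loc=loc, scale=scale)
print(f"{N}-year return level: {return_level:.1f}")
```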

  8. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
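
Two of the goodness-of-fit measures named above are easy to state concretely; here is a minimal sketch with placeholder series:

```python
# Nash-Sutcliffe efficiency (NSE) and the coefficient of determination (R^2)
# for an observed vs. simulated time series.
import numpy as np

obs = np.array([3.1, 4.0, 5.2, 4.8, 3.9, 3.3])
sim = np.array([2.9, 4.3, 5.0, 4.5, 4.1, 3.0])

nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
r = np.corrcoef(obs, sim)[0, 1]
print(f"NSE = {nse:.3f}, R^2 = {r**2:.3f}")
```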

  9. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in QC. The purpose of this study was to evaluate whether the zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to those from the standard method, consisting of hot-plate digestion using concentrated HCl. The Zn and Cu sources commercially marketed in Brazil consisted of oxide, carbonate, and sulfate fertilizers and of by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% for Zn and 10 to 45% for Cu, as determined with the concentrated HCl method (Table 1). A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, a t-test for the mean error, and linear correlation coefficient analysis. In terms of equivalence, 10% HCl extraction was equivalent to the standard method for Zn, and the results of the USEPA 3051a and 10% HCl methods indicated that these methods were equivalent for Cu. Therefore, these methods can be considered viable alternatives to the standard method for determining Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
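
A hedged sketch of the statistical protocol named above: regress the alternative method on the standard one and jointly test intercept = 0 and slope = 1 (a Graybill-type F test), plus the mean-error t-test. Data are placeholders, and the exact form of the modified F-test may differ from the authors'.

```python
# Joint F-test of (intercept, slope) = (0, 1) and a one-sample t-test of the
# mean error between an alternative and a standard method.
import numpy as np
from scipy import stats

standard = np.array([15.2, 22.4, 30.1, 41.8, 55.0, 63.7, 72.9, 81.5])
alternative = np.array([15.0, 23.1, 29.5, 42.6, 54.1, 64.2, 73.5, 80.9])

X = np.column_stack([np.ones_like(standard), standard])
b, res, *_ = np.linalg.lstsq(X, alternative, rcond=None)
n, k = X.shape
s2 = res[0] / (n - k)                    # residual variance
d = b - np.array([0.0, 1.0])             # departure from (0, 1)
F = d @ (X.T @ X) @ d / (k * s2)         # Graybill-type joint F statistic
pF = stats.f.sf(F, k, n - k)

t, pt = stats.ttest_1samp(alternative - standard, 0.0)  # mean-error t-test
print(f"F = {F:.3f} (p = {pF:.3f}); mean-error t = {t:.3f} (p = {pt:.3f})")
```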

  10. Outcry Consistency and Prosecutorial Decisions in Child Sexual Abuse Cases.

    PubMed

    Bracewell, Tammy E

    2018-05-18

    This study examines the correlation between consistency in a child's sexual abuse outcry and the prosecutorial decision to accept or reject cases of child sexual abuse. Case-specific information was obtained from one Texas Children's Advocacy Center on all cases from 2010 to 2013. After the necessary deletions, 309 cases were included in the analysis. An outcry was defined as a sexual abuse disclosure. Consistency was measured at both the forensic interview and the sexual assault exam. Logistic regression was used to evaluate whether a correlation existed between disclosure and prosecutorial decisions. Disclosure was statistically significant: partial disclosure (disclosure at one point in time and denial at another) versus full disclosure (disclosure at two points in time) had a statistically significant odds ratio of 4.801. Implications are discussed; specifically, how the different disciplines involved in child protection should take advantage of the expertise of both forensic interviewers and forensic nurses to inform their decisions.
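
A minimal sketch of the reported model with synthetic data (the paper's odds ratio of 4.801 is only echoed here as a simulation input, and the variable coding is an assumption):

```python
# Logistic regression of a binary prosecutorial decision on disclosure
# pattern, recovering the odds ratio for partial vs. full disclosure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 309
partial = rng.integers(0, 2, n)               # 1 = partial disclosure
logit_p = -1.0 + np.log(4.8) * partial        # built-in OR of ~4.8
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(partial.astype(float))
fit = sm.Logit(outcome, X).fit(disp=0)
print("odds ratio (partial vs full):", np.exp(fit.params[1]))
```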

  11. Consistency errors in p-values reported in Spanish psychology journals.

    PubMed

    Caperos, José Manuel; Pardo, Antonio

    2013-01-01

    Recent reviews have drawn attention to frequent consistency errors when reporting statistical results. We have reviewed the statistical results reported in 186 articles published in four Spanish psychology journals. Of these articles, 102 contained at least one of the statistics selected for our study: Fisher's F, Student's t, and Pearson's χ². Out of the 1,212 complete statistics reviewed, 12.2% presented a consistency error, meaning that the reported p-value did not correspond to the reported value of the statistic and its degrees of freedom. In 2.3% of the cases, the correct calculation would have led to a different conclusion than the reported one. In terms of articles, 48% included at least one consistency error, and 17.6% would have to change at least one conclusion. In meta-analytical terms, with a focus on effect size, consistency errors can be considered substantial in 9.5% of the cases. These results imply a need to improve the quality and precision with which statistical results are reported in Spanish psychology journals.
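
The consistency check itself is mechanical: recompute the p-value from the reported statistic and its degrees of freedom and compare it with the reported p-value. A statcheck-style sketch (the tolerance is an assumption):

```python
# Recompute p from a reported F, t, or chi-square statistic and its df,
# and flag reports whose stated p-value does not match.
from scipy import stats

def consistent(test, value, df, reported_p, tol=0.005):
    if test == "F":
        p = stats.f.sf(value, *df)            # df = (df1, df2)
    elif test == "t":
        p = 2 * stats.t.sf(abs(value), df)    # two-sided
    elif test == "chi2":
        p = stats.chi2.sf(value, df)
    else:
        raise ValueError(test)
    return abs(p - reported_p) <= tol, p

ok, p = consistent("t", 2.10, 28, reported_p=0.045)
print(ok, round(p, 4))   # t(28) = 2.10 gives p close to 0.045
```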

  12. Featured Article: Transcriptional landscape analysis identifies differently expressed genes involved in follicle-stimulating hormone induced postmenopausal osteoporosis.

    PubMed

    Maasalu, Katre; Laius, Ott; Zhytnik, Lidiia; Kõks, Sulev; Prans, Ele; Reimann, Ene; Märtson, Aare

    2017-01-01

    Osteoporosis is a disorder associated with bone tissue reorganization, bone mass, and mineral density. Osteoporosis can severely affect postmenopausal women, causing bone fragility and osteoporotic fractures. The aim of the current study was to compare blood mRNA profiles of postmenopausal women with and without osteoporosis in order to find differently expressed genes and thus targets for future osteoporosis biomarker studies. Our study consisted of transcriptome analysis of whole blood serum from 12 elderly female osteoporotic patients and 12 non-osteoporotic elderly female controls. The transcriptome analysis was performed with RNA sequencing technology. For data analysis, the edgeR package of R Bioconductor was used. Two hundred and fourteen genes were expressed differently in osteoporotic compared with non-osteoporotic patients. Statistical analysis revealed 20 differently expressed genes with a false discovery rate of less than 1.47 × 10⁻⁴ among osteoporotic patients. Ten genes were up-regulated and 10 down-regulated. Further statistical analysis identified a potential osteoporosis mRNA biomarker pattern consisting of six genes: CACNA1G, ALG13, SBK1, GGT7, MBNL3, and RIOK3. Functional Ingenuity Pathway Analysis identified the strongest candidate genes with regard to potential involvement in a follicle-stimulating hormone-activated network of increased osteoclast activity and hypogonadal bone loss. The differentially expressed genes identified in this study may contribute to future research on postmenopausal osteoporosis blood biomarkers.

  13. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design new studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  14. Monitoring and Evaluation: Statistical Support for Life-cycle Studies, Annual Report 2003.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skalski, John

    2003-11-01

    The ongoing mission of this project is the development of statistical tools for analyzing fisheries tagging data in the most precise and appropriate manner possible. This mission also includes providing statistical guidance on the best ways to design large-scale tagging studies. This mission continues because the technologies for conducting fish tagging studies continuously evolve. In just the last decade, fisheries biologists have seen the evolution from freeze-brands and coded wire tags (CWT) to passive integrated transponder (PIT) tags, balloon-tags, radiotelemetry, and now, acoustic-tags. With each advance, the technology holds the promise of more detailed and precise information. However, the technologymore » for analyzing and interpreting the data also becomes more complex as the tagging techniques become more sophisticated. The goal of the project is to develop the analytical tools in parallel with the technical advances in tagging studies, so that maximum information can be extracted on a timely basis. Associated with this mission is the transfer of these analytical capabilities to the field investigators to assure consistency and the highest levels of design and analysis throughout the fisheries community. Consequently, this project provides detailed technical assistance on the design and analysis of tagging studies to groups requesting assistance throughout the fisheries community. Ideally, each project and each investigator would invest in the statistical support needed for the successful completion of their study. However, this is an ideal that is rarely if every attained. Furthermore, there is only a small pool of highly trained scientists in this specialized area of tag analysis here in the Northwest. Project 198910700 provides the financial support to sustain this local expertise on the statistical theory of tag analysis at the University of Washington and make it available to the fisheries community. Piecemeal and fragmented support from various agencies and organizations would be incapable of maintaining a center of expertise. The mission of the project is to help assure tagging studies are designed and analyzed from the onset to extract the best available information using state-of-the-art statistical methods. The overarching goals of the project is to assure statistically sound survival studies so that fish managers can focus on the management implications of their findings and not be distracted by concerns whether the studies are statistically reliable or not. Specific goals and objectives of the study include the following: (1) Provide consistent application of statistical methodologies for survival estimation across all salmon life cycle stages to assure comparable performance measures and assessment of results through time, to maximize learning and adaptive management opportunities, and to improve and maintain the ability to responsibly evaluate the success of implemented Columbia River FWP salmonid mitigation programs and identify future mitigation options. (2) Improve analytical capabilities to conduct research on survival processes of wild and hatchery chinook and steelhead during smolt outmigration, to improve monitoring and evaluation capabilities and assist in-season river management to optimize operational and fish passage strategies to maximize survival. (3) Extend statistical support to estimate ocean survival and in-river survival of returning adults. Provide statistical guidance in implementing a river-wide adult PIT-tag detection capability. 
(4) Develop statistical methods for survival estimation for all potential users and make this information available through peer-reviewed publications, statistical software, and technology transfers to organizations such as NOAA Fisheries, the Fish Passage Center, US Fish and Wildlife Service, US Geological Survey (USGS), US Army Corps of Engineers (USACE), Public Utility Districts (PUDs), the Independent Scientific Advisory Board (ISAB), and other members of the Northwest fisheries community. (5) Provide and maintain statistical software for tag analysis and user support. (6) Provide improvements in statistical theory and software as requested by user groups. These improvements include extending software capabilities to address new research issues, adapting tagging techniques to new study designs, and extending the analysis capabilities to new technologies such as radio-tags and acoustic-tags.

  15. Data Model Performance in Data Warehousing

    NASA Astrophysics Data System (ADS)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have become increasingly important in organizations that hold large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in such organizations. The data model is the starting point for designing and developing data warehouse architectures, so it needs stable interfaces and must remain consistent over a long period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, comprising three main tasks: data collection and organization, data analysis, and data interpretation. The results, examined with statistical analysis, show no statistically significant performance difference among the data models used in data warehousing; organizations can therefore use any of the four proposed data models when designing and developing a data warehouse.

  16. The sumLINK statistic for genetic linkage analysis in the presence of heterogeneity.

    PubMed

    Christensen, G B; Knight, S; Camp, N J

    2009-11-01

    We present the "sumLINK" statistic--the sum of multipoint LOD scores for the subset of pedigrees with nominally significant linkage evidence at a given locus--as an alternative to common methods to identify susceptibility loci in the presence of heterogeneity. We also suggest the "sumLOD" statistic (the sum of positive multipoint LOD scores) as a companion to the sumLINK. sumLINK analysis identifies genetic regions of extreme consistency across pedigrees without regard to negative evidence from unlinked or uninformative pedigrees. Significance is determined by an innovative permutation procedure based on genome shuffling that randomizes linkage information across pedigrees. This procedure for generating the empirical null distribution may be useful for other linkage-based statistics as well. Using 500 genome-wide analyses of simulated null data, we show that the genome shuffling procedure results in the correct type 1 error rates for both the sumLINK and sumLOD. The power of the statistics was tested using 100 sets of simulated genome-wide data from the alternative hypothesis from GAW13. Finally, we illustrate the statistics in an analysis of 190 aggressive prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics, where we identified a new susceptibility locus. We propose that the sumLINK and sumLOD are ideal for collaborative projects and meta-analyses, as they do not require any sharing of identifiable data between contributing institutions. Further, loci identified with the sumLINK have good potential for gene localization via statistical recombinant mapping, as, by definition, several linked pedigrees contribute to each peak.
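
    The two statistics are straightforward to compute once per-pedigree multipoint LOD scores are in hand. A minimal Python sketch follows; the single-pedigree nominal significance threshold (LOD = 0.588, roughly p = 0.05) and the function name are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def sumlink_sumlod(lod_scores, threshold=0.588):
        """sumLINK: sum of LOD scores over pedigrees with nominally
        significant linkage at the locus; sumLOD: sum of all positive
        LOD scores at the locus."""
        lods = np.asarray(lod_scores, dtype=float)
        return lods[lods >= threshold].sum(), lods[lods > 0].sum()

    # e.g., eight pedigrees scored at one locus
    print(sumlink_sumlod([1.2, -0.4, 0.7, 0.1, -1.1, 2.3, 0.0, 0.9]))
    ```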

  17. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which was superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, data correlation analyzer, scanner IFOV, and random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  18. Explanation of Two Anomalous Results in Statistical Mediation Analysis

    ERIC Educational Resources Information Center

    Fritz, Matthew S.; Taylor, Aaron B.; MacKinnon, David P.

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special…

  19. Global Document Delivery, User Studies, and Service Evaluation: The Gateway Experience

    ERIC Educational Resources Information Center

    Miller, Rush; Xu, Hong; Zou, Xiuying

    2008-01-01

    This study examines user and service data from 2002-2006 at the East Asian Gateway Service for Chinese and Korean Academic Journal Publications (Gateway Service), the University of Pittsburgh. Descriptive statistical analysis reveals that the Gateway Service has been consistently playing the leading role in global document delivery service as well…

  20. Scripted or Non-Scripted: A Comparative Analysis of Two Reading Programs

    ERIC Educational Resources Information Center

    Bosen, Pamela K.

    2014-01-01

    The focus of this quantitative comparative study was to analyze school achievement on third-grade reading assessments in 60 similar schools over a three-year period on Washington state standardized criterion-referenced assessments. This study provides statistical data showing the non-scripted programs were consistent for all three years while…

  1. Negotiated Wages and Working Conditions in Ontario Hospitals: 1973.

    ERIC Educational Resources Information Center

    Ontario Dept. of Labour, Toronto. Research Branch.

    This report is a statistical analysis of provisions in collective agreements covering approximately 38,000 full-time employees in 156 hospitals in the Province of Ontario. Part 1 consists of 56 tables giving information on the geographical distribution of hospital contracts, the unions that are party to them, their duration, and the sizes and…

  2. Descriptive Analysis of Student Ratings

    ERIC Educational Resources Information Center

    Marasini, Donata; Quatto, Piero

    2011-01-01

    Let X be a statistical variable representing student ratings of University teaching. It is natural to assume for X an ordinal scale consisting of k categories (in ascending order of satisfaction). At first glance, student ratings can be summarized by a location index (such as the mode or the median of X) associated with a convenient measure of…

  3. Wavelet analysis of birefringence images of myocardium tissue

    NASA Astrophysics Data System (ADS)

    Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Kushnerik, L.; Soltys, I. V.; Pavlyukovich, N.; Pavlyukovich, O.

    2018-01-01

    The paper consists of two parts. The first part presents brief theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments characterizing the distributions of the amplitudes of the MMI wavelet coefficients at different scanning scales are determined. The second part presents a statistical analysis of the distributions of wavelet-coefficient amplitudes of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction or ischemic heart disease, and objective criteria for differentiating the cause of death are defined.

  4. Statistical analysis of plasmatrough exohiss waves on Van Allen Probes

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Chen, L.

    2017-12-01

    Plasmatrough exohiss waves have attracted much attention due to their potentially important role in radiation belt dynamics. We investigated three years of Van Allen Probes data and built an event list of exohiss. The statistical analysis shows that exohiss preferentially occurs on the dayside during quiet times, with most wave power concentrated on the afternoon side at low L. Consistent with plasmaspheric hiss, the peak frequency is around 200 Hz and the wave amplitude decreases with increasing L. Furthermore, the ratio of equatorward to poleward Poynting flux increases by up to a factor of 10 as magnetic latitude increases to 20 deg. These results strongly support the interpretation that exohiss forms through leakage of plasmaspheric hiss, particularly during quiet times.

  5. Long-term Results of an Analytical Assessment of Student Compounded Preparations

    PubMed Central

    Roark, Angie M.; Anksorus, Heidi N.

    2014-01-01

    Objective. To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students’ compounded preparations were analyzed. Methods. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and the number of students who successfully compounded the preparation on the first attempt. Results. There was a consistent statistical difference in the API amount or concentration in 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. Conclusion. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy. PMID:26056402

  6. Long-term Results of an Analytical Assessment of Student Compounded Preparations.

    PubMed

    Roark, Angie M; Anksorus, Heidi N; Shrewsbury, Robert P

    2014-11-15

    To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students' compounded preparations were analyzed. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and the number of students who successfully compounded the preparation on the first attempt. There was a consistent statistical difference in the API amount or concentration in 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy.

  7. Technician Consistency in Specular Microscopy Measurements: A "Real-World" Retrospective Analysis of a United States Eye Bank.

    PubMed

    Rand, Gabriel M; Kwon, Ji Won; Gore, Patrick K; McCartney, Mitchell D; Chuck, Roy S

    2017-10-01

    To quantify the consistency of endothelial cell density (ECD) measurements among technicians in a single US eye bank operating under typical conditions. In this retrospective analysis of 51 microscopy technicians using a semiautomated counting method on 35,067 eyes from July 2007 to May 2015, technician- and date-related marginal ECD effects were calculated using linear regression models. ECD variance was correlated with the number of specular microscopy technicians. Technician mean ECDs ranged from 2386 ± 431 to 3005 ± 560 cells/mm². Nine technicians had statistically and clinically significant marginal effects. Annual mean ECDs adjusted for changes in technicians ranged from 2422 ± 433 to 2644 ± 430 cells/mm². The period of 2007 to 2009 had statistically and clinically significant marginal effects. There was a statistically nonsignificant association between the number of technicians and ECD standard deviation. There was significant ECD variability associated with specular microscopy technicians and with the date of measurement. We recommend that eye banks collect data related to laboratory factors that have been shown to influence ECD variability.

  8. Hydrometeorological application of an extratropical cyclone classification scheme in the southern United States

    NASA Astrophysics Data System (ADS)

    Senkbeil, J. C.; Brommer, D. M.; Comstock, I. J.; Loyd, T.

    2012-07-01

    Extratropical cyclones (ETCs) in the southern United States are often overlooked when compared with tropical cyclones in the region and ETCs in the northern United States. Although southern ETCs are significant weather events, there is currently no operational scheme for identifying and discussing these nameless storms. In this research, we classified 84 ETCs (1970-2009). We manually identified five distinct formation regions and seven unique ETC types using statistical classification, which employed principal components analysis and two methods of cluster analysis. Both manual and statistical storm types generally showed positive (negative) relationships with El Niño (La Niña). Manual storm types displayed precipitation swaths consistent with discrete storm tracks, which further legitimizes the existence of multiple modes of southern ETCs. Statistical storm types also displayed unique precipitation intensity swaths, but these swaths were less indicative of track location. It is hoped that by classifying southern ETCs into types, forecasters, hydrologists, and broadcast meteorologists might better anticipate projected amounts of precipitation at their locations.

  9. Asymptotic approximation method of force reconstruction: Application and analysis of stationary random forces

    NASA Astrophysics Data System (ADS)

    Sanchez, J.

    2018-06-01

    In this paper, the asymptotic approximation method of force reconstruction is applied to and analyzed for single degree-of-freedom systems. The original concepts are summarized, and the necessary probabilistic concepts are developed and applied to single degree-of-freedom systems. These concepts are then united, and the theoretical and computational models are developed. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.

  10. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
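
    The Monte Carlo technique described is easy to reproduce. A minimal sketch, assuming an isotropic zero-mean Gaussian execution error added to a fixed deterministic component (the paper's error model and parameter ranges may differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dv_magnitude_stats(bias, sigma, n=200_000):
        """Monte Carlo distribution of |Delta v| for a single TCM that is
        the sum of a deterministic bias vector and a zero-mean Gaussian
        random component with per-axis standard deviation sigma."""
        dv = np.asarray(bias, float) + rng.normal(0.0, sigma, size=(n, 3))
        mags = np.linalg.norm(dv, axis=1)
        return mags.mean(), np.percentile(mags, [5, 50, 95])

    mean_dv, (p5, p50, p95) = dv_magnitude_stats(bias=[2.0, 0.0, 0.0], sigma=0.5)
    print(f"mean |dv| = {mean_dv:.3f} m/s, 5/50/95% = {p5:.3f}/{p50:.3f}/{p95:.3f}")
    ```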

  11. MAGMA: analysis of two-channel microarrays made easy.

    PubMed

    Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph

    2007-07-01

    The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow, consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in a local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.

  12. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    PubMed

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  13. Assessing the Suitability of Historical PM(2.5) Element Measurements for Trend Analysis.

    PubMed

    Hyslop, Nicole P; Trzepla, Krystyna; White, Warren H

    2015-08-04

    The IMPROVE (Interagency Monitoring of Protected Visual Environments) network has characterized fine particulate matter composition at locations throughout the United States since 1988. A main objective of the network is to evaluate long-term trends in aerosol concentrations. Measurements inevitably advance over time, but changes in measurement technique have the potential to confound the interpretation of long-term trends. Problems of interpretation typically arise from changing biases, and changes in bias can be difficult to identify without comparison data that are consistent throughout the measurement series, which rarely exist. We created a consistent measurement series for exactly this purpose by reanalyzing the 15-year archives (1995-2009) of aerosol samples from three sites (Great Smoky Mountains National Park, Mount Rainier National Park, and Point Reyes National Seashore) as single batches using consistent analytical methods. In most cases, trend estimates based on the original and reanalysis measurements are statistically different for elements that were not measured above the detection limit consistently over the years (e.g., Na, Cl, Si, Ti, V, Mn). The original trends are more reliable for elements consistently measured above the detection limit. All but one of the 23 site-element series with detection rates >80% had statistically indistinguishable original and reanalysis trends (overlapping 95% confidence intervals).

  14. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed through clearly defined standard operating procedures. During evaluation of the methods, the major interest was determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  15. Searching for hidden unexpected features in the SnIa data

    NASA Astrophysics Data System (ADS)

    Shafieloo, A.; Perivolaropoulos, L.

    2010-06-01

    It is known that the χ² statistic and likelihood analysis may not be sensitive to all features of the data. Although the χ² statistic measures the overall goodness of fit of a model confronted with a data set, some specific features of the data can remain undetectable. For instance, it has been pointed out that there is an unexpected brightness of the SnIa data at z > 1 in the Union compilation. We quantify this statement by constructing a new statistic, called the Binned Normalized Difference (BND) statistic, which is applicable directly to the Type Ia Supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa data points with respect to a best-fit cosmological model at high redshifts. According to this statistic there is 2.2%, 5.3% and 12.6% consistency between the Gold06, Union08 and Constitution09 data, respectively, and a spatially flat ΛCDM model when the real data are compared with many realizations of simulated Monte Carlo datasets. The corresponding realization probability in the context of a (w0,w1) = (-1.4,2) model is more than 30% for all mentioned datasets, indicating a much better consistency for this model with respect to the BND statistic. The unexpected high-z brightness of SnIa can be interpreted either as a trend towards more deceleration at high z than expected in the context of ΛCDM, as a statistical fluctuation, or as a systematic effect, perhaps due to a mild SnIa evolution at high z.
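
    The abstract does not give the full construction, but a binned residual statistic of this flavor can be sketched. Here residuals of the distance moduli from a best-fit model are averaged per redshift bin and scaled to be roughly standard normal under the model; the published BND definition, binning, and Monte Carlo calibration may differ in detail.

    ```python
    import numpy as np

    def binned_normalized_difference(z, mu_obs, mu_err, mu_model, nbins=10):
        """Per-bin normalized mean residual of SnIa distance moduli from a
        best-fit model; systematically negative (brighter than the model)
        high-z bins would flag the kind of trend BND is designed to detect."""
        resid = (mu_obs - mu_model) / mu_err
        edges = np.quantile(z, np.linspace(0.0, 1.0, nbins + 1))
        idx = np.clip(np.digitize(z, edges) - 1, 0, nbins - 1)
        return np.array([resid[idx == b].mean() * np.sqrt(np.sum(idx == b))
                         for b in range(nbins)])
    ```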

  16. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    NASA Technical Reports Server (NTRS)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.

  17. A cohort mortality study of employees exposed to chlorinated chemicals.

    PubMed

    Wong, O

    1988-01-01

    The cohort of this historical prospective mortality study consisted of 697 male employees at a chlorination plant. A majority of the cohort was potentially exposed to benzotrichloride, benzyl chloride, benzoyl chloride, and other related chemicals. The mortality experience of the cohort was observed from 1943 through 1982. For the cohort as a whole, no statistically significant mortality excess was detected. The overall Standardized Mortality Ratio (SMR) was 100, and the SMR for all cancers combined was 122 (not significant). The respiratory cancer SMR for the cohort as a whole was 246 (7 observed vs. 2.8 expected). The excess was of borderline statistical significance, the lower 95% confidence limit being 99. Analysis by race showed that all 7 respiratory cancer deaths came from the white male employees, with an SMR of 265 (p less than 0.05). The respiratory cancer mortality excess was higher among employees in maintenance (SMR = 229) than among those in operations or production (SMR = 178). The lung cancer mortality excess among the laboratory employees was statistically significant (SMR = 1292). However, this observation should be viewed with caution, since it was based on only 2 deaths. Further analysis indicated that the respiratory cancer mortality excess was limited to the male employees with 15 or more years of employment (SMR = 379, p less than 0.05). Based on animal data as well as other epidemiologic studies, together with the internal consistency of analysis by length of employment, the data suggest an association between the chlorination process of toluene at the plant and an increased risk of respiratory cancer.(ABSTRACT TRUNCATED AT 250 WORDS)
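
    The SMR arithmetic in such cohort studies is simple: 100 × observed/expected deaths, with an exact Poisson interval on the observed count. A minimal sketch (the study's exact interval method is not stated; the Garwood chi-square form is assumed, and the inputs below are the rounded figures from the abstract):

    ```python
    from scipy.stats import chi2

    def smr_with_ci(observed, expected, alpha=0.05):
        """Standardized Mortality Ratio (x100) with an exact Poisson
        (Garwood) confidence interval on the observed count."""
        smr = 100.0 * observed / expected
        lo = 100.0 * chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
        hi = 100.0 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
        return smr, (lo, hi)

    # respiratory-cancer figures from the cohort: 7 observed vs. 2.8 expected
    print(smr_with_ci(7, 2.8))   # SMR ~250 with a lower limit near 100
    ```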

  18. Economic fluctuations and statistical physics: Quantifying extremely rare and less rare events in finance

    NASA Astrophysics Data System (ADS)

    Stanley, H. E.; Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki

    2007-08-01

    One challenge of economics is that the systems it treats have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time. We present an overview of recent research joining practitioners of economic theory and statistical physics to try to better understand puzzles regarding economic fluctuations. One of these puzzles is how to describe outliers, phenomena that lie outside of patterns of statistical regularity. We review evidence consistent with the possibility that such outliers may not exist. This possibility is supported by recent analysis of databases containing information about each trade of every stock.

  19. EHME: a new word database for research in Basque language.

    PubMed

    Acha, Joana; Laka, Itziar; Landa, Josu; Salaburu, Pello

    2014-11-14

    This article presents EHME, the frequency dictionary of Basque structure, an online program that enables researchers in psycholinguistics to extract word and nonword stimuli, based on a broad range of statistics concerning the properties of Basque words. The database consists of 22.7 million tokens, and properties available include morphological structure frequency and word-similarity measures, in addition to classical indexes: word frequency, orthographic structure, orthographic similarity, bigram and biphone frequency, and syllable-based measures. Measures are indexed at the lemma, morpheme and word level. We include reliability and validation analyses. The application is freely available, and enables the user to extract words based on concrete statistical criteria, as well as to obtain statistical characteristics from a list of words.

  20. Mars Pathfinder Near-Field Rock Distribution Re-Evaluation

    NASA Technical Reports Server (NTRS)

    Haldemann, A. F. C.; Golombek, M. P.

    2003-01-01

    We have completed analysis of a new near-field rock count at the Mars Pathfinder landing site and determined that the previously published rock count suggesting 16% cumulative fractional area (CFA) covered by rocks is incorrect. The earlier value is not so much wrong (our new CFA is 20%) as right for the wrong reason: both the old and the new CFAs are consistent with remote sensing data; however, the earlier determination incorrectly calculated rock coverage using apparent width rather than average diameter. Here we present details of the new rock database and the new statistics, as well as the importance of using rock average diameter for rock population statistics. The changes to the near-field data do not affect the far-field rock statistics.

  1. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    The use of response surface models and kriging models for approximating non-random, deterministic computer analyses is compared. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second order polynomial response surface models.
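
    The comparison generalizes to any deterministic simulation. In the sketch below, a toy analytic function stands in for the aerospike CFD/FEA responses; a second-order polynomial response surface and a kriging model (constant global model with a Gaussian correlation function, as in the paper) are fit to the same sample and compared on held-out points.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 2))
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # deterministic toy response

    # second-order polynomial response surface
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
    # kriging: constant global model, Gaussian (RBF) correlation
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True).fit(X, y)

    Xt = rng.uniform(-1, 1, size=(200, 2))
    yt = np.sin(3 * Xt[:, 0]) + Xt[:, 1] ** 2
    print("RSM RMSE:    ", np.sqrt(np.mean((rsm.predict(Xt) - yt) ** 2)))
    print("Kriging RMSE:", np.sqrt(np.mean((gp.predict(Xt) - yt) ** 2)))
    ```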

  2. Revisiting inconsistency in large pharmacogenomic studies

    PubMed Central

    Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Rene, Quevedo; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin

    2017-01-01

    In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933

  3. A web-based system for neural network based classification in temporomandibular joint osteoarthritis.

    PubMed

    de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos

    2018-07-01

    The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). The study's imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJOA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ, with 91% agreement between the clinician consensus and the SVA classifier. The DSCI also remotely ran a novel statistical analysis, Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.

  4. Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.; Berger, James O.

    1992-01-01

    'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.

  5. Digital morphogenesis via Schelling segregation

    NASA Astrophysics Data System (ADS)

    Barmpalias, George; Elwes, Richard; Lewis-Pye, Andrew

    2018-04-01

    Schelling’s model of segregation seeks to explain the way in which particles or agents of two types may come to arrange themselves spatially into configurations consisting of large homogeneous clusters, i.e. connected regions consisting of only one type. As one of the earliest agent based models studied by economists and perhaps the most famous model of self-organising behaviour, it also has direct links to areas at the interface between computer science and statistical mechanics, such as the Ising model and the study of contagion and cascading phenomena in networks. While the model has been extensively studied, it has largely resisted rigorous analysis, prior results from the literature generally pertaining to variants of the model which are tweaked so as to be amenable to standard techniques from statistical mechanics or stochastic evolutionary game theory. Brandt et al (2012, Proc. 44th Annual ACM Symposium on Theory of Computing) provided the first rigorous analysis of the unperturbed model, for a specific set of input parameters. Here we provide a rigorous analysis of the model’s behaviour much more generally and establish some surprising forms of threshold behaviour, notably the existence of situations where an increased level of intolerance for neighbouring agents of opposite type leads almost certainly to decreased segregation.
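
    For readers unfamiliar with the model, a minimal agent-based version is easy to state: agents of two types sit on a grid, an agent is unhappy when fewer than a threshold fraction of its occupied neighbours share its type, and unhappy agents move to random empty cells. The sketch below is a generic illustration only; the paper's rigorous results concern a specific unperturbed variant with its own precise dynamics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, tau, steps = 50, 0.5, 100_000          # grid size, tolerance, move attempts
    grid = rng.choice([0, 1, 2], size=(N, N), p=[0.1, 0.45, 0.45])  # 0 = empty

    def unhappy(i, j):
        """Unhappy if under a fraction tau of occupied Moore neighbours
        share this agent's type."""
        nb = grid[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
        same = np.sum(nb == grid[i, j]) - 1
        occupied = np.sum(nb != 0) - 1
        return occupied > 0 and same / occupied < tau

    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        if grid[i, j] != 0 and unhappy(i, j):
            empties = np.argwhere(grid == 0)
            ei, ej = empties[rng.integers(len(empties))]
            grid[ei, ej], grid[i, j] = grid[i, j], 0   # relocate to an empty cell
    ```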

  6. Validation of a Survey Questionnaire on Organ Donation: An Arabic World Scenario

    PubMed Central

    Agarwal, Tulika Mehta; Al-Thani, Hassan; Al Maslamani, Yousuf

    2018-01-01

    Objective To validate a questionnaire for measuring factors influencing organ donation and transplant. Methods The constructed questionnaire was based on the theory of planned behavior by Ajzen Icek and had 45 questions, including general inquiry and demographic information. Four experts on the topic, Arabic culture, and the Arabic and English languages established content validity through review, quantified by the content validity index (CVI). Construct validity was established by principal component analysis (PCA), whereas internal consistency was checked by Cronbach's Alpha and the intraclass correlation coefficient (ICC). Statistical analysis was performed using the SPSS 22.0 statistical package. Results Content validity in the form of S-CVI/Average and S-CVI/UA was 0.95 and 0.82, respectively, suggesting adequate content relevance of the questionnaire. Factor analysis indicated that the construct validity for each domain (knowledge, attitudes, beliefs, and intention) was 65%, 71%, 77%, and 70%, respectively. Cronbach's Alpha and ICC coefficients were 0.90, 0.67, 0.75, and 0.74 and 0.82, 0.58, 0.61, and 0.74, respectively, for the domains. Conclusion The questionnaire consists of 39 items on knowledge, attitudes, beliefs, and intention domains and is a valid and reliable tool for organ donation and transplant surveys. PMID:29593894
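
    The reported content validity indices follow from expert relevance ratings in a standard way. A minimal sketch, assuming the common convention that a rating of 3 or 4 on a 4-point relevance scale counts as "relevant":

    ```python
    import numpy as np

    def content_validity(ratings):
        """ratings: (n_items x n_experts) matrix of 1-4 relevance scores.
        Returns I-CVI per item, S-CVI/Ave (mean of the I-CVIs), and
        S-CVI/UA (proportion of items all experts agree are relevant)."""
        relevant = np.asarray(ratings) >= 3
        i_cvi = relevant.mean(axis=1)
        return i_cvi, i_cvi.mean(), np.mean(i_cvi == 1.0)
    ```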

  7. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen bonding network combined with atomic-level energy minimization of the optimized model using composite physics- and knowledge-based force fields for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, provided example submission and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet that is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  8. Bispectral analysis of equatorial spread F density irregularities

    NASA Technical Reports Server (NTRS)

    Labelle, J.; Lund, E. J.

    1992-01-01

    Bispectral analysis has been applied to density irregularities at frequencies 5-30 Hz observed with a sounding rocket launched from Peru in March 1983. Unlike the power spectrum, the bispectrum contains statistical information about the phase relations between the Fourier components which make up the waveform. In the case of spread F data from 475 km, the 5-30 Hz portion of the spectrum displays overall enhanced bicoherence relative to that of the background instrumental noise and to that expected due to statistical considerations, implying that the observed f^(-2.5) power law spectrum has a significant non-Gaussian component. This is consistent with previous qualitative analyses. The bicoherence has also been calculated for simulated equatorial spread F density irregularities in approximately the same wavelength regime, and the resulting bispectrum has some features in common with that of the rocket data. The implications of this analysis for equatorial spread F are discussed, and some future investigations are suggested.
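
    Bicoherence is the normalized magnitude of the bispectrum: it approaches 1 when Fourier components at f1, f2 and f1 + f2 are phase-coupled and stays near the noise floor for Gaussian signals. A minimal segment-averaged estimator (windowing and normalization conventions vary; this is one common choice):

    ```python
    import numpy as np

    def bicoherence(x, nperseg=256):
        """Segment-averaged bicoherence b(f1, f2) in [0, 1]; values well
        above the Gaussian noise floor indicate quadratic phase coupling."""
        nseg = len(x) // nperseg
        segs = x[:nseg * nperseg].reshape(nseg, nperseg) * np.hanning(nperseg)
        X = np.fft.rfft(segs, axis=1)
        nf = X.shape[1] // 2
        num = np.zeros((nf, nf), dtype=complex)
        d1 = np.zeros((nf, nf))
        d2 = np.zeros((nf, nf))
        for s in X:
            for i in range(nf):
                prod = s[i] * s[:nf]                    # X(f1) X(f2)
                num[i] += prod * np.conj(s[i:i + nf])   # ... X*(f1+f2)
                d1[i] += np.abs(prod) ** 2
                d2[i] += np.abs(s[i:i + nf]) ** 2
        return np.abs(num) / np.sqrt(d1 * d2 + 1e-30)
    ```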

  9. Earning Differences by Major Field of Study: Evidence from Three Cohorts of Recent Canadian Graduates.

    ERIC Educational Resources Information Center

    Finnie, Ross; Frenette, Marc

    2003-01-01

    Analysis of earnings differences by major field of study of three cohorts of graduates (1982, 1986, 1990) with bachelors' degrees from Canadian postsecondary institutions. Finds that earnings differences are large and statistically significant. The patterns are relatively consistent for the three cohorts and for male and female graduates, 2 and 5…

  10. Analysis of the Relationship between the Emotional Intelligence and Professional Burnout Levels of Teachers

    ERIC Educational Resources Information Center

    Adilogullari, Ilhan

    2014-01-01

    The purpose of this study is to analyze the relationship between the emotional intelligence and professional burnout levels of teachers. The study population consists of high school teachers employed in the city center of Kirsehir Province; the sample comprises 563 volunteer teachers. The statistical implementation of the study is performed…

  11. Unique songs of African wood-owls (Strix woodfordii) in the Democratic Republic of Congo.

    Treesearch

    B.G. Marcot

    2007-01-01

    Statistical analysis of African wood-owl (Strix woodfordii) song spectrograms suggest a significantly different song type in Democratic Republic of Congo (DRC), central Africa, than elsewhere in eastern or southern Africa. Songs of DRC owls tend to be consistently shorter in duration and more monotone in overall frequency range. The first note is...

  12. High School and Beyond. 1980 Sophomore Cohort. First Follow-Up (1982). [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The High School and Beyond 1980 Sophomore Cohort First Follow-Up (1982) data file is presented. The First Follow-Up Sophomore Cohort data tape consists of four related data files: (1) the student data file (including data availability flags, weights, questionnaire data, and composite variables); (2) Statistical Analysis System (SAS) control cards…

  13. Homologues of insulinase, a new superfamily of metalloendopeptidases.

    PubMed Central

    Rawlings, N D; Barrett, A J

    1991-01-01

    On the basis of a statistical analysis of an alignment of the amino acid sequences, a new superfamily of metalloendopeptidases is proposed, consisting of human insulinase, Escherichia coli protease III and mitochondrial processing endopeptidases from Saccharomyces and Neurospora. These enzymes do not contain the 'HEXXH' consensus sequence found in all previously recognized zinc metalloendopeptidases. PMID:2025223

  14. Cross-Population Joint Analysis of eQTLs: Fine Mapping and Functional Annotation

    PubMed Central

    Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger

    2015-01-01

    Mapping expression quantitative trait loci (eQTLs) has been shown to be a powerful tool to uncover the genetic underpinnings of many complex traits at the molecular level. In this paper, we present an integrative analysis approach that leverages eQTL data collected from multiple population groups. In particular, our approach effectively identifies multiple independent cis-eQTL signals that are consistent across populations, accounting for population heterogeneity in allele frequencies and linkage disequilibrium patterns. Furthermore, by integrating genomic annotations, our analysis framework enables high-resolution functional analysis of eQTLs. We applied our statistical approach to analyze the GEUVADIS data consisting of samples from five population groups. From this analysis, we concluded that: i) joint analysis across population groups greatly improves the power of eQTL discovery and the resolution of fine mapping of causal eQTLs; ii) many genes harbor multiple independent eQTLs in their cis regions; and iii) genetic variants that disrupt transcription factor binding are significantly enriched in eQTLs (p-value = 4.93 × 10^-22). PMID:25906321

  15. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model better characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078
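
    The described model comparison can be reproduced with a segmented least-squares fit and AIC. A minimal sketch with assumed b-values and a least-squares AIC form; the parameter count for the segmented bi-exponential fit is a simplification:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([0., 50., 100., 200., 400., 600., 800., 1000.])  # s/mm2, assumed

    def mono(bb, S0, D):
        return S0 * np.exp(-bb * D)

    def aic(y, yhat, k):
        n = len(y)
        return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

    def compare_models(signal):
        """Return (AIC_mono, AIC_ivim); the lower value is preferred."""
        pm, _ = curve_fit(mono, b, signal, p0=[signal[0], 1e-3])
        # segmented IVIM fit: D from the high-b points, then S0, f, D* overall
        hi = b >= 200
        D = curve_fit(mono, b[hi], signal[hi], p0=[signal[0], 1e-3])[0][1]
        def ivim(bb, S0, f, Ds):
            return S0 * (f * np.exp(-bb * Ds) + (1 - f) * np.exp(-bb * D))
        pi, _ = curve_fit(ivim, b, signal, p0=[signal[0], 0.1, 1e-2],
                          bounds=([0, 0, 1e-3], [np.inf, 1, 1]))
        return aic(signal, mono(b, *pm), 2), aic(signal, ivim(b, *pi), 4)
    ```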

  16. Polarization-interference Jones-matrix mapping of biological crystal networks

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Dubolazov, O. V.; Pidkamin, L. Y.; Sidor, M. I.; Pavlyukovich, N.; Pavlyukovich, O.

    2018-01-01

    The paper consists of two parts. The first part presents short theoretical basics of the method of Jones-matrix mapping with the help of reference wave. It was provided experimentally measured coordinate distributions of modulus of Jones-matrix elements of polycrystalline film of bile. It was defined the values and ranges of changing of statistic moments, which characterize such distributions. The second part presents the data of statistic analysis of the distributions of matrix elements of polycrystalline film of urine of donors and patients with albuminuria. It was defined the objective criteria of differentiation of albuminuria.

  17. Trends in groundwater quality in principal aquifers of the United States, 1988-2012

    USGS Publications Warehouse

    Lindsey, Bruce D.; Rupert, Michael G.

    2014-01-01

    The U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) Program analyzed trends in groundwater quality throughout the nation for the sampling period of 1988-2012. Trends were determined for networks (sets of wells routinely monitored by the USGS) for a subset of constituents by statistical analysis of paired water-quality measurements collected on a near-decadal time scale. The data set for chloride, dissolved solids, and nitrate consisted of 1,511 wells in 67 networks, whereas the data set for methyl tert-butyl ether (MTBE) consisted of 1,013 wells in 46 networks. The 25 principal aquifers represented by these networks account for about 75 percent of withdrawals of groundwater used for drinking-water supply for the nation. Statistically significant changes in chloride, dissolved-solids, or nitrate concentrations were found in many well networks over a decadal period. Concentrations increased significantly in 48 percent of networks for chloride, 42 percent of networks for dissolved solids, and 21 percent of networks for nitrate. Chloride, dissolved solids, and nitrate concentrations decreased significantly in 3, 3, and 10 percent of the networks, respectively. The magnitude of change in concentrations was typically small in most networks; however, the magnitude of change in networks with statistically significant increases was typically much larger than the magnitude of change in networks with statistically significant decreases. The largest increases of chloride concentrations were in urban areas in the northeastern and north central United States. The largest increases of nitrate concentrations were in networks in agricultural areas. Statistical analysis showed 42 of the 46 networks had no statistically significant changes in MTBE concentrations. The four networks with statistically significant changes in MTBE concentrations were in the northeastern United States, where MTBE was widely used. Two networks had increasing concentrations, and two networks had decreasing concentrations. Production and use of MTBE peaked in about 2000, and MTBE has been effectively banned in many areas since about 2006. The two networks that had increasing concentrations were sampled for the second time close to the peak of MTBE production, whereas the two networks that had decreasing concentrations were sampled for the second time 10 years after the peak of MTBE production.
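
    As a generic illustration of a trend test on paired near-decadal samples (not the NAWQA procedure itself), a nonparametric paired test on invented chloride data:

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # hypothetical paired chloride concentrations (mg/L), same wells ~10 y apart
    first = np.array([12.1, 8.4, 15.0, 22.3, 5.9, 30.2, 18.7, 9.8])
    second = np.array([14.0, 9.1, 14.6, 27.5, 6.4, 33.0, 21.1, 10.5])

    stat, p = wilcoxon(second, first)
    print(f"p = {p:.3f}, median change = {np.median(second - first):+.2f} mg/L")
    ```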

  18. Development of a Multidimensional Functional Health Scale for Older Adults in China.

    PubMed

    Mao, Fanzhen; Han, Yaofeng; Chen, Junze; Chen, Wei; Yuan, Manqiong; Alicia Hong, Y; Fang, Ya

    2016-05-01

    A first step toward successful aging is assessing the functional wellbeing of older adults. This study reports the development of a culturally appropriate brief scale, the Multidimensional Functional Health Scale for Chinese Elderly (MFHSCE), to assess the functional health of Chinese elderly. Through systematic literature review, the Delphi method, cultural adaptation, synthetic statistical item selection, Cronbach's alpha and confirmatory factor analysis, we developed the item pool, conducted two rounds of item selection, and performed psychometric evaluation. Synthetic statistical item selection and psychometric evaluation were carried out with 539 and 2,032 older adults, respectively. The MFHSCE consists of 30 items, covering activities of daily living, social relationships, physical health, mental health, cognitive function, and economic resources. The Cronbach's alpha was 0.92, and the comparative fit index was 0.917. The MFHSCE has good internal consistency and construct validity; it is also concise and easy to use in general practice, especially in communities in China.
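
    Cronbach's alpha, the internal-consistency figure reported (0.92), is computed from item and total-score variances. A minimal sketch:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_subjects x k_items) matrix of item responses."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)
    ```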

  19. Empirical estimation of a distribution function with truncated and doubly interval-censored data and its application to AIDS studies.

    PubMed

    Sun, J

    1995-09-01

    In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. Also, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
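
    The self-consistency iteration for interval-censored data has a compact form: each observation spreads its unit mass over the candidate points inside its interval in proportion to the current estimate, and the estimate is updated to the average allocation. A minimal sketch of the basic Turnbull-type iteration, without the truncation adjustment that the paper's generalization adds:

    ```python
    import numpy as np

    def self_consistency(intervals, support, tol=1e-8, max_iter=5000):
        """Estimate point masses p over `support` when observation i is
        known only to lie in [L_i, R_i]."""
        A = np.array([[(L <= s <= R) for s in support]
                      for L, R in intervals], dtype=float)   # alpha_ij
        p = np.full(len(support), 1.0 / len(support))
        for _ in range(max_iter):
            denom = A @ p                                    # P(obs i | p)
            new_p = (A * p).T @ (1.0 / denom) / len(intervals)
            if np.max(np.abs(new_p - p)) < tol:
                break
            p = new_p
        return p

    intervals = [(0, 2), (1, 3), (2, 5), (4, 6), (0, 1)]
    print(self_consistency(intervals, np.arange(7)).round(3))
    ```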

  20. Enabling High-Energy, High-Voltage Lithium-Ion Cells: Standardization of Coin-Cell Assembly, Electrochemical Testing, and Evaluation of Full Cells

    DOE PAGES

    Long, Brandon R.; Rinaldo, Steven G.; Gallagher, Kevin G.; ...

    2016-11-09

    Coin-cells are often the test format of choice for laboratories engaged in battery research and development as they provide a convenient platform for rapid testing of new materials on a small scale. However, obtaining reliable, reproducible data via the coin-cell format is inherently difficult, particularly in the full-cell configuration. In addition, statistical evaluation to prove the consistency and reliability of such data is often neglected. Herein we report on several studies aimed at formalizing physical process parameters and coin-cell construction related to full cells. Statistical analysis and performance benchmarking approaches are advocated as a means to more confidently track changes in cell performance. Finally, we show that trends in the electrochemical data obtained from coin-cells can be reliable and informative when standardized approaches are implemented in a consistent manner.

  1. Analysis of covariance as a remedy for demographic mismatch of research subject groups: some sobering simulations.

    PubMed

    Adams, K M; Brown, G G; Grant, I

    1985-08-01

    Analysis of Covariance (ANCOVA) is often used in neuropsychological studies to effect ex-post-facto adjustment of performance variables amongst groups of subjects mismatched on some relevant demographic variable. This paper reviews some of the statistical assumptions underlying this usage. In an attempt to illustrate the complexities of this statistical technique, three sham studies using actual patient data are presented. These staged simulations have varying relationships between group test performance differences and levels of covariate discrepancy. The results were robust and consistent in their nature, and were held to support the wisdom of previous cautions by statisticians concerning the employment of ANCOVA to justify comparisons between incomparable groups. ANCOVA should not be used in neuropsychological research to equate groups unequal on variables such as age and education or to exert statistical control whose objective is to eliminate consideration of the covariate as an explanation for results. Finally, the report advocates by example the use of simulation to further our understanding of neuropsychological variables.
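
    The pitfall is easy to reproduce: when the covariate distributions of two groups barely overlap, the ANCOVA "adjustment" extrapolates far beyond the data. A hedged sketch with statsmodels on simulated data (all numbers are hypothetical, not the patient data of the paper):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 100
        age = np.r_[rng.normal(40, 5, n), rng.normal(60, 5, n)]  # mismatched groups
        group = np.r_[np.zeros(n), np.ones(n)]
        score = 100 - 0.8 * age + rng.normal(0, 5, 2 * n)        # age alone drives score

        df = pd.DataFrame({"score": score, "age": age, "group": group})
        fit = smf.ols("score ~ group + age", data=df).fit()
        # the adjusted group effect relies on extrapolating the age-score line
        # across a region where the two groups share almost no data
        print(fit.params, fit.pvalues["group"], sep="\n")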

  2. Solar granulation and statistical crystallography: A modeling approach using size-shape relations

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1994-01-01

    The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between the number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively); and (3) a linear correlation between cell area and squared perimeter (called the convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreement for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
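
    Lewis's law, for example, reduces to an ordinary linear regression of mean cell area on the number of cell sides. A toy check (the numbers are illustrative, not the granulation measurements):

        import numpy as np

        n_sides = np.array([4, 5, 6, 7, 8])
        mean_area = np.array([0.62, 0.81, 1.00, 1.21, 1.38])  # normalized areas

        slope, intercept = np.polyfit(n_sides, mean_area, 1)
        r = np.corrcoef(n_sides, mean_area)[0, 1]
        print(f"area ≈ {slope:.3f}·n + {intercept:.3f}, r = {r:.3f}")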

  3. Assessment of statistical uncertainty in the quantitative analysis of solid samples in motion using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cabalín, L. M.; González, A.; Ruiz, J.; Laserna, J. J.

    2010-08-01

    Statistical uncertainty in the quantitative analysis of solid samples in motion by laser-induced breakdown spectroscopy (LIBS) has been assessed. For this purpose, a LIBS demonstrator was designed and constructed in our laboratory. The LIBS system consisted of a laboratory-scale conveyor belt, a compact optical module and a Nd:YAG laser operating at 532 nm. The speed of the conveyor belt was variable and could be adjusted up to a maximum of 2 m s⁻¹. Statistical uncertainty in the analytical measurements was estimated in terms of precision (reproducibility and repeatability) and accuracy. The results obtained by LIBS on shredded scrap samples under real conditions have demonstrated that the analytical precision and accuracy of LIBS are dependent on the sample geometry, position on the conveyor belt and surface cleanliness. Flat, relatively clean scrap samples exhibited acceptable reproducibility and repeatability; by contrast, samples with an irregular shape or a dirty surface exhibited a poor relative standard deviation.

  4. A study of two statistical methods as applied to shuttle solid rocket booster expenditures

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.; Huang, Y.; Graves, M.

    1974-01-01

    The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
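
    The Monte Carlo side of such a study is straightforward to sketch. The simulation below tracks a single booster line with an attrition probability per launch and retirement after 20 flights; the parameter values and the one-booster-per-launch abstraction are illustrative, not the report's actual setup:

        import numpy as np

        def boosters_needed(n_launches=440, p_loss=0.02, max_flights=20, rng=None):
            rng = rng or np.random.default_rng()
            built, flights = 1, 0
            for _ in range(n_launches):
                flights += 1
                if rng.random() < p_loss or flights == max_flights:
                    built += 1        # replace a lost or retired booster
                    flights = 0
            return built

        rng = np.random.default_rng(7)
        runs = [boosters_needed(rng=rng) for _ in range(10_000)]
        print(f"mean = {np.mean(runs):.1f}, "
              f"95th percentile = {np.percentile(runs, 95):.0f}")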

  5. The 1993 Mississippi river flood: A one hundred or a one thousand year event?

    USGS Publications Warehouse

    Malamud, B.D.; Turcotte, D.L.; Barton, C.C.

    1996-01-01

    Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to a partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to an annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
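
    A power-law fit to a partial-duration series amounts to a straight line in log-log space between discharge and recurrence interval. A sketch on synthetic peaks (not the Keokuk record):

        import numpy as np

        years = 80
        rng = np.random.default_rng(3)
        peaks = np.sort(rng.pareto(2.5, 200) * 1e4 + 2e4)[::-1]  # hypothetical discharges
        rank = np.arange(1, peaks.size + 1)
        T = years / rank                       # recurrence interval in years

        b, a = np.polyfit(np.log10(T), np.log10(peaks), 1)
        q100 = 10 ** (a + b * np.log10(100))   # extrapolate to the 100-year flood
        print(f"log10 Q = {a:.2f} + {b:.2f} log10 T  ->  Q(100 yr) ≈ {q100:,.0f}")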

  6. Graph theory applied to noise and vibration control in statistical energy analysis models.

    PubMed

    Guasch, Oriol; Cortés, Lluís

    2009-06-01

    A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of any order paths between subsystems, counting and labeling them, finding extremal paths, or determining the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one, modifying as few internal and coupling loss factors as possible.
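
    The path-counting machinery is the standard adjacency-matrix result: entry (i, j) of A**n counts the directed paths of length n from subsystem i to subsystem j. A four-subsystem toy graph (illustrative, not an SEA model from the paper):

        import numpy as np

        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])  # directed energy-flow connections

        for n in (1, 2, 3):
            paths = np.linalg.matrix_power(A, n)[0, 3]
            print(f"paths of length {n} from subsystem 0 to 3: {paths}")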

  7. Analysis of pediatric blood lead levels in New York City for 1970-1976.

    PubMed Central

    Billick, I H; Curran, A S; Shier, D R

    1979-01-01

    A study was completed of more than 170,000 records of pediatric venous blood levels and supporting demographic information collected in New York City during 1970-1976. The geometric mean (GM) blood lead level shows a consistent cyclical variation superimposed on an overall decreasing trend with time for all ages and ethnic groups studied. The GM blood lead levels for blacks are significantly greater than those for either Hispanics or whites. Regression analysis indicates a significant statistical association between GM blood lead level and ambient air lead level, after appropriate adjustments are made for age and ethnic group. These highly significant statistical relationships provide extremely strong incentives and directions for research into causal factors related to blood lead levels in children. PMID:499123

  8. History and physical examination findings predictive of testicular torsion: an attempt to promote clinical diagnosis by house staff.

    PubMed

    Srinivasan, Arun; Cinman, Nadya; Feber, Kevin M; Gitlin, Jordan; Palmer, Lane S

    2011-08-01

    To standardize the history and physical examination of boys who present with acute scrotum and identify parameters that best predict testicular torsion. Over a 5-month period, a standardized history and physical examination form with 22 items was used for all boys presenting with scrotal pain. Management decisions for radiological evaluation and surgical intervention were based on the results. Data were statistically analyzed in correlation with the eventual diagnosis. Of the 79 boys evaluated, 8 (10.1%) had testicular torsion. On univariate analysis, age, worsening pain, nausea/vomiting, severe pain at rest, absence of ipsilateral cremaster reflex, abnormal testicular position and scrotal skin changes were statistically predictive of torsion. After multivariate analysis and adjusting for confounding effect of other co-existing variables, absence of ipsilateral cremaster reflex (P < 0.001), nausea/vomiting (P < 0.05) and scrotal skin changes (P < 0.001) were the only consistent predictive factors of testicular torsion. An accurate history and physical examination of boys with acute scrotum should be primary in deciding upon further radiographic or surgical evaluation. While several forces have led to less consistent overnight resident staffing, consistent and reliable clinical evaluation of the acute scrotum using a standardized approach should reduce error, improve patient care and potentially reduce health care costs. Copyright © 2011 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  9. The effect of active video games by ethnicity, sex and fitness: subgroup analysis from a randomised controlled trial.

    PubMed

    Foley, Louise; Jiang, Yannan; Ni Mhurchu, Cliona; Jull, Andrew; Prapavessis, Harry; Rodgers, Anthony; Maddison, Ralph

    2014-04-03

    The prevention and treatment of childhood obesity is a key public health challenge. However, certain groups within populations have markedly different risk profiles for obesity and related health behaviours. Well-designed subgroup analysis can identify potential differential effects of obesity interventions, which may be important for reducing health inequalities. The study aim was to evaluate the consistency of the effects of active video games across important subgroups in a randomised controlled trial (RCT). A two-arm, parallel RCT was conducted in overweight or obese children (n=322; aged 10-14 years) to determine the effect of active video games on body composition. Statistically significant overall treatment effects favouring the intervention group were found for body mass index, body mass index z-score and percentage body fat at 24 weeks. For these outcomes, pre-specified subgroup analyses were conducted among important baseline demographic (ethnicity, sex) and prognostic (cardiovascular fitness) groups. No statistically significant interaction effects were found between the treatment and subgroup terms in the main regression model (p=0.36 to 0.93), indicating a consistent treatment effect across these groups. Preliminary evidence suggests an active video games intervention had a consistent positive effect on body composition among important subgroups. This may support the use of these games as a pragmatic public health intervention to displace sedentary behaviour with physical activity in young people.

  10. Sulfur in Cometary Dust

    NASA Technical Reports Server (NTRS)

    Fomenkova, M. N.

    1997-01-01

    The computer-intensive project consisted of the analysis and synthesis of existing data on composition of comet Halley dust particles. The main objective was to obtain a complete inventory of sulfur containing compounds in the comet Halley dust by building upon the existing classification of organic and inorganic compounds and applying a variety of statistical techniques for cluster and cross-correlational analyses. A student hired for this project wrote and tested the software to perform cluster analysis. The following tasks were carried out: (1) selecting the data from existing database for the proposed project; (2) finding access to a standard library of statistical routines for cluster analysis; (3) reformatting the data as necessary for input into the library routines; (4) performing cluster analysis and constructing hierarchical cluster trees using three methods to define the proximity of clusters; (5) presenting the output results in different formats to facilitate the interpretation of the obtained cluster trees; (6) selecting groups of data points common for all three trees as stable clusters. We have also considered the chemistry of sulfur in inorganic compounds.
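
    Steps (4)-(6) can be sketched with SciPy's hierarchical clustering: build trees under three linkage definitions and keep the groupings on which all three agree. The particle compositions below are synthetic stand-ins for the Halley data:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(m, 0.3, (20, 4)) for m in (0, 2, 4)])  # 3 groups

        labels = {m: fcluster(linkage(X, method=m), t=3, criterion="maxclust")
                  for m in ("single", "complete", "average")}

        # pairs co-clustered under every linkage method count as "stable"
        stable = np.ones((len(X), len(X)), bool)
        for lab in labels.values():
            stable &= lab[:, None] == lab[None, :]
        print("fraction of stably co-clustered pairs:", round(stable.mean(), 2))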

  11. The statistical big bang of 1911: ideology, technological innovation and the production of medical statistics.

    PubMed

    Higgs, W

    1996-12-01

    This paper examines the relationship between intellectual debate, technologies for analysing information, and the production of statistics in the General Register Office (GRO) in London in the early twentieth century. It argues that controversy between eugenicists and public health officials respecting the cause and effect of class-specific variations in fertility led to the introduction of questions in the 1911 census on marital fertility. The increasing complexity of the census necessitated a shift from manual to mechanised forms of data processing within the GRO. The subsequent increase in processing power allowed the GRO to make important changes to the medical and demographic statistics it published in the annual Reports of the Registrar General. These included substituting administrative sanitary districts for registration districts as units of analysis, consistently transferring deaths in institutions back to place of residence, and abstracting deaths according to the International List of Causes of Death.

  12. An astronomer's guide to period searching

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    2003-03-01

    We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content, we divide the issues into several sections: (ii) statistical and numerical aspects of model fitting; (iii) evaluation of fitted models as hypothesis testing; (iv) the role of orthogonal models in signal detection; (v) conditions for equivalence of periodograms; and (vi) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
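
    Treating period search as curve fitting works even with uneven sampling: for each trial frequency, fit a sinusoid by linear least squares and record the variance it explains (a periodogram-like statistic). A minimal sketch:

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 100, 180))        # unevenly sampled epochs
        y = 0.4 * np.sin(2 * np.pi * 0.37 * t) + rng.normal(0, 0.2, t.size)

        freqs = np.linspace(0.01, 1.0, 3000)
        power = np.empty_like(freqs)
        for i, f in enumerate(freqs):
            X = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t),
                                 np.ones_like(t)])
            resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
            power[i] = 1 - resid.var() / y.var()     # fraction of variance explained
        print(f"best frequency ≈ {freqs[np.argmax(power)]:.3f} (true value 0.370)")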

  13. Multivariate Statistical Analysis: a tool for groundwater quality assessment in the hydrogeologic region of the Ring of Cenotes, Yucatan, Mexico.

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Pacheco Avila, J.; Cabrera Sansores, A.

    2014-12-01

    The karstic aquifer of Yucatan is a vulnerable and complex system. The first fifteen meters of this aquifer have been polluted; protecting this resource is therefore important, because it is the only source of potable water for the entire state. Through the assessment of groundwater quality we can gain some knowledge about the main processes governing water chemistry, as well as spatial patterns that are important for establishing protection zones. In this work, multivariate statistical techniques are used to assess the groundwater quality of the supply wells (30 to 40 meters deep) in the hydrogeologic region of the Ring of Cenotes, located in Yucatan, Mexico. Cluster analysis and principal component analysis are applied to groundwater chemistry data of the study area. Results of principal component analysis show that the main sources of variation in the data are seawater intrusion, the interaction of the water with the carbonate rocks of the system, and some pollution processes. The cluster analysis shows that the data can be divided into four clusters. The spatial distribution of the clusters appears random but is consistent with seawater intrusion and pollution with nitrates. The overall results show that multivariate statistical analysis can be successfully applied in the groundwater quality assessment of this karstic aquifer.
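
    The two-step workflow (principal components to expose the dominant sources of variation, then clustering) is easy to sketch with scikit-learn. The ion table below is synthetic, not the Yucatan data:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)
        # columns: Cl, Na, Ca, HCO3, NO3 (mg/L) for 40 hypothetical wells
        X = np.abs(rng.normal([250, 150, 90, 300, 20],
                              [150, 90, 20, 60, 15], size=(40, 5)))

        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=2).fit(Z)
        print("variance explained:", pca.explained_variance_ratio_.round(2))
        print("PC1 loadings:", pca.components_[0].round(2))

        clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
        print("cluster sizes:", np.bincount(clusters))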

  14. Improving information retrieval in functional analysis.

    PubMed

    Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A

    2016-12-01

    Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
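
    At its core, an SEA test scores a gene set with a hypergeometric tail probability: how surprising is the overlap between the candidate gene list and the set? A minimal sketch (all counts are hypothetical):

        from scipy.stats import hypergeom

        N = 20000  # genes in the universe
        K = 150    # genes annotated to the pathway
        n = 400    # candidate (e.g., differentially expressed) genes
        k = 12     # candidates that fall in the pathway

        p = hypergeom.sf(k - 1, N, K, n)  # P(overlap >= k)
        print(f"enrichment p = {p:.3g}")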

  15. User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model

    NASA Technical Reports Server (NTRS)

    Paul, D. D., Jr.

    1972-01-01

    The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.

  16. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
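
    The flavor of the model can be sketched as a multiplicative recursion for the interevent time, tau_{k+1} = tau_k + gamma·tau_k^(2·mu-1) + sigma·tau_k^mu·eps_k, whose event stream is then binned into a counting signal and examined spectrally. Parameter values and boundary handling below are illustrative guesses, not the paper's exact specification:

        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(11)
        gamma, sigma, mu = 1e-4, 0.02, 0.5
        tau_min, tau_max = 1e-3, 1.0

        taus = np.empty(200_000)
        tau = 0.1
        for k in range(taus.size):
            tau += gamma * tau ** (2 * mu - 1) + sigma * tau ** mu * rng.normal()
            tau = np.clip(tau, tau_min, tau_max)  # keep tau within crude bounds
            taus[k] = tau

        events = np.cumsum(taus)                  # event times
        counts, _ = np.histogram(events, bins=int(events[-1] / 0.5))
        f, S = periodogram(counts - counts.mean())
        slope = np.polyfit(np.log10(f[1:200]), np.log10(S[1:200]), 1)[0]
        print(f"low-frequency spectral slope ≈ {slope:.2f}")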

  17. Rigorous Statistical Bounds in Uncertainty Quantification for One-Layer Turbulent Geophysical Flows

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2018-04-01

    Statistical bounds controlling the total fluctuations in mean and variance about a basic steady-state solution are developed for the truncated barotropic flow over topography. Statistical ensemble prediction is an important topic in weather and climate research. Here, the evolution of an ensemble of trajectories is considered using statistical instability analysis and is compared and contrasted with the classical deterministic instability for the growth of perturbations in one pointwise trajectory. The maximum growth of the total statistics in fluctuations is derived relying on the statistical conservation principle of the pseudo-energy. The saturation bound of the statistical mean fluctuation and variance in the unstable regimes with non-positive-definite pseudo-energy is achieved by linking with a class of stable reference states and minimizing the stable statistical energy. Two cases with dependence on initial statistical uncertainty and on external forcing and dissipation are compared and unified under a consistent statistical stability framework. The flow structures and statistical stability bounds are illustrated and verified by numerical simulations among a wide range of dynamical regimes, where subtle transient statistical instability exists in general with positive short-time exponential growth in the covariance even when the pseudo-energy is positive-definite. Among the various scenarios in this paper, there exist strong forward and backward energy exchanges between different scales which are estimated by the rigorous statistical bounds.

  18. An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-11-01

    In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operations of the statistics engines that require explicit communication. We then evaluate this proposed scheme in a shared memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit, and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.

  19. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
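
    A buddy check reduces to a local consistency test: an observation is kept only when its nearest neighbours support it. A minimal, non-adaptive sketch (the thresholds and neighbourhood size are illustrative, not the GEOS DAS settings):

        import numpy as np

        def buddy_check(x, y, resid, k=5, tol=3.0):
            """x, y: positions; resid: observed-minus-background residuals."""
            keep = np.ones(resid.size, bool)
            for i in range(resid.size):
                d = np.hypot(x - x[i], y - y[i])
                buddies = np.argsort(d)[1:k + 1]       # k nearest neighbours
                spread = resid[buddies].std(ddof=1) + 1e-6
                keep[i] = abs(resid[i] - resid[buddies].mean()) < tol * spread
            return keep

        rng = np.random.default_rng(4)
        x, y = rng.uniform(0, 10, 100), rng.uniform(0, 10, 100)
        resid = rng.normal(0, 1, 100)
        resid[[3, 42]] += 8                            # plant two gross errors
        print("rejected indices:", np.where(~buddy_check(x, y, resid))[0])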

  20. BATMAN: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2017-04-01

    This paper describes the Bayesian Technique for Multi-image Analysis (BATMAN), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical data set containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e., identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real integral-field spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of 2. Our analysis reveals that the algorithm prioritizes conservation of all the statistically significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of BaTMAn is not to be used as a 'black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially resolved data prior to its analysis. The source code is publicly available at http://astro.ft.uam.es/SELGIFS/BaTMAn.

  1. Evaluation of scheduling techniques for payload activity planning

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley F.

    1991-01-01

    Two tasks related to payload activity planning and scheduling were performed. The first task involved making a comparison of space mission activity scheduling problems with production scheduling problems. The second task consisted of a statistical analysis of the output of runs of the Experiment Scheduling Program (ESP). Details of the work which was performed on these two tasks are presented.

  2. Title I ESEA, High School; English as a Second Language: 1979-1980. OEE Evaluation Report.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Office of Educational Evaluation.

    The report is an evaluation of the 1979-80 High School Title I English as a Second Language Program. Two types of information are presented: (1) a narrative description of the program which provides qualitative data regarding the program, and (2) a statistical analysis of test results which consists of quantitative, city-wide data. By integrating…

  3. High School and Beyond. 1980 Senior Cohort. First Follow-Up (1982). [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The High School and Beyond 1980 Senior Cohort First Follow-Up (1982) Data File is presented. The First Follow-Up Senior Cohort data tape consists of four related data files: (1) the student data file (including data availability flags, weights, questionnaire data, and composite variables); (2) Statistical Analysis System (SAS) control cards for…

  4. 45 CFR 1356.71 - Federal review of the eligibility of children in foster care and the eligibility of foster care...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... by ACF statistical staff from the Adoption and Foster Care Analysis and Reporting System (AFCARS) data which are transmitted by the State agency to ACF. The sampling frame will consist of cases of... State's most recent AFCARS data submission. For the initial primary review, if these data are not...

  5. Cloud encounter statistics in the 28.5-43.5 KFT altitude region from four years of GASP observations

    NASA Technical Reports Server (NTRS)

    Jasperson, W. H.; Nastrom, G. D.; Davis, R. E.; Holdeman, J. D.

    1983-01-01

    The results of an analysis of cloud encounter measurements taken at aircraft flight altitudes as part of the Global Atmospheric Sampling Program are summarized. The results can be used in estimating the probability of cloud encounter and in assessing the economic feasibility of laminar flow control aircraft along particular routes. The data presented clearly show the tropical circulation and its seasonal migration; characteristics of the mid-latitude regime, such as the large-scale traveling cyclones in the winter and increased convective activity in the summer, can be isolated in the data. The cloud encounter statistics are shown to be consistent with the mid-latitude cyclone model. A model for TIC (time-in-clouds), a cloud encounter statistic, is presented for several common airline routes.

  6. Interpreting statistics of small lunar craters

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D.; Greeley, R.

    1977-01-01

    Some of the wide variations in the crater-size distributions in lunar photography and in the resulting statistics were interpreted as different degradation rates on different surfaces, different scaling laws in different targets, and a possible population of endogenic craters. These possibilities are reexamined for statistics of 26 different regions. In contrast to most other studies, crater diameters as small as 5 m were measured from enlarged Lunar Orbiter framelets. According to the results of the reported analysis, the different crater distribution types appear to be most consistent with the hypotheses of differential degradation and a superposed crater population. Differential degradation can account for the low level of equilibrium in incompetent materials such as ejecta deposits, mantle deposits, and deep regoliths where scaling law changes and catastrophic processes introduce contradictions with other observations.

  7. Computation of statistical secondary structure of nucleic acids.

    PubMed Central

    Yamamoto, K; Kitamura, Y; Yoshikura, H

    1984-01-01

    This paper presents a computer analysis of the statistical secondary structure of nucleic acids. For a given single-stranded nucleic acid, we generated a "structure map" which included all the annealing structures in the sequence. The map was transformed into an "energy map" by rough approximation; here, the energy level of every pairing structure consisting of more than 2 successive nucleic acid pairs was calculated. By using the "energy map", the probability of occurrence of each annealed structure was computed, i.e., the structure was computed statistically. The basis of the computation was the eight-queens problem from chess. The validity of our computer programme was checked by computing the tRNA structure, which has been well established. Successful application of this programme to small nuclear RNAs of various origins is demonstrated. PMID:6198622

  8. Statistical physics and economic fluctuations: do outliers exist?

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2003-02-01

    We present an overview of recent research applying ideas of statistical physics to try to better understand puzzles regarding economic fluctuations. One of these puzzles is how to describe outliers, phenomena that lie outside of patterns of statistical regularity. We review evidence consistent with the possibility that such outliers may not exist. This possibility is supported by recent analysis by Plerou et al. of a database containing the bid, ask, and sale price of each trade of every stock. Further, the data support the picture of economic fluctuations, due to Plerou et al., in which a financial market alternates between being in an “equilibrium phase” where market behavior is split roughly equally between buying and selling, and an “out-of-equilibrium phase” where the market is mainly either buying or selling.

  9. An overview of meta-analysis for clinicians.

    PubMed

    Lee, Young Ho

    2018-03-01

    The number of medical studies being published is increasing exponentially, and clinicians must routinely process large amounts of new information. Moreover, the results of individual studies are often insufficient to provide confident answers, as their results are not consistently reproducible. A meta-analysis is a statistical method for combining the results of different studies on the same topic and it may resolve conflicts among studies. Meta-analysis is being used increasingly and plays an important role in medical research. This review introduces the basic concepts, steps, advantages, and caveats of meta-analysis, to help clinicians understand it in clinical practice and research. A major advantage of a meta-analysis is that it produces a precise estimate of the effect size, with considerably increased statistical power, which is important when the power of the primary study is limited because of a small sample size. A meta-analysis may yield conclusive results when individual studies are inconclusive. Furthermore, meta-analyses investigate the source of variation and different effects among subgroups. In summary, a meta-analysis is an objective, quantitative method that provides less biased estimates on a specific topic. Understanding how to conduct a meta-analysis aids clinicians in the process of making clinical decisions.
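
    The pooling step of a random-effects meta-analysis can be written in a few lines using the DerSimonian-Laird estimate of between-study variance; the effect sizes below are made up for illustration:

        import numpy as np

        effects = np.array([0.30, 0.10, 0.45, 0.22, 0.05])    # per-study effects
        variances = np.array([0.04, 0.02, 0.09, 0.03, 0.01])  # sampling variances

        w = 1 / variances                                     # fixed-effect weights
        fixed = (w * effects).sum() / w.sum()
        Q = (w * (effects - fixed) ** 2).sum()                # heterogeneity statistic
        df = effects.size - 1
        tau2 = max(0.0, (Q - df) / (w.sum() - (w ** 2).sum() / w.sum()))

        w_re = 1 / (variances + tau2)                         # random-effects weights
        pooled = (w_re * effects).sum() / w_re.sum()
        se = np.sqrt(1 / w_re.sum())
        print(f"pooled = {pooled:.3f} ± {1.96 * se:.3f} (tau^2 = {tau2:.3f})")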

  10. A Systematic Review and Meta-Regression Analysis of Lung Cancer Risk and Inorganic Arsenic in Drinking Water.

    PubMed

    Lamm, Steven H; Ferdosi, Hamid; Dissen, Elisabeth K; Li, Ji; Ahn, Jaeil

    2015-12-07

    High levels (> 200 µg/L) of inorganic arsenic in drinking water are known to be a cause of human lung cancer, but the evidence at lower levels is uncertain. We have sought the epidemiological studies that have examined the dose-response relationship between arsenic levels in drinking water and the risk of lung cancer over a range that includes both high and low levels of arsenic. Regression analysis, based on six studies identified from an electronic search, examined the relationship between the log of the relative risk and the log of the arsenic exposure over a range of 1-1000 µg/L. The best-fitting continuous meta-regression model was sought and found to be a no-constant linear-quadratic analysis where both the risk and the exposure had been logarithmically transformed. This yielded both a statistically significant positive coefficient for the quadratic term and a statistically significant negative coefficient for the linear term. Sub-analyses by study design yielded results that were similar for both ecological studies and non-ecological studies. Statistically significant X-intercepts consistently found no increased level of risk at approximately 100-150 µg/L arsenic.
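
    The no-constant linear-quadratic model in log-log space is an ordinary least-squares fit without an intercept; its nonzero root gives the exposure where the fitted relative risk crosses 1. A sketch on made-up numbers (do not expect the paper's 100-150 µg/L crossing):

        import numpy as np

        exposure = np.array([10, 50, 100, 300, 600, 1000])    # µg/L arsenic
        rr = np.array([0.92, 0.88, 1.00, 1.60, 2.50, 3.80])   # hypothetical risks

        L = np.log(exposure)
        X = np.column_stack([L, L ** 2])                      # no constant term
        b, *_ = np.linalg.lstsq(X, np.log(rr), rcond=None)
        x0 = np.exp(-b[0] / b[1])                             # where log RR = 0
        print(f"b1 = {b[0]:.3f}, b2 = {b[1]:.3f}, RR = 1 at ≈ {x0:.0f} µg/L")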

  11. Climate Considerations Of The Electricity Supply Systems In Industries

    NASA Astrophysics Data System (ADS)

    Asset, Khabdullin; Zauresh, Khabdullina

    2014-12-01

    The study focuses on the analysis of climate considerations of electricity supply systems in a pellet industry. The developed analysis model consists of two modules: a statistical module for evaluating active power losses and a module for evaluating climate aspects. The statistical module is presented as a universal mathematical model of electrical systems and components of industrial load. It forms a basis for detailed accounting of power loss from the voltage levels. On the basis of the universal model, a set of programs is designed to perform the calculation and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms for calculating parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles are developed. The climate aspects module includes an analysis of the experimental data of the power supply system in pellet production. It allows identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor and deviation of standard value of voltage.

  12. A Systematic Review and Meta-Regression Analysis of Lung Cancer Risk and Inorganic Arsenic in Drinking Water

    PubMed Central

    Lamm, Steven H.; Ferdosi, Hamid; Dissen, Elisabeth K.; Li, Ji; Ahn, Jaeil

    2015-01-01

    High levels (> 200 µg/L) of inorganic arsenic in drinking water are known to be a cause of human lung cancer, but the evidence at lower levels is uncertain. We have sought the epidemiological studies that have examined the dose-response relationship between arsenic levels in drinking water and the risk of lung cancer over a range that includes both high and low levels of arsenic. Regression analysis, based on six studies identified from an electronic search, examined the relationship between the log of the relative risk and the log of the arsenic exposure over a range of 1–1000 µg/L. The best-fitting continuous meta-regression model was sought and found to be a no-constant linear-quadratic analysis where both the risk and the exposure had been logarithmically transformed. This yielded both a statistically significant positive coefficient for the quadratic term and a statistically significant negative coefficient for the linear term. Sub-analyses by study design yielded results that were similar for both ecological studies and non-ecological studies. Statistically significant X-intercepts consistently found no increased level of risk at approximately 100–150 µg/L arsenic. PMID:26690190

  13. Global Profiling and Novel Structure Discovery Using Multiple Neutral Loss/Precursor Ion Scanning Combined with Substructure Recognition and Statistical Analysis (MNPSS): Characterization of Terpene-Conjugated Curcuminoids in Curcuma longa as a Case Study.

    PubMed

    Qiao, Xue; Lin, Xiong-hao; Ji, Shuai; Zhang, Zheng-xiang; Bo, Tao; Guo, De-an; Ye, Min

    2016-01-05

    Fully understanding the chemical diversity of an herbal medicine is challenging. In this work, we describe a new approach to globally profile and discover novel compounds from an herbal extract using multiple neutral loss/precursor ion scanning combined with substructure recognition and statistical analysis. Turmeric (the rhizomes of Curcuma longa L.) was used as an example. This approach consists of three steps: (i) multiple neutral loss/precursor ion scanning to obtain substructure information; (ii) targeted identification of new compounds by extracted ion current and substructure recognition; and (iii) untargeted identification using total ion current and multivariate statistical analysis to discover novel structures. Using this approach, 846 terpecurcumins (terpene-conjugated curcuminoids) were discovered from turmeric, including a number of potentially novel compounds. Furthermore, two unprecedented compounds (terpecurcumins X and Y) were purified, and their structures were identified by NMR spectroscopy. This study extended the application of mass spectrometry to global profiling of natural products in herbal medicines and could help chemists to rapidly discover novel compounds from a complex matrix.

  14. Inferring consistent functional interaction patterns from natural stimulus FMRI data

    PubMed Central

    Sun, Jiehuan; Hu, Xintao; Huang, Xiu; Liu, Yang; Li, Kaiming; Li, Xiang; Han, Junwei; Guo, Lei

    2014-01-01

    There has been increasing interest in how the human brain responds to natural stimulus such as video watching in the neuroimaging field. Along this direction, this paper presents our effort in inferring consistent and reproducible functional interaction patterns under natural stimulus of video watching among known functional brain regions identified by task-based fMRI. Then, we applied and compared four statistical approaches, including Bayesian network modeling with searching algorithms: greedy equivalence search (GES), Peter and Clark (PC) analysis, independent multiple greedy equivalence search (IMaGES), and the commonly used Granger causality analysis (GCA), to infer consistent and reproducible functional interaction patterns among these brain regions. It is interesting that a number of reliable and consistent functional interaction patterns were identified by the GES, PC and IMaGES algorithms in different participating subjects when they watched multiple video shots of the same semantic category. These interaction patterns are meaningful given current neuroscience knowledge and are reasonably reproducible across different brains and video shots. In particular, these consistent functional interaction patterns are supported by structural connections derived from diffusion tensor imaging (DTI) data, suggesting the structural underpinnings of consistent functional interactions. Our work demonstrates that specific consistent patterns of functional interactions among relevant brain regions might reflect the brain's fundamental mechanisms of online processing and comprehension of video messages. PMID:22440644
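
    Of the four approaches, Granger causality is the most direct to reproduce; statsmodels ships an implementation. A sketch with two synthetic "region" time series in which x drives y at lag 1 (not the fMRI data of the study):

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(9)
        n = 500
        x = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(scale=0.5)

        # the second column is tested as a Granger cause of the first
        res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
        print("lag-1 F-test p =", res[1][0]["ssr_ftest"][1])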

  15. GARNET--gene set analysis with exploration of annotation relations.

    PubMed

    Rho, Kyoohyoung; Kim, Bumjin; Jang, Youngjun; Lee, Sanghyun; Bae, Taejeong; Seo, Jihae; Seo, Chaehwa; Lee, Jihyun; Kang, Hyunjung; Yu, Ungsik; Kim, Sunghoon; Lee, Sanghyuk; Kim, Wan Kyu

    2011-02-15

    Gene set analysis is a powerful method of deducing biological meaning for an a priori defined set of genes. Numerous tools have been developed to test statistical enrichment or depletion in specific pathways or gene ontology (GO) terms. Major difficulties towards biological interpretation are integrating diverse types of annotation categories and exploring the relationships between annotation terms of similar information. GARNET (Gene Annotation Relationship NEtwork Tools) is an integrative platform for gene set analysis with many novel features. It includes tools for retrieval of genes from annotation database, statistical analysis & visualization of annotation relationships, and managing gene sets. In an effort to allow access to a full spectrum of amassed biological knowledge, we have integrated a variety of annotation data that include the GO, domain, disease, drug, chromosomal location, and custom-defined annotations. Diverse types of molecular networks (pathways, transcription and microRNA regulations, protein-protein interaction) are also included. The pair-wise relationship between annotation gene sets was calculated using kappa statistics. GARNET consists of three modules--gene set manager, gene set analysis and gene set retrieval, which are tightly integrated to provide virtually automatic analysis for gene sets. A dedicated viewer for annotation network has been developed to facilitate exploration of the related annotations. GARNET (gene annotation relationship network tools) is an integrative platform for diverse types of gene set analysis, where complex relationships among gene annotations can be easily explored with an intuitive network visualization tool (http://garnet.isysbio.org/ or http://ercsb.ewha.ac.kr/garnet/).
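
    The pairwise kappa between two annotation gene sets treats each gene in the universe as a rated item (in or out of each set) and corrects the raw agreement for chance. A toy sketch with hypothetical sets:

        def kappa(set_a, set_b, universe):
            n = len(universe)
            agree = sum((g in set_a) == (g in set_b) for g in universe) / n
            pa, pb = len(set_a) / n, len(set_b) / n
            chance = pa * pb + (1 - pa) * (1 - pb)
            return (agree - chance) / (1 - chance)

        universe = {f"g{i}" for i in range(1000)}
        go_term = {f"g{i}" for i in range(0, 60)}
        pathway = {f"g{i}" for i in range(20, 80)}
        print(f"kappa = {kappa(go_term, pathway, universe):.2f}")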

  16. Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.

    PubMed

    Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R

    2012-08-01

    Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques, including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, as well as parametric survival and accelerated failure time (AFT) models with log-normal, log-logistic and Weibull distributions, were used to detect differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening with increasing missingness in the proportions. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
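
    The key modeling move is to let peaks below the detection limit contribute censored (CDF) terms to the likelihood instead of being dropped. A minimal log-normal sketch with an AFT-style group shift, fitted by direct likelihood maximization (simulated data; not the authors' supplemental R code):

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(6)
        group = np.repeat([0, 1], 50)
        log_int = 5.0 + 0.8 * group + rng.normal(0, 1.0, 100)  # true shift 0.8
        limit = 5.0
        detected = log_int > limit
        obs = np.where(detected, log_int, limit)               # left-censored

        def negloglik(theta):
            mu0, beta, log_sigma = theta
            mu, sigma = mu0 + beta * group, np.exp(log_sigma)
            ll = np.where(detected,
                          stats.norm.logpdf(obs, mu, sigma),    # observed peaks
                          stats.norm.logcdf(limit, mu, sigma))  # censored peaks
            return -ll.sum()

        fit = optimize.minimize(negloglik, x0=[4.0, 0.0, 0.0], method="Nelder-Mead")
        print("estimated group effect:", round(fit.x[1], 2))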

  17. Analysis methodology and development of a statistical tool for biodistribution data from internal contamination with actinides.

    PubMed

    Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne

    2017-03-01

    The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and a better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to understand better the influence of contamination characteristics on actinide biokinetics. Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.

  18. Coordinate based random effect size meta-analysis of neuroimaging studies.

    PubMed

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate-based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here, a method of performing coordinate-based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of the censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
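
    The FCDR builds on the familiar Benjamini-Hochberg step-up rule, which takes only a few lines on its own (the cluster-wise machinery of ClusterZ is not reproduced here):

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            p = np.asarray(pvals, float)
            order = np.argsort(p)
            scaled = p[order] * p.size / np.arange(1, p.size + 1)
            keep = np.zeros(p.size, bool)
            passed = np.nonzero(scaled <= q)[0]
            if passed.size:
                keep[order[: passed.max() + 1]] = True  # up to largest passing rank
            return keep

        pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
        print(benjamini_hochberg(pvals))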

  19. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.

  20. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    PubMed Central

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol; Scalcinati, Gionata; Fagerberg, Linn; Uhlén, Matthias; Nielsen, Jens

    2012-01-01

    RNA-seq has recently become an attractive method of choice in the studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison based on the results obtained through Affymetrix microarrays. As a case study for our work, we used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we assess the influence of genetic variation on the estimation of gene expression level using three different aligners for read-mapping (Gsnap, Stampy and TopHat) on the S288c genome, the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq), and we explore the consistency between RNA-seq analysis using a reference genome and a de novo assembly approach. High reproducibility among biological replicates (correlation ≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation ≥0.91) are reported. The results from differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data. PMID:22965124

  1. An analysis of Greek seismicity based on Non Extensive Statistical Physics: The interdependence of magnitude, interevent time and interevent distance.

    NASA Astrophysics Data System (ADS)

    Efstathiou, Angeliki; Tzanis, Andreas; Vallianatos, Filippos

    2014-05-01

    The context of Non Extensive Statistical Physics (NESP) has recently been suggested to comprise an appropriate tool for the analysis of complex dynamic systems with scale invariance, long-range interactions, long-range memory and systems that evolve in a fractal-like space-time. This is because the active tectonic grain is thought to comprise a (self-organizing) complex system; therefore, its expression (seismicity) should be manifested in the temporal and spatial statistics of energy release rates. In addition to energy release rates expressed by the magnitude M, measures of the temporal and spatial interactions are the time (Δt) and hypocentral distance (Δd) between consecutive events. Recent work indicated that if the distributions of M, Δt and Δd are independent, so that the joint probability p(M,Δt,Δd) factorizes into the probabilities of M, Δt and Δd, i.e. p(M,Δt,Δd) = p(M)p(Δt)p(Δd), then the frequency of earthquake occurrence is multiply related, not only to magnitude as the celebrated Gutenberg-Richter law predicts, but also to interevent time and distance by means of well-defined power-laws consistent with NESP. The present work applies these concepts to investigate the self-organization and temporal/spatial dynamics of seismicity in Greece and western Turkey for the period 1964-2011. The analysis was based on the ISC earthquake catalogue, which is homogeneous by construction, with consistently determined hypocenters and magnitudes. The presentation focuses on the analysis of bivariate Frequency-Magnitude-Time distributions, while using the interevent distances as spatial constraints (or spatial filters) for studying the spatial dependence of the energy and time dynamics of the seismicity. It is demonstrated that the frequency of earthquake occurrence is multiply related to the magnitude and the interevent time by means of well-defined multi-dimensional power-laws consistent with NESP, and that it has attributes of universality, as it holds for a broad range of spatial, temporal and magnitude scales. Provided that the multivariate empirical frequency distributions are based on a sufficient number of observations as an empirical lower limit, the results are stable and consistent with established knowledge, irrespective of the magnitude and spatio-temporal range of the earthquake catalogue, or operations pertaining to re-sampling, bootstrapping or re-arrangement of the catalogue. It is also demonstrated that the expression of the regional active tectonic grain may comprise a mixture of processes significantly dependent on Δd. The analysis of the size (energy) distribution of earthquakes yielded results consistent with a correlated sub-extensive system; the results are also consistent with conventional determinations of Frequency-Magnitude distributions. The analysis of interevent times determined the existence of sub-extensivity and near-field interaction (correlation) in the complete catalogue of Greek and western Turkish seismicity (mixed background earthquake activity and aftershock processes), as well as in the pure background process (declustered catalogue). This could be attributed to the joint effect of near-field interaction between neighbouring earthquakes or seismic areas and interaction within aftershock sequences. The background process appears to be moderately to weakly correlated at the far field. Formal random temporal processes have not been detected.
A general inference supported by the above observations is that aftershock sequences may be an integral part of the seismogenetic process, as they appear to partake in long-range interaction. A formal explanation of such an effect is pending, but may nevertheless involve delayed remote triggering of seismic activity by (transient or static) stress transfer from the main shocks and large aftershocks, and/or the cascading effects already discussed by Marsan and Lengliné (2008). In this view, the effect weakens when aftershocks are removed because aftershocks are the link between the main shocks and their remote offshoots. Overall, the above results compare well with the results for North Californian seismicity, which have shown that the expression of seismicity in Northern California is generally consistent with non-extensive (sub-extensive) thermodynamics. Acknowledgments: This work was supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC". References: Tzanis, A., Vallianatos, F. and Efstathiou, A., Multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: the interdependence of magnitude, interevent time and interevent distance in North California, Bulletin of the Geological Society of Greece, vol. XLVII, 2013, Proceedings of the 13th International Congress, Chania, Sept. 2013; Tzanis, A., Vallianatos, F. and Efstathiou, A., Generalized multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: An appraisal of the universality in the interdependence of magnitude, interevent time and interevent distance, Geophysical Research Abstracts, Vol. 15, EGU2013-628, 2013, EGU General Assembly 2013; Marsan, D. and Lengliné, O., 2008. Extending earthquakes' reach through cascading, Science, 319, 1076, doi: 10.1126/science.1148783; On-line Bulletin, http://www.isc.ac.uk, Internatl. Seis. Cent., Thatcham, United Kingdom, 2011.
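    As a hedged illustration of the NESP-style power laws discussed above, the sketch below fits a q-exponential survival function to synthetic interevent times; the catalogue, the parameter values and the heavy-tailed generator are all hypothetical stand-ins, with q > 1 indicating the sub-extensive regime.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exp_survival(t, q, t0):
    # NESP q-exponential survival function for q > 1:
    # P(>t) = [1 + (q - 1) * t / t0] ** (1 / (1 - q))
    return (1.0 + (q - 1.0) * t / t0) ** (1.0 / (1.0 - q))

rng = np.random.default_rng(1)
# Synthetic heavy-tailed interevent times (hypothetical catalogue)
times = np.sort(rng.pareto(2.5, size=2000) * 10.0)
survival = 1.0 - np.arange(len(times)) / len(times)   # empirical P(>t)

(q_hat, t0_hat), _ = curve_fit(q_exp_survival, times, survival,
                               p0=(1.3, 10.0),
                               bounds=([1.01, 0.1], [3.0, 100.0]))
print(f"q = {q_hat:.3f}, t0 = {t0_hat:.2f}")   # q > 1: sub-extensive regime
```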

  2. Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes

    NASA Astrophysics Data System (ADS)

    Lavaux, Guilhem; Jasche, Jens

    2016-01-01

    This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations and the matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology. This second step allows for the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.

  3. Raindrop Size Distribution in Different Climatic Regimes from Disdrometer and Dual-Polarized Radar Analysis.

    NASA Astrophysics Data System (ADS)

    Bringi, V. N.; Chandrasekar, V.; Hubbert, J.; Gorgucci, E.; Randeu, W. L.; Schoenhuber, M.

    2003-01-01

    The application of polarimetric radar data to the retrieval of raindrop size distribution parameters and rain rate in samples of convective and stratiform rain types is presented. Data from the Colorado State University (CSU) CHILL, NCAR S-band polarimetric (S-Pol), and NASA Kwajalein radars are analyzed for the statistics and functional relation of these parameters with rain rate. Surface drop size distribution measurements using two different disdrometers (2D video and RD-69) from a number of climatic regimes are analyzed and compared with the radar retrievals in a statistical and functional approach. The composite statistics based on disdrometer and radar retrievals suggest that, on average, the two parameters (generalized intercept and median volume diameter) for stratiform rain distributions lie on a straight line with negative slope, which appears to be consistent with variations in the microphysics of stratiform precipitation (melting of larger, dry snow particles versus smaller, rimed ice particles). In convective rain, 'maritime-like' and 'continental-like' clusters could be identified in the same two-parameter space that are consistent with the different multiplicative coefficients in the Z = aR^1.5 relations quoted in the literature for maritime and continental regimes.
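    For readers unfamiliar with Z-R relations, the following worked example inverts Z = aR^1.5 for rain rate at a fixed reflectivity; the maritime-like and continental-like coefficients are hypothetical placeholders, not the values from the study.

```python
def rain_rate(z_linear: float, a: float, b: float = 1.5) -> float:
    """Invert Z = a * R**b for rain rate R (mm/h); Z in mm^6/m^3."""
    return (z_linear / a) ** (1.0 / b)

z_dbz = 40.0                      # measured reflectivity in dBZ
z_lin = 10.0 ** (z_dbz / 10.0)    # dBZ to linear reflectivity units
for label, a in [("maritime-like (hypothetical a)", 150.0),
                 ("continental-like (hypothetical a)", 400.0)]:
    print(f"{label}: R = {rain_rate(z_lin, a):.1f} mm/h")
```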

  4. External Tank Liquid Hydrogen (LH2) Prepress Regression Analysis Independent Review Technical Consultation Report

    NASA Technical Reports Server (NTRS)

    Parsons, Vickie S.

    2009-01-01

    The request to conduct an independent review of regression models, developed for determining the expected Launch Commit Criteria (LCC) External Tank (ET)-04 cycle count for the Space Shuttle ET tanking process, was submitted to the NASA Engineering and Safety Center (NESC) on September 20, 2005. The NESC team performed an independent review of the regression models documented in Prepress Regression Analysis, Tom Clark and Angela Krenn, 10/27/05. This consultation consisted of a peer review, by statistical experts, of the proposed regression models provided in the Prepress Regression Analysis. This document is the consultation's final report.

  5. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    USGS Publications Warehouse

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics capture mathematical rather than ecological relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, and then interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on the number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the future use of these commonly available ecological and statistical methods in preparing assemblage data for use in ecological indicators.
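    The convergence step described above (groupings ≥60% similar to each other) can be illustrated with a small sketch that scores pairwise Jaccard similarity between the species sets produced by different methods; the species lists are invented.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared species over total distinct species."""
    return len(a & b) / len(a | b)

# Hypothetical species groupings produced by three statistical methods
groupings = {
    "Log":  {"blacknose dace", "creek chub", "white sucker", "fallfish"},
    "DCA":  {"blacknose dace", "creek chub", "white sucker", "longnose dace"},
    "NMDS": {"blacknose dace", "creek chub", "fallfish", "longnose dace"},
}

names = list(groupings)
for i, m1 in enumerate(names):
    for m2 in names[i + 1:]:
        sim = jaccard(groupings[m1], groupings[m2])
        flag = "convergent" if sim >= 0.6 else "divergent"
        print(f"{m1} vs {m2}: {sim:.2f} ({flag})")
```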

  6. Tract-Based Spatial Statistics in Preterm-Born Neonates Predicts Cognitive and Motor Outcomes at 18 Months.

    PubMed

    Duerden, E G; Foong, J; Chau, V; Branson, H; Poskitt, K J; Grunau, R E; Synnes, A; Zwicker, J G; Miller, S P

    2015-08-01

    Adverse neurodevelopmental outcome is common in children born preterm. Early sensitive predictors of neurodevelopmental outcome such as MR imaging are needed. Tract-based spatial statistics, a diffusion MR imaging analysis method, performed at term-equivalent age (40 weeks) is a promising predictor of neurodevelopmental outcomes in children born very preterm. We sought to determine the association of tract-based spatial statistics findings before term-equivalent age with neurodevelopmental outcome at 18-months corrected age. Of 180 neonates (born at 24-32-weeks' gestation) enrolled, 153 had DTI acquired early at 32 weeks' postmenstrual age and 105 had DTI acquired later at 39.6 weeks' postmenstrual age. Voxelwise statistics were calculated by performing tract-based spatial statistics on DTI that was aligned to age-appropriate templates. At 18-month corrected age, 166 neonates underwent neurodevelopmental assessment by using the Bayley Scales of Infant Development, 3rd ed, and the Peabody Developmental Motor Scales, 2nd ed. Tract-based spatial statistics analysis applied to early-acquired scans (postmenstrual age of 30-33 weeks) indicated a limited significant positive association between motor skills and axial diffusivity and radial diffusivity values in the corpus callosum, internal and external/extreme capsules, and midbrain (P < .05, corrected). In contrast, for term scans (postmenstrual age of 37-41 weeks), tract-based spatial statistics analysis showed a significant relationship between both motor and cognitive scores with fractional anisotropy in the corpus callosum and corticospinal tracts (P < .05, corrected). Tract-based spatial statistics in a limited subset of neonates (n = 22) scanned at <30 weeks did not significantly predict neurodevelopmental outcomes. The strength of the association between fractional anisotropy values and neurodevelopmental outcome scores increased from early-to-late-acquired scans in preterm-born neonates, consistent with brain dysmaturation in this population. © 2015 by American Journal of Neuroradiology.

  7. A follow-up power analysis of the statistical tests used in the Journal of Research in Science Teaching

    NASA Astrophysics Data System (ADS)

    Woolley, Thomas W.; Dawson, George O.

    It has been two decades since the first power analysis of a psychological journal and 10 years since the Journal of Research in Science Teaching made its contribution to this debate. One purpose of this article is to investigate what power-related changes, if any, have occurred in science education research over the past decade as a result of the earlier survey. In addition, previous recommendations are expanded and expounded upon within the context of more recent work in this area. The absence of any consistent mode of presenting statistical results, as well as little change with regard to power-related issues are reported. Guidelines for reporting the minimal amount of information demanded for clear and independent evaluation of research results by readers are also proposed.

  8. Statistical summary and trend evaluation of air quality data for Cleveland, Ohio in 1967 to 1971: Total suspended particulate, nitrogen dioxide, and sulfur dioxide

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Sidik, S. M.; Burr, J. C., Jr.

    1972-01-01

    Air quality data for Cleveland, Ohio, for the period 1967 to 1971 were collated and subjected to statistical analysis. The total suspended particulate component is lognormally distributed, and sulfur dioxide and nitrogen dioxide are also reasonably approximated by lognormal distributions. Only sulfur dioxide, in some residential neighborhoods, meets Ohio air quality standards. Air quality has definitely improved in the industrial valley, while in the rest of the city only sulfur dioxide has shown consistent improvement. A pollution index is introduced which displays directly the degree to which the environmental air conforms to mandated standards.

  9. Stokes-correlometry of polarization-inhomogeneous objects

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Dubolazov, A.; Bodnar, G. B.; Bachynskiy, V. T.; Vanchulyak, O.

    2018-01-01

    The paper consists of two parts. The first part presents the short theoretical basics of the Stokes-correlometry method for describing the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the modulus (MSV) and phase (PhSV) of the complex Stokes vector of skeletal muscle tissue are provided. The values and ranges of change of the 1st-4th order statistical moments, which characterize the distributions of MSV and PhSV values, are defined. The second part presents the data of a statistical analysis of the distributions of MSV and PhSV. Objective criteria for the differentiation of samples with urinary incontinence are defined.

  10. Network meta-analysis: application and practice using Stata.

    PubMed

    Shim, Sungryul; Yoon, Byung-Ho; Shin, In-Soo; Bae, Jong-Myon

    2017-01-01

    This review aimed to organize the concepts of network meta-analysis (NMA) and to demonstrate the analytical process of NMA using Stata software under a frequentist framework. An NMA seeks to synthesize evidence for decision making by evaluating the comparative effectiveness of more than two alternative interventions for the same condition. Before conducting an NMA, three major assumptions (similarity, transitivity, and consistency) should be checked. The statistical analysis consists of five steps. The first step is to draw a network geometry to provide an overview of the network relationships. The second step checks the assumption of consistency. The third step is to make the network forest plot or interval plot in order to illustrate the summary size of comparative effectiveness among the various interventions. The fourth step calculates cumulative rankings for identifying superiority among the interventions. The last step evaluates publication bias or effect modifiers for a valid inference from the results. The evidence synthesized through these five steps can be very useful for evidence-based decision making in healthcare; thus, NMA should be promoted in order to guarantee the quality of the healthcare system.
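    The consistency check in the second step can be illustrated with the Bucher method on a single treatment loop: the indirect estimate of A versus C (through B) is compared with the direct estimate. The sketch below uses hypothetical log odds ratios and variances, not output from any Stata command.

```python
import math

def bucher(direct_ac, var_ac, d_ab, var_ab, d_bc, var_bc):
    """Bucher consistency check on a closed A-B-C loop (log scale)."""
    indirect_ac = d_ab + d_bc            # indirect A vs C through B
    var_indirect = var_ab + var_bc       # variances add for the sum
    diff = direct_ac - indirect_ac       # inconsistency factor
    z = diff / math.sqrt(var_ac + var_indirect)
    return diff, z

# Hypothetical log odds ratios and variances for the three comparisons
diff, z = bucher(direct_ac=-0.50, var_ac=0.04,
                 d_ab=-0.30, var_ab=0.02, d_bc=-0.10, var_bc=0.03)
print(f"inconsistency = {diff:.2f}, z = {z:.2f}")  # |z| > 1.96 flags inconsistency
```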

  11. Statistical significance of trace evidence matches using independent physicochemical measurements

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George

    1997-02-01

    A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to,' etc. By performing physical and chemical measurements which are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when an object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and to provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created, and the data in question can be evaluated against these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons is also discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficiently general population databases containing the needed physicochemical measurements, and the potential for confusion arising from statistical analysis, currently hamper this approach, and ways of overcoming these obstacles are presented.
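    As a sketch of the chemometric clustering step described above, the following example standardizes hypothetical refractive index and elemental measurements for four glass fragments and groups them hierarchically; the values and the two-group cut are illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# columns: refractive index, Mg, Al, Ca, Fe (hypothetical units)
fragments = np.array([
    [1.5180, 2.1, 0.8, 8.5, 0.30],
    [1.5181, 2.0, 0.9, 8.4, 0.31],   # likely same source as fragment 1
    [1.5230, 3.5, 1.6, 7.1, 0.90],
    [1.5229, 3.6, 1.5, 7.0, 0.88],   # likely same source as fragment 3
])

standardized = zscore(fragments, axis=0)      # weight variables equally
tree = linkage(standardized, method="ward")   # hierarchical clustering
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2 2]: two groups, consistent with two sources
```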

  12. Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle

    NASA Astrophysics Data System (ADS)

    Huh, Lynn

    The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and well-used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation and other approaches have been used; however, these methods generally depend on assumed physical models, assumed statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and does not have any convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well-known gyro and accelerometer specifications, and could be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for the additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blended GPS and INS accuracy were used to select valid trajectories for statistical analysis. The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics, or by interrogation of the pre-flight aerodynamic database. After the application of the proposed method to the case of the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and those from the post-flight analysis, 2) the pre-flight estimate of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the types of distributions of the states from the Monte Carlo simulation were affected by those of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis are in closest agreement is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
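    A minimal sketch of the core idea, with hypothetical sensor specifications: draw accelerometer bias and noise from the specification, integrate each perturbed record, and read the uncertainty directly off the empirical distribution of the Monte Carlo outcomes.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n_steps, n_mc = 0.01, 5000, 2000          # 50 s trajectory, 2000 samples
true_accel = np.full(n_steps, 2.0)            # m/s^2, hypothetical profile

bias_sigma = 0.005     # m/s^2, hypothetical accelerometer bias spec
noise_sigma = 0.02     # m/s^2 per sample, hypothetical noise spec

final_velocities = np.empty(n_mc)
for i in range(n_mc):
    bias = rng.normal(0.0, bias_sigma)                  # one bias per run
    noise = rng.normal(0.0, noise_sigma, n_steps)       # per-sample noise
    measured = true_accel + bias + noise
    final_velocities[i] = np.sum(measured) * dt         # Euler integration

lo, hi = np.percentile(final_velocities, [2.5, 97.5])
print(f"final velocity: {final_velocities.mean():.3f} m/s, "
      f"95% interval [{lo:.3f}, {hi:.3f}]")
```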

  13. Disconcordance in Statistical Models of Bisphenol A and Chronic Disease Outcomes in NHANES 2003-08

    PubMed Central

    Casey, Martin F.; Neidell, Matthew

    2013-01-01

    Background: Bisphenol A (BPA), a high-production chemical commonly found in plastics, has drawn great attention from researchers due to the substance's potential toxicity. Using data from three National Health and Nutrition Examination Survey (NHANES) cycles, we explored the consistency and robustness of BPA's reported effects on coronary heart disease and diabetes. Methods and Findings: We report the use of three different statistical models in the analysis of BPA: (1) logistic regression, (2) log-linear regression, and (3) dose-response logistic regression. In each variation, confounders were added in six blocks to account for demographics, urinary creatinine, source of BPA exposure, healthy behaviours, and phthalate exposure. Results were sensitive to the variations in functional form of our statistical models, but no single model yielded consistent results across NHANES cycles. Reported ORs were also found to be sensitive to inclusion/exclusion criteria. Further, observed effects, which were most pronounced in NHANES 2003-04, could not be explained away by confounding. Conclusions: Limitations in the NHANES data and a poor understanding of the mode of action of BPA have made it difficult to develop informative statistical models. Given the sensitivity of effect estimates to functional form, researchers should report results using multiple specifications with different assumptions about BPA measurement, thus allowing for the identification of potential discrepancies in the data. PMID:24223205
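    The following sketch illustrates the kind of specification sensitivity examined above: a logistic model of a binary outcome on log-BPA, fit with and without a confounder, on synthetic data. Variable names, effect sizes, and the single confounder block are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
log_bpa = rng.normal(0.5, 1.0, n)             # log urinary BPA (synthetic)
creatinine = rng.normal(0.0, 1.0, n)          # confounder block member
logit_p = -2.0 + 0.10 * log_bpa + 0.30 * creatinine
disease = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Compare the exposure odds ratio across model specifications
for label, covariates in [("crude", [log_bpa]),
                          ("+ creatinine", [log_bpa, creatinine])]:
    X = sm.add_constant(np.column_stack(covariates))
    fit = sm.Logit(disease, X).fit(disp=0)
    print(f"{label}: OR = {np.exp(fit.params[1]):.3f}")
```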

  14. Observational Word Learning: Beyond Propose-But-Verify and Associative Bean Counting.

    PubMed

    Roembke, Tanja; McMurray, Bob

    2016-04-01

    Learning new words is difficult. In any naming situation, there are multiple possible interpretations of a novel word. Recent approaches suggest that learners may solve this problem by tracking co-occurrence statistics between words and referents across multiple naming situations (e.g. Yu & Smith, 2007), overcoming the ambiguity in any one situation. Yet, there remains debate around the underlying mechanisms. We conducted two experiments in which learners acquired eight word-object mappings using cross-situational statistics while eye-movements were tracked. These addressed four unresolved questions regarding the learning mechanism. First, eye-movements during learning showed evidence that listeners maintain multiple hypotheses for a given word and bring them all to bear in the moment of naming. Second, trial-by-trial analyses of accuracy suggested that listeners accumulate continuous statistics about word/object mappings, over and above prior hypotheses they have about a word. Third, consistent, probabilistic context can impede learning, as false associations between words and highly co-occurring referents are formed. Finally, a number of factors not previously considered in prior analysis impact observational word learning: knowledge of the foils, spatial consistency of the target object, and the number of trials between presentations of the same word. This evidence suggests that observational word learning may derive from a combination of gradual statistical or associative learning mechanisms and more rapid real-time processes such as competition, mutual exclusivity and even inference or hypothesis testing.
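    The co-occurrence-tracking account can be made concrete with a toy simulation, sketched below under invented trial parameters: on each ambiguous trial, the learner credits every word-referent pairing present and later guesses each word's highest-count referent.

```python
import random
from collections import defaultdict

random.seed(3)
lexicon = {f"word{i}": f"object{i}" for i in range(8)}  # true mappings
words = list(lexicon)

counts = defaultdict(lambda: defaultdict(int))   # counts[word][referent]
for _ in range(200):                             # 200 ambiguous trials
    shown = random.sample(words, 2)              # two words, two referents
    referents = [lexicon[w] for w in shown]
    for w in shown:                              # no within-trial labels:
        for r in referents:                      # credit every pairing
            counts[w][r] += 1

correct = sum(max(counts[w], key=counts[w].get) == lexicon[w]
              for w in words)
print(f"{correct}/{len(words)} words mapped correctly")
```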

  15. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada

    PubMed Central

    Smylie, Janet; Firestone, Michelle

    2015-01-01

    Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is, however, a double standard with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous-specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges, including misclassification errors and non-response bias, systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous-specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations. PMID:26793283

  16. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
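    A minimal sketch of the final testing step, assuming the repeated measures have already been transformed and reduced to standardized coefficient contrasts z_1..z_n: the adaptive Neyman statistic scans partial sums of (z_i^2 - 1). The z-values below are synthetic, and the log-log renormalization used for asymptotic null quantiles is omitted for brevity.

```python
import numpy as np

def adaptive_neyman(z: np.ndarray) -> float:
    """T = max over m of sum_{i<=m} (z_i**2 - 1) / sqrt(2m)."""
    cumulative = np.cumsum(z ** 2 - 1.0)
    m = np.arange(1, len(z) + 1)
    return float(np.max(cumulative / np.sqrt(2.0 * m)))

rng = np.random.default_rng(11)
z_null = rng.normal(0, 1, 64)                   # no group difference
z_alt = z_null.copy()
z_alt[:4] += 2.5                                # signal in low frequencies
print(f"T(null) = {adaptive_neyman(z_null):.2f}")
print(f"T(alt)  = {adaptive_neyman(z_alt):.2f}  # larger under signal")
```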

  17. 3Drefine: an interactive web server for efficient protein structure refinement.

    PubMed

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-07-08

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization on the optimized model, using composite physics- and knowledge-based force fields, for efficient protein structure refinement. The method has been extensively evaluated in blind CASP experiments as well as on large-scale and diverse benchmark datasets, and it exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through a text or file input submission, email notification, and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Consistency of the Planck CMB data and ΛCDM cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Hazra, Dhiraj Kumar, E-mail: shafieloo@kasi.re.kr, E-mail: dhiraj.kumar.hazra@apc.univ-paris7.fr

    We test the consistency between the Planck temperature and polarization power spectra and the concordance model of Λ Cold Dark Matter cosmology (ΛCDM) within the framework of Crossing statistics. We find that the Planck TT best-fit ΛCDM power spectrum is completely consistent with the EE power spectrum data, while the EE best-fit ΛCDM power spectrum is not consistent with the TT data. However, this does not point to any systematic or model-data discrepancy, since in the Planck EE data the uncertainties are much larger than in the TT data. We also investigate the possibility of any deviation from the ΛCDM model by analyzing the Planck 2015 data. Results from the TT, TE and EE data analysis indicate that no deviation is required beyond the flexibility of the concordance ΛCDM model. Our analysis thus rules out any strong evidence for physics beyond the concordance model in the Planck spectra. We also report a mild amplitude difference between the temperature and polarization data, where the temperature data seem to have slightly lower amplitude than expected (consistently at all multipoles), as we assume both temperature and polarization data are realizations of the same underlying cosmology.

  19. An Econometric Model for Estimating IQ Scores and Environmental Influences on the Pattern of IQ Scores Over Time.

    ERIC Educational Resources Information Center

    Kadane, Joseph B.; And Others

    This paper offers a preliminary analysis of the effects of a semi-segregated school system on the IQs of its students. The basic data consist of IQ scores for fourth, sixth, and eighth grades and associated environmental data obtained from their school records. A statistical model is developed to analyze longitudinal data when both process error…

  20. The Outlook for Technological Change and Employment. Technology and the American Economy, Appendix Volume I.

    ERIC Educational Resources Information Center

    National Commission on Technology, Automation and Economic Progress, Washington, DC.

    Findings of a study of the nation's manpower requirements to 1975 are presented. Part I, on the employment outlook, consists of a 10-year projection of manpower requirements by occupation and by industry prepared by the Bureau of Labor Statistics and an analysis of the growth prospects and the state of fiscal policy in the United States economy as…

  1. Ion Channel Conductance Measurements on a Silicon-Based Platform

    DTIC Science & Technology

    2006-01-01

    calculated using the molecular dynamics code, GROMACS. Reasonable agreement is obtained in the simulated versus measured conductance over the range of...measurements of the lipid giga-seal characteristics have been performed, including AC conductance measurements and statistical analysis in order to...Dynamics kernel self-consistently coupled to Poisson equations using a P3M force field scheme and the GROMACS description of protein structure and

  2. The role of strategic forest inventories in aiding land management decision-making: Examples from the U.S

    Treesearch

    W. Keith Moser; Renate Bush; John D. Shaw; Mark H. Hansen; Mark D. Nelson

    2010-01-01

    A major challenge for today’s resource managers is the linking of stand- and landscape-scale dynamics. The U.S. Forest Service has made major investments in programs at both the stand- (national forest project) and landscape/regional (Forest Inventory and Analysis [FIA] program) levels. FIA produces the only comprehensive and consistent statistical information on the...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallerman, G.; Gray, R.J.

    An instrument for crushing-strength determinations of uncoated and pyrolytic-carbon-coated fuel particles (50 to 500 μ in diameter) was developed to relate the crushing strength of the particles to their fabricability. The instrument consists of a loading mechanism, load cell, and a power supply-readout unit. The information that can be obtained by statistical methods of data analysis is illustrated by results on two batches of fuel particles. (auth)

  4. An assessment of training needs for the lumber manufacturing industry in the eastern United States

    Treesearch

    Joseph Denig; Scott Page; Yuhua Su; Karen Martinson

    2008-01-01

    A training needs assessment of the primary forest products industry was conducted for 33 eastern states. This publication presents in detail the statistical analysis of the study. Of the 2,570 lumber manufacturing companies, consisting of firms with more than six employees under the U.S. Department of Labor Standard Industrial Classification Code 2421, the response rate...

  5. Statistical software applications used in health services research: analysis of published studies in the U.S

    PubMed Central

    2011-01-01

    Background: This study aims to identify the statistical software applications most commonly employed for data analysis in health services research (HSR) studies in the U.S. The study also examines the extent to which information describing the specific analytical software utilized is provided in published articles reporting on HSR studies. Methods: Data were extracted from a sample of 1,139 articles (including 877 original research articles) published between 2007 and 2009 in three U.S. HSR journals that were considered to be representative of the field based upon a set of selection criteria. Descriptive analyses were conducted to categorize patterns in statistical software usage in those articles. The data were stratified by calendar year to detect trends in software use over time. Results: Only 61.0% of original research articles in prominent U.S. HSR journals identified the particular type of statistical software application used for data analysis. Stata and SAS were overwhelmingly the most commonly employed software applications (used in 46.0% and 42.6% of articles, respectively). However, SAS use grew considerably during the study period compared to other applications. Stratification of the data revealed that the type of statistical software used varied considerably by whether authors were from the U.S. or from other countries. Conclusions: The findings highlight a need for HSR investigators to identify more consistently the specific analytical software used in their studies. Knowing that information can be important, because different software packages might produce varying results, owing to differences in the software's underlying estimation methods. PMID:21977990

  6. Development of a funding, cost, and spending model for satellite projects

    NASA Technical Reports Server (NTRS)

    Johnson, Jesse P.

    1989-01-01

    The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort to extend the modeling capabilities from total budget analysis to total budget and budget outlays over time analysis was conducted. A statistically based and data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model which would describe the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989 dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by Life Cycle Cost Models (LCCMs). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule which relates the amount of money spent to the percentage of project completion.
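    A minimal sketch of the fitting step, assuming the transformation to percentage-of-budget versus percentage-of-schedule space has already been made: a two-parameter Weibull cumulative curve is fit to hypothetical cumulative outlays by nonlinear regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(t, shape, scale):
    """Weibull cumulative spending curve in normalized time."""
    return 1.0 - np.exp(-(t / scale) ** shape)

# Fraction of schedule elapsed vs cumulative fraction of budget spent
# (hypothetical outlay profile for one project)
t_frac = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
spent = np.array([0.02, 0.08, 0.18, 0.32, 0.48, 0.63, 0.76, 0.87, 0.94, 0.98])

(shape, scale), _ = curve_fit(weibull_cdf, t_frac, spent, p0=(2.0, 0.5))
print(f"shape = {shape:.2f}, scale = {scale:.2f}")
print(f"predicted spend at 55% schedule: {weibull_cdf(0.55, shape, scale):.2%}")
```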

  7. Thermal Dissociation and Roaming Isomerization of Nitromethane: Experiment and Theory.

    PubMed

    Annesley, Christopher J; Randazzo, John B; Klippenstein, Stephen J; Harding, Lawrence B; Jasper, Ahren W; Georgievskii, Yuri; Ruscic, Branko; Tranter, Robert S

    2015-07-16

    The thermal decomposition of nitromethane provides a classic example of the competition between roaming mediated isomerization and simple bond fission. A recent theoretical analysis suggests that as the pressure is increased from 2 to 200 Torr the product distribution undergoes a sharp transition from roaming dominated to bond-fission dominated. Laser schlieren densitometry is used to explore the variation in the effect of roaming on the density gradients for CH3NO2 decomposition in a shock tube for pressures of 30, 60, and 120 Torr at temperatures ranging from 1200 to 1860 K. A complementary theoretical analysis provides a novel exploration of the effects of roaming on the thermal decomposition kinetics. The analysis focuses on the roaming dynamics in a reduced dimensional space consisting of the rigid-body motions of the CH3 and NO2 radicals. A high-level reduced-dimensionality potential energy surface is developed from fits to large-scale multireference ab initio calculations. Rigid body trajectory simulations coupled with master equation kinetics calculations provide high-level a priori predictions for the thermal branching between roaming and dissociation. A statistical model provides a qualitative/semiquantitative interpretation of the results. Modeling efforts explore the relation between the predicted roaming branching and the observed gradients. Overall, the experiments are found to be fairly consistent with the theoretically proposed branching ratio, but they are also consistent with a no-roaming scenario and the underlying reasons are discussed. The theoretical predictions are also compared with prior theoretical predictions, with a related statistical model, and with the extant experimental data for the decomposition of CH3NO2, and for the reaction of CH3 with NO2.

  8. Physical activity and risk of pancreatic cancer: a systematic review and meta-analysis.

    PubMed

    Behrens, Gundula; Jochem, Carmen; Schmid, Daniela; Keimling, Marlen; Ricci, Cristian; Leitzmann, Michael F

    2015-04-01

    Physical activity may prevent pancreatic cancer by regulating body weight and decreasing insulin resistance, DNA damage, and chronic inflammation. Previous meta-analyses found inconsistent evidence for a protective effect of physical activity on pancreatic cancer but those studies did not investigate whether the association between physical activity and pancreatic cancer varies by smoking status, body mass index (BMI), or level of consistency of physical activity over time. To address these issues, we conducted an updated meta-analysis following the PRISMA guidelines among 30 distinct studies with a total of 10,501 pancreatic cancer cases. Random effects meta-analysis of cohort studies revealed a weak, statistically significant reduction in pancreatic cancer risk for high versus low levels of physical activity (relative risk (RR) 0.93, 95 % confidence interval (CI) 0.88-0.98). By comparison, case-control studies yielded a stronger, statistically significant risk reduction (RR 0.78, 95 % CI 0.66-0.94; p-difference by study design = 0.07). When focusing on cohort studies, physical activity summary risk estimates appeared to be more pronounced for consistent physical activity over time (RR 0.86, 95 % CI 0.76-0.97) than for recent past physical activity (RR 0.95, 95 % CI 0.90-1.01) or distant past physical activity (RR 0.95, 95 % CI 0.79-1.15, p-difference by timing in life of physical activity = 0.36). Physical activity summary risk estimates did not differ by smoking status or BMI. In conclusion, physical activity is not strongly associated with pancreatic cancer risk, and the relation is not modified by smoking status or BMI level. While overall findings were weak, we did find some suggestion of potential pancreatic cancer risk reduction with consistent physical activity over time.
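    The random-effects pooling underlying such summary estimates can be sketched in a few lines using the DerSimonian-Laird estimator; the six study-level relative risks and variances below are hypothetical, not the studies in this meta-analysis.

```python
import numpy as np

# Hypothetical study-level log relative risks and their variances
log_rr = np.log(np.array([0.90, 0.95, 0.85, 1.02, 0.88, 0.93]))
var = np.array([0.010, 0.008, 0.020, 0.015, 0.012, 0.009])

w_fixed = 1.0 / var
q = np.sum(w_fixed * (log_rr - np.average(log_rr, weights=w_fixed)) ** 2)
df = len(log_rr) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                 # between-study variance

w_random = 1.0 / (var + tau2)                 # DerSimonian-Laird weights
pooled = np.average(log_rr, weights=w_random)
se = np.sqrt(1.0 / np.sum(w_random))
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.2f}-{np.exp(pooled + 1.96 * se):.2f})")
```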

  9. Validation of a proposal for evaluating hospital infection control programs.

    PubMed

    Silva, Cristiane Pavanello Rodrigues; Lacerda, Rúbia Aparecida

    2011-02-01

    To validate the construct and discriminant properties of a hospital infection prevention and control program. The program consisted of four indicators: technical-operational structure; operational prevention and control guidelines; epidemiological surveillance system; and prevention and control activities. These indicators, with previously validated content, were applied to 50 healthcare institutions in the city of São Paulo, Southeastern Brazil, in 2009. Descriptive statistics were used to characterize the hospitals and indicator scores, and Cronbach's α coefficient was used to evaluate the internal consistency. The discriminant validity was analyzed by comparing indicator scores between groups of hospitals: with versus without quality certification. The construct validity analysis was based on exploratory factor analysis with a tetrachoric correlation matrix. The indicators for the technical-operational structure and epidemiological surveillance presented almost 100% conformity in the whole sample. The indicators for the operational prevention and control guidelines and the prevention and control activities presented internal consistency ranging from 0.67 to 0.80. The discriminant validity of these indicators indicated higher and statistically significant mean conformity scores among the group of institutions with healthcare certification or accreditation processes. In the construct validation, two dimensions were identified for the operational prevention and control guidelines: recommendations for preventing hospital infection and recommendations for standardizing prophylaxis procedures, with good correlation between the analysis units that formed the guidelines. The same was found for the prevention and control activities: interfaces with treatment units and support units were identified. Validation of the measurement properties of the hospital infection prevention and control program indicators made it possible to develop a tool for evaluating these programs in an ethical and scientific manner in order to obtain a quality diagnosis in this field.
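    The internal-consistency figures quoted above are Cronbach's α, which is straightforward to compute directly; the sketch below does so on synthetic binary conformity items for hypothetical hospitals.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(5)
ability = rng.normal(0, 1, 50)                   # 50 hypothetical hospitals
# Correlated binary items: each driven by the shared latent score
items = (ability[:, None] + rng.normal(0, 1, (50, 8)) > 0).astype(float)
print(f"alpha = {cronbach_alpha(items):.2f}")
```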

  10. Landau's statistical mechanics for quasi-particle models

    NASA Astrophysics Data System (ADS)

    Bannur, Vishnu M.

    2014-04-01

    Landau's formalism of statistical mechanics [following L. D. Landau and E. M. Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1980)] is applied to the quasi-particle model of quark-gluon plasma. Here, one starts from the expression for pressure and develops all of thermodynamics. It is a general formalism and is consistent with our earlier studies [V. M. Bannur, Phys. Lett. B647, 271 (2007)] based on Pathria's formalism [following R. K. Pathria, Statistical Mechanics (Butterworth-Heinemann, Oxford, 1977)]. In Pathria's formalism, one starts from the expression for energy density and develops thermodynamics. Both formalisms are consistent with thermodynamics and statistical mechanics. Under certain conditions, which are wrongly called the thermodynamic consistency relation, we recover other formalisms of quasi-particle systems, like that in M. I. Gorenstein and S. N. Yang, Phys. Rev. D52, 5206 (1995), widely studied in quark-gluon plasma.

  11. Cytomorphometric analysis of oral buccal mucosal smears in tobacco and arecanut chewers who abused with and without betel leaf.

    PubMed

    Noufal, Ahammed; George, Antony; Jose, Maji; Khader, Mohasin Abdul; Jayapalan, Cheriyanthal Sisupalan

    2014-01-01

    Tobacco in any form (smoking or chewing), arecanut chewing, and alcohol are considered to be the major extrinsic etiological factors for potentially malignant disorders of the oral cavity and for squamous cell carcinoma, the most common oral malignancy in India. An increase in nuclear diameter (ND) and nucleus-cell ratio (NCR) with a reduction in cell diameter (CD) are early cytological indicators of dysplastic change. The authors sought to identify cytomorphometric changes in ND, CD, and NCR of oral buccal cells in tobacco and arecanut chewers who chewed with or without betel leaf. Participants represented 3 groups. Group I consisted of 30 individuals who chewed tobacco and arecanut with betel leaf (BQT chewers). Group II consisted of 30 individuals who chewed tobacco and arecanut without betel leaf (Gutka chewers). Group III comprised 30 apparently healthy nonabusers. Cytological smears were prepared and stained with modified Papanicolaou stain. Comparisons between Groups I and II and Groups II and III showed that ND was increased, with P values of .054 and .008, respectively, whereas a comparison of Groups I and III showed no statistical significance. Comparisons between Groups I and II and Groups II and III showed that CD was significantly reduced, with P values of .037 and <.000, respectively, whereas a comparison of Groups I and III showed no statistical significance. Comparisons between Groups I and II and Groups II and III showed that NCR was significantly increased, with P values of <.000, whereas a comparison of Groups I and III showed no statistical significance. CD, ND, and NCR showed statistically significant changes in Group II in comparison with Group I, which could indicate a greater and earlier risk of carcinoma for Gutka chewers than for BQT chewers.

  12. A perceptual space of local image statistics.

    PubMed

    Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M

    2015-12-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
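    The quadratic interaction rule described above has a compact computational form: an elliptical isodiscrimination contour x'Qx = 1 built from the per-axis thresholds and a pairwise interaction term predicts the threshold in any mixed direction as 1/sqrt(u'Qu). The numbers in the sketch below are illustrative, not fitted values from the study.

```python
import numpy as np

t1, t2 = 0.20, 0.35          # hypothetical thresholds on two statistic axes
rho = -0.3                   # hypothetical pairwise interaction term
Q = np.array([[1 / t1**2, rho / (t1 * t2)],
              [rho / (t1 * t2), 1 / t2**2]])   # quadratic form for the ellipse

for deg in (0, 45, 90):
    theta = np.radians(deg)
    u = np.array([np.cos(theta), np.sin(theta)])   # unit mixture direction
    t = 1.0 / np.sqrt(u @ Q @ u)                   # predicted threshold
    print(f"{deg:3d} deg: predicted threshold = {t:.3f}")
```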

  14. Application of spatial technology in malaria research & control: some new insights.

    PubMed

    Saxena, Rekha; Nagpal, B N; Srivastava, Aruna; Gupta, S K; Dash, A P

    2009-08-01

    A Geographical Information System (GIS) has emerged as the core of spatial technology, integrating a wide range of datasets available from different sources, including Remote Sensing (RS) and the Global Positioning System (GPS). Literature published during the decade 1998-2007 has been compiled and grouped into six categories according to the usage of the technology in malaria epidemiology. Different GIS modules, such as spatial data sources, mapping and geo-processing tools, distance calculation, digital elevation models (DEM), buffer zones and geo-statistical analysis, have been investigated in detail and illustrated with examples drawn from the derived results. These GIS tools have contributed immensely to understanding the epidemiological processes of malaria, and the examples drawn show that GIS is now widely used for research and decision making in malaria control. Statistical data analysis is currently the most consistent and established set of tools for analyzing spatial datasets. The desired future development of GIS lies in the utilization of geo-statistical tools which, combined with high-quality data, have the capability to provide new insight into malaria epidemiology and the complexity of its transmission potential in endemic areas.
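    As an illustration of the buffer-zone module mentioned above, the sketch below computes great-circle distances from households to a breeding site and flags those inside a 500 m buffer; the coordinates and radius are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres via the haversine formula."""
    r = 6371000.0                                 # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

site = (28.6139, 77.2090)                         # hypothetical breeding site
households = {"H1": (28.6150, 77.2095), "H2": (28.6300, 77.2200)}
for name, (lat, lon) in households.items():
    d = haversine_m(*site, lat, lon)
    print(f"{name}: {d:7.0f} m {'inside' if d <= 500 else 'outside'} buffer")
```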

  15. Effects of perceived parental attitudes on children's views of smoking.

    PubMed

    Ozturk, Candan; Kahraman, Seniha; Bektas, Murat

    2013-01-01

    The aim of this study was to examine the effects of perceived parental attitudes on children's views of smoking. The study sample consisted of 250 children attending grades 6, 7 and 8. Data were collected via a socio-demographic questionnaire, the Parental Attitude Scale (PAS) and the Decisional Balance Scale (DBS). Data analysis covered percentages, medians, one-way analysis of variance (ANOVA) and post-hoc tests, using a statistical package. Of the 250 participants, 117 were male and 133 were female. The mean age was 13.1 ± 0.98 years for the females and 13.3 ± 0.88 years for the males. A statistically significant difference was found in the children's mean scores on the 'pros' subscale of the Decisional Balance Scale (DBS) according to perceived parental attitudes (F=3.172, p=0.025). There were no statistically significant differences in the DBS 'cons' subscale scores by perceived parental attitudes. It was determined that while perceived parental attitudes affect children's views on the advantages of smoking, they have no effect on children's views on its disadvantages.

  16. The relationship between knowledge of leadership and knowledge management practices in the food industry in Kurdistan province, Iran.

    PubMed

    Jad, Seyyed Mohammad Moosavi; Geravandi, Sahar; Mohammadi, Mohammad Javad; Alizadeh, Rashin; Sarvarian, Mohammad; Rastegarimehr, Babak; Afkar, Abolhasan; Yari, Ahmad Reza; Momtazan, Mahboobeh; Valipour, Aliasghar; Mahboubi, Mohammad; Karimyan, Azimeh; Mazraehkar, Alireza; Nejad, Ali Soleimani; Mohammadi, Hafez

    2017-12-01

    The aim of this study was to identify the relationship between knowledge-oriented leadership and knowledge management practices. In terms of its quantitative approach, procedure and data collection, the research is descriptive and correlational. The statistical population consisted of all employees of a food industry in Kurdistan province of Iran who were employed in 2016, about 1800 people in total. A sample of 316 employees of the Kurdistan food industry (Kurdistan FI) was selected using the Cochran formula; sampling was non-random, and valid (standard) questions were used for measurement. Reliability and validity were confirmed. Statistical analysis of the data was carried out using SPSS 16. The statistical analysis of the collected data showed a relationship between knowledge-oriented leadership and knowledge management activities as mediator variables. The results of the data and hypothesis tests suggest that knowledge management activities play an important role in the performance of product innovation, and that the knowledge management activities (knowledge transfer, knowledge storage, application of knowledge, creation of knowledge) affect the performance of product innovation.

  17. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    PubMed

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  18. Statistical monitoring of data quality and consistency in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial.

    PubMed

    Timmermans, Catherine; Doffagne, Erik; Venet, David; Desmet, Lieven; Legrand, Catherine; Burzykowski, Tomasz; Buyse, Marc

    2016-01-01

    Data quality may impact the outcome of clinical trials; hence, there is a need to implement quality control strategies for the data collected. Traditional approaches to quality control have primarily used source data verification during on-site monitoring visits, but these approaches are hugely expensive as well as ineffective. There is growing interest in central statistical monitoring (CSM) as an effective way to ensure data quality and consistency in multicenter clinical trials. CSM with SMART™ uses advanced statistical tools that help identify centers with atypical data patterns which might be the sign of an underlying quality issue. This approach was used to assess the quality and consistency of the data collected in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, involving 1495 patients across 232 centers in Japan. In the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, very few atypical data patterns were found among the participating centers, and none of these patterns were deemed to be related to a quality issue that could significantly affect the outcome of the trial. CSM can be used to provide a check of the quality of the data from completed multicenter clinical trials before analysis, publication, and submission of the results to regulatory agencies. It can also form the basis of a risk-based monitoring strategy in ongoing multicenter trials. CSM aims at improving data quality in clinical trials while also reducing monitoring costs.

  19. Factor structure and diagnostic efficiency of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria for avoidant personality disorder in Hispanic men and women with substance use disorders.

    PubMed

    Becker, Daniel F; Añez, Luis Miguel; Paris, Manuel; Bedregal, Luis; Grilo, Carlos M

    2009-01-01

    This study examined the internal consistency, factor structure, and diagnostic efficiency of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), criteria for avoidant personality disorder (AVPD) and the extent to which these metrics may be affected by sex. Subjects were 130 monolingual Hispanic adults (90 men, 40 women) who had been admitted to a specialty clinic that provides psychiatric and substance abuse services to Spanish-speaking patients. All were reliably assessed with the Spanish-Language Version of the Diagnostic Interview for DSM-IV Personality Disorders. The AVPD diagnosis was determined by the best-estimate method. After evaluating internal consistency of the AVPD criterion set, an exploratory factor analysis was performed using principal components extraction. Afterward, diagnostic efficiency indices were calculated for all AVPD criteria. Subsequent analyses examined men and women separately. For the overall group, internal consistency of AVPD criteria was good. Exploratory factor analysis revealed a 1-factor solution (accounting for 70% of the variance), supporting the unidimensionality of the AVPD criterion set. The best inclusion criterion was "reluctance to take risks," whereas "interpersonally inhibited" was the best exclusion criterion and the best predictor overall. When men and women were examined separately, similar results were obtained for both internal consistency and factor structure, with slight variations noted between sexes in the patterning of diagnostic efficiency indices. These psychometric findings, which were similar for men and women, support the construct validity of the DSM-IV criteria for AVPD and may also have implications for the treatment of this particular clinical population.

  20. Non-Immunogenic Structurally and Biologically Intact Tissue Matrix Grafts for the Immediate Repair of Ballistic-Induced Vascular and Nerve Tissue Injury in Combat Casualty Care

    DTIC Science & Technology

    2005-07-01

    as an access graft is addressed using statistical methods below. Graft consistency can be defined statistically as the variance associated with the sample of grafts tested in... measured using a refractometer (Brix % method). The equilibration data are shown in Graph 1. The results suggest the following equilibration scheme: 40% v/v

  1. Analysis of the Einstein sample of early-type galaxies

    NASA Technical Reports Server (NTRS)

    Eskridge, Paul B.; Fabbiano, Giuseppina

    1993-01-01

    The EINSTEIN galaxy catalog contains x-ray data for 148 early-type (E and S0) galaxies. A detailed analysis of the global properties of this sample is presented. By comparing the x-ray properties with other tracers of the ISM, as well as with observables related to the stellar dynamics and populations of the sample, we expect to determine more clearly the physical relationships that determine the evolution of early-type galaxies. Previous studies with smaller samples have explored the relationships between x-ray luminosity (L(sub X)) and luminosities in other bands. Using our larger sample and the statistical techniques of survival analysis, a number of these earlier analyses were repeated. For our full sample, a strong statistical correlation is found between L(sub X) and L(sub B) (the probability that the null hypothesis is upheld is P less than 10(exp -4) from a variety of rank correlation tests). Regressions with several algorithms yield consistent results.

  2. Psychometric properties of the Portuguese version of place attachment scale for youth in residential care.

    PubMed

    Magalhães, Eunice; Calheiros, María M

    2015-01-01

    Despite significant scientific advances in the place attachment literature, no instruments exist that were specifically developed or adapted for residential care. A total of 410 adolescents (11-18 years old) participated in this study. The place attachment scale evaluates five dimensions: Place identity, Place dependence, Institutional bonding, Caregivers bonding and Friend bonding. Data analysis included descriptive statistics, content validity, construct validity (Confirmatory Factor Analysis), concurrent validity (correlations with satisfaction with life and with the institution), and reliability evidence. The relationship with individual characteristics and placement length was also examined. Content validity analysis revealed that more than half of the panellists perceived all the items as relevant for assessing the construct in residential care. The five-dimension structure showed good fit statistics, and evidence of concurrent validity was found, with significant correlations with satisfaction with life and with the institution. Acceptable values of internal consistency and specific gender differences were found. The preliminary psychometric properties of this scale suggest its potential for use with youth in care.

  3. Statistical analysis of cascading failures in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chertkov, Michael; Pfitzner, Rene; Turitsyn, Konstantin

    2010-12-01

    We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for the automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators and lines. Our model is quasi-static in the causal, discrete-time and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current (DC) description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39 and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between the average numbers of removed loads, generators and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
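
    To make the quasi-static cascade idea concrete, here is a minimal Python sketch (not the authors' code): solve a DC power flow, trip every line whose flow exceeds its capacity, and re-solve until no overloads remain. The three-bus network, injections and capacities are hypothetical, and islanding is not handled.

        import numpy as np

        def dc_flows(n, lines, P, slack=0):
            """Solve the DC power flow B*theta = P with theta[slack] = 0.
            lines: list of (i, j, reactance, capacity) tuples."""
            B = np.zeros((n, n))
            for i, j, x, _ in lines:
                B[i, i] += 1.0 / x; B[j, j] += 1.0 / x
                B[i, j] -= 1.0 / x; B[j, i] -= 1.0 / x
            keep = [k for k in range(n) if k != slack]
            theta = np.zeros(n)
            theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])
            return [(theta[i] - theta[j]) / x for i, j, x, _ in lines]

        def cascade(n, lines, P):
            """Sequentially remove overloaded lines and re-solve, in the
            quasi-static spirit of the model; returns surviving lines."""
            lines = list(lines)
            while lines:
                flows = dc_flows(n, lines, P)
                over = [k for k, f in enumerate(flows)
                        if abs(f) > lines[k][3]]
                if not over:
                    break
                lines = [ln for k, ln in enumerate(lines) if k not in over]
            return lines

        # Toy 3-bus grid: generator at bus 0, unit loads at buses 1 and 2.
        # Line (0, 2) overloads and trips; the remaining flows rebalance.
        lines = [(0, 1, 0.1, 2.5), (0, 2, 0.1, 0.9), (1, 2, 0.1, 1.5)]
        P = np.array([2.0, -1.0, -1.0])   # injections sum to zero
        print(cascade(3, lines, P))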

  4. Revised Perturbation Statistics for the Global Scale Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1975-01-01

    Magnitudes and scales of atmospheric perturbations about the monthly mean for the thermodynamic variables and wind components are presented by month at various latitudes. These perturbation statistics are a revision of the random perturbation data required for the global scale atmospheric model program and are from meteorological rocket network statistical summaries in the 22 to 65 km height range and NASA grenade and pitot tube data summaries in the region up to 90 km. The observed perturbations in the thermodynamic variables were adjusted to make them consistent with constraints required by the perfect gas law and the hydrostatic equation. Vertical scales were evaluated by Buell's depth of pressure system equation and from vertical structure function analysis. Tables of magnitudes and vertical scales are presented for each month at latitude 10, 30, 50, 70, and 90 degrees.

  5. Gene flow analysis method, the D-statistic, is robust in a wide parameter space.

    PubMed

    Zheng, Yichen; Janke, Axel

    2018-01-08

    We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, and by the size and number of loci. In addition, we examined the ability of the f-statistics to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology due to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic backgrounds. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times), but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
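
    For reference, the D-statistic is the classic ABBA-BABA site-pattern ratio, D = (nABBA - nBABA) / (nABBA + nBABA), which is expected to be zero under incomplete lineage sorting alone. A minimal sketch with toy counts (not data from the study):

        def d_statistic(patterns):
            """ABBA-BABA test on a four-taxon tree (((P1,P2),P3),O).
            patterns: iterable of site patterns such as 'ABBA' or 'BABA'.
            D = 0 is expected under incomplete lineage sorting alone;
            a significant excess of one pattern suggests gene flow."""
            n_abba = sum(p == 'ABBA' for p in patterns)
            n_baba = sum(p == 'BABA' for p in patterns)
            return (n_abba - n_baba) / (n_abba + n_baba)

        # Toy example: an excess of ABBA sites over BABA sites.
        sites = ['ABBA'] * 120 + ['BABA'] * 80 + ['AABB'] * 500
        print(d_statistic(sites))  # 0.2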

  6. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.
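
    For analytes that pass a normality check, the UCL95 in this kind of analysis is the standard one-sided Student's t bound on the mean; a minimal sketch with hypothetical triplicate measurements (the report itself follows the cited EPA guidance, which also covers non-normal cases):

        import numpy as np
        from scipy import stats

        def ucl95_normal(x):
            """One-sided upper 95% confidence limit on the mean for
            normally distributed measurements:
            UCL95 = xbar + t_{0.95, n-1} * s / sqrt(n)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            return (x.mean()
                    + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n))

        # Hypothetical triplicate results for one analyte (arbitrary units):
        print(ucl95_normal([3.1, 3.4, 2.9]))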

  7. Statistical Analysis of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed, and the results of this analysis are reported. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  8. Statistical Analysis Of Tank 5 Floor Sample Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their MDCs. The identification of distributions and the selection of UCL95 procedures generally followed the protocol in Singh, Armbya, and Singh [2010]. When all of an analyte's measurements lie below their MDCs, only a summary of the MDCs can be provided. The measurement results reported by SRNL are listed in Appendix A, and the results of this analysis are reported in Appendix B. The data were generally found to follow a normal distribution, and to be homogeneous across composite samples.

  9. Long-range correlation in cosmic microwave background radiation.

    PubMed

    Movahed, M Sadegh; Ghasemi, F; Rahvar, Sohrab; Tabar, M Reza Rahimi

    2011-08-01

    We investigate the statistical anisotropy and Gaussianity of temperature fluctuations of Cosmic Microwave Background (CMB) radiation data from the Wilkinson Microwave Anisotropy Probe survey, using the Multifractal Detrended Fluctuation Analysis, Rescaled Range, and Scaled Windowed Variance methods. Multifractal Detrended Fluctuation Analysis shows that the CMB fluctuations have a long-range correlation function with multifractal behavior. By comparing the shuffled and surrogate series of the CMB data, we conclude that the multifractal nature of the temperature fluctuations of the CMB radiation is mainly due to long-range correlations, and the map is consistent with a Gaussian distribution.
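
    As background, ordinary (monofractal) detrended fluctuation analysis is the core of the MFDFA method used here: integrate the series into a profile, detrend it in windows of size s, and read the scaling exponent off the log-log slope of the fluctuation function F(s). A minimal Python sketch (illustrative, not the authors' pipeline; MFDFA additionally varies the moment order q):

        import numpy as np

        def dfa(x, scales, order=1):
            """Detrended fluctuation analysis: for each window size s,
            detrend the cumulative profile in non-overlapping windows with
            a polynomial of the given order and average the residual
            variance. A scaling exponent above 0.5 indicates long-range
            positive correlations."""
            profile = np.cumsum(x - np.mean(x))
            F = []
            for s in scales:
                n_win = len(profile) // s
                msq = []
                for w in range(n_win):
                    seg = profile[w * s:(w + 1) * s]
                    t = np.arange(s)
                    fit = np.polyval(np.polyfit(t, seg, order), t)
                    msq.append(np.mean((seg - fit) ** 2))
                F.append(np.sqrt(np.mean(msq)))
            return np.array(F)

        # White noise should give a scaling exponent near 0.5.
        x = np.random.default_rng(0).standard_normal(4096)
        scales = np.array([16, 32, 64, 128, 256])
        F = dfa(x, scales)
        print(round(np.polyfit(np.log(scales), np.log(F), 1)[0], 2))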

  10. Auditory processing and phonological awareness skills of five-year-old children with and without musical experience.

    PubMed

    Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri

    2011-09-01

    To investigate the relations between musical experience, auditory processing and phonological awareness in groups of 5-year-old children with and without musical experience. Participants were 56 5-year-old subjects of both genders: 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and the Phonological Awareness Test, and the data were statistically analyzed. There were statistically significant differences between the groups in the sequential memory test for verbal and non-verbal sounds with four stimuli, and in the phonological awareness tasks of rhyme recognition, phonemic synthesis and phonemic deletion. Multiple binary logistic regression analysis showed that, with the exception of sequential verbal memory with four syllables, the observed differences in the subjects' performance were associated with their musical experience. Musical experience improves the auditory and metalinguistic abilities of 5-year-old children.

  11. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
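
    AutoVAR's own code is not shown here, but its core step, fitting candidate VAR models and ranking them by information criteria, can be sketched with statsmodels (the data below are hypothetical stand-ins for two EMA variables, and the calls assume statsmodels' VAR API):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Hypothetical EMA diary: two variables sampled 90 times.
        rng = np.random.default_rng(1)
        data = pd.DataFrame({
            'positive_affect': rng.standard_normal(90).cumsum(),
            'stress': rng.standard_normal(90).cumsum(),
        })
        data = data.diff().dropna()   # difference away the trend

        model = VAR(data)
        results = model.fit(maxlags=5, ic='aic')   # lag order chosen by AIC
        print(results.aic, results.bic)            # information criteria
        # Granger causality: does stress help predict positive affect?
        print(results.test_causality('positive_affect', ['stress']).summary())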

  12. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background: Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective: This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods: We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results: An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions: Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  13. Statistical Analysis of Zebrafish Locomotor Response.

    PubMed

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
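
    The two-sample Hotelling's T-squared statistic generalizes the t-test to whole locomotor profiles; a minimal sketch on random stand-in data (real VMR traces usually need binning so that the number of time points stays below the number of larvae):

        import numpy as np
        from scipy import stats

        def hotelling_t2(X, Y):
            """Two-sample Hotelling's T-squared test comparing multivariate
            activity profiles (rows = larvae, columns = time points)."""
            n1, p = X.shape
            n2, _ = Y.shape
            d = X.mean(axis=0) - Y.mean(axis=0)
            S = ((n1 - 1) * np.cov(X, rowvar=False) +
                 (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
            t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)
            f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
            pval = stats.f.sf(f, p, n1 + n2 - p - 1)
            return t2, pval

        rng = np.random.default_rng(0)
        wt_a = rng.standard_normal((30, 5))          # 30 larvae, 5 time bins
        wt_b = rng.standard_normal((30, 5)) + 0.8    # shifted profile
        print(hotelling_t2(wt_a, wt_b))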

  14. Statistical Analysis of Zebrafish Locomotor Response

    PubMed Central

    Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling’s T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling’s T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure. PMID:26437184

  15. Aerosol, a health hazard during ultrasonic scaling: A clinico-microbiological study.

    PubMed

    Singh, Akanksha; Shiva Manjunath, R G; Singla, Deepak; Bhattacharya, Hirak S; Sarkar, Arijit; Chandra, Neeraj

    2016-01-01

    Ultrasonic scaling is a routinely used treatment to remove plaque and calculus from tooth surfaces. These scalers use water as a coolant, which is splattered during the vibration of the tip. The splatter, when mixed with the patient's saliva and plaque, becomes a highly infectious aerosol and acts as a major risk factor for transmission of disease. Even with the necessary protection, the operator may sometimes become infected because of the infectious nature of the splatter. The aim was to evaluate the aerosol contamination produced during ultrasonic scaling with the help of microbiological analysis. This clinico-microbiological study consisted of twenty patients. Two agar plates were used for each patient; the first was kept at the center of the operatory room 20 min before the treatment, while the second agar plate was kept 40 cm away from the patient's chest during the treatment. Both agar plates were sent for microbiological analysis. The statistical analysis was done with the help of Stata statistical software (StataCorp LP, College Station, TX, USA), and P < 0.001 was considered statistically significant. The results for bacterial count were highly significant when compared before and during the treatment. Gram staining showed the presence of Staphylococcus and Streptococcus species in high numbers. The aerosols and splatters produced during dental procedures have the potential to spread infection to dental personnel. Therefore, proper precautions should be taken to minimize the risk of infection to the operator.

  16. Publication bias in situ.

    PubMed

    Phillips, Carl V

    2004-08-05

    Publication bias, as typically defined, refers to the decreased likelihood of studies' results being published when they are near the null, not statistically significant, or otherwise "less interesting." But choices about how to analyze the data and which results to report create a publication bias within the published results, a bias I label "publication bias in situ" (PBIS). PBIS may create much greater bias in the literature than traditionally defined publication bias (the failure to publish any result from a study). The causes of PBIS are well known, consisting of various decisions about reporting that are influenced by the data. But its impact is not generally appreciated, and very little attention is devoted to it. What attention there is consists largely of rules for statistical analysis that are impractical and do not actually reduce the bias in reported estimates. PBIS cannot be reduced by statistical tools because it is not fundamentally a problem of statistics, but rather of non-statistical choices and plain language interpretations. PBIS should be recognized as a phenomenon worthy of study - it is extremely common and probably has a huge impact on results reported in the literature - and there should be greater systematic efforts to identify and reduce it. The paper presents examples, including results of a recent HIV vaccine trial, that show how easily PBIS can have a large impact on reported results, as well as how there can be no simple answer to it. PBIS is a major problem, worthy of substantially more attention than it receives. There are ways to reduce the bias, but they are very seldom employed because they are largely unrecognized.

  17. Integrated cosmological probes: concordance quantified

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
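
    For Gaussian parameter posteriors, the relative entropy on which the Surprise statistic is built has a closed form; a minimal sketch with hypothetical two-parameter constraints (the full Surprise construction, which compares the observed relative entropy to its expected distribution, is omitted here):

        import numpy as np

        def kl_gaussian(mu0, cov0, mu1, cov1):
            """Relative entropy D(N0 || N1) between two Gaussian posteriors,
            the basic quantity behind the Surprise statistic."""
            k = len(mu0)
            inv1 = np.linalg.inv(cov1)
            diff = np.asarray(mu1) - np.asarray(mu0)
            return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                          + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

        # Hypothetical 2-parameter constraints from two different probes:
        mu_a, cov_a = np.array([0.31, 0.81]), np.diag([0.02, 0.03]) ** 2
        mu_b, cov_b = np.array([0.30, 0.83]), np.diag([0.03, 0.04]) ** 2
        print(kl_gaussian(mu_a, cov_a, mu_b, cov_b))  # in nats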

  18. Application of Multivariate Statistical Analysis to Biomarkers in Se-Turkey Crude Oils

    NASA Astrophysics Data System (ADS)

    Gürgey, K.; Canbolat, S.

    2017-11-01

    Twenty-four crude oil samples were collected from 24 oil fields distributed in different districts of SE Turkey. API gravity, sulphur content (%), stable carbon isotope, gas chromatography (GC), and gas chromatography-mass spectrometry (GC-MS) data were used to construct a geochemical data matrix. The aim of this study is to examine the genetic groupings and correlations among the crude oil samples, and hence the number of source rocks present in SE Turkey. To achieve these aims, two multivariate statistical analysis techniques (Principal Component Analysis [PCA] and Cluster Analysis) were applied to the data matrix of 24 samples and 8 source-specific biomarker variables/parameters. The results showed that there are 3 genetically different oil groups: Batman-Nusaybin oils, Adıyaman-Kozluk oils and Diyarbakir oils, in addition to one mixed group. These groupings imply that at least three different source rocks are present in South-Eastern (SE) Turkey. The grouping of the crude oil samples appears to be consistent with the geographic locations of the oil fields, the subsurface stratigraphy and the geology of the area.
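
    A minimal sketch of the PCA-plus-cluster-analysis workflow on a samples-by-biomarkers matrix (random stand-in data, not the SE Turkey measurements):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical matrix: 24 oil samples x 8 biomarker parameters.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((24, 8))

        Xs = StandardScaler().fit_transform(X)         # common scale
        scores = PCA(n_components=2).fit_transform(Xs) # first two PCs

        # Agglomerative clustering on the PC scores; cut into 3 groups.
        groups = fcluster(linkage(scores, method='ward'),
                          t=3, criterion='maxclust')
        print(groups)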

  19. Statistical analysis of dynamic fibrils observed from NST/BBSO observations

    NASA Astrophysics Data System (ADS)

    Gopalan Priya, Thambaje; Su, Jiang-Tao; Chen, Jie; Deng, Yuan-Yong; Prasad Choudhury, Debi

    2018-02-01

    We present the results obtained from the analysis of dynamic fibrils in NOAA active region (AR) 12132, using high resolution Hα observations from the New Solar Telescope operating at Big Bear Solar Observatory. The dynamic fibrils are seen to be moving up and down, and most of these dynamic fibrils are periodic and have a jet-like appearance. We found from our observations that the fibrils follow almost perfect parabolic paths in many cases. A statistical analysis of the properties of these parabolic paths, covering the deceleration, maximum velocity, duration and kinetic energy of the fibrils, is presented here. We found the average maximum velocity to be around 15 km s⁻¹ and the mean deceleration to be around 100 m s⁻². The observed deceleration is only a fraction of the gravitational acceleration at the solar surface and is therefore not compatible with ballistic motion under solar gravity. We found a positive correlation between deceleration and maximum velocity. This correlation is consistent with earlier simulations of magnetoacoustic shock waves propagating upward.

  20. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of the ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble and those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
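
    The grid-point scoring described above can be sketched in a few lines of Python (illustrative only; the actual POP-ECT tool in pyCECT handles ensemble composition and other details not shown):

        import numpy as np

        def pop_consistency(ensemble, new_run, z_threshold=3.0):
            """Ensemble consistency check in the spirit of POP-ECT: compute
            a standard score for the new solution at every grid point
            against the ensemble mean and standard deviation, then report
            the fraction of points exceeding the threshold."""
            mean = ensemble.mean(axis=0)
            std = ensemble.std(axis=0, ddof=1)
            z = np.abs(new_run - mean) / np.where(std > 0, std, np.inf)
            return (z > z_threshold).mean()

        rng = np.random.default_rng(0)
        ensemble = rng.standard_normal((40, 64, 128))  # 40 members, 64x128 grid
        new_run = rng.standard_normal((64, 128))       # consistent case
        print(pop_consistency(ensemble, new_run))      # small fraction expected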

  1. Statistical learning in reading: variability in irrelevant letters helps children learn phonics skills.

    PubMed

    Apfelbaum, Keith S; Hazeltine, Eliot; McMurray, Bob

    2013-07-01

    Early reading abilities are widely considered to derive in part from statistical learning of regularities between letters and sounds. Although there is substantial evidence from laboratory work to support this, how it occurs in the classroom setting has not been extensively explored; there are few investigations of how statistics among letters and sounds influence how children actually learn to read or what principles of statistical learning may improve learning. We examined 2 conflicting principles that may apply to learning grapheme-phoneme-correspondence (GPC) regularities for vowels: (a) variability in irrelevant units may help children derive invariant relationships and (b) similarity between words may force children to use a deeper analysis of lexical structure. We trained 224 first-grade students on a small set of GPC regularities for vowels, embedded in words with either high or low consonant similarity, and tested their generalization to novel tasks and words. Variability offered a consistent benefit over similarity for trained and new words in both trained and new tasks.

  2. Computer-aided auditing of prescription drug claims.

    PubMed

    Iyengar, Vijay S; Hermiz, Keith B; Natarajan, Ramesh

    2014-09-01

    We describe a methodology for identifying and ranking candidate audit targets from a database of prescription drug claims. The relevant audit targets may include various entities such as prescribers, patients and pharmacies, who exhibit certain statistical behavior indicative of potential fraud and abuse over the prescription claims during a specified period of interest. Our overall approach is consistent with related work in statistical methods for detection of fraud and abuse, but has a relative emphasis on three specific aspects: first, based on the assessment of domain experts, certain focus areas are selected and data elements pertinent to the audit analysis in each focus area are identified; second, specialized statistical models are developed to characterize the normalized baseline behavior in each focus area; and third, statistical hypothesis testing is used to identify entities that diverge significantly from their expected behavior according to the relevant baseline model. The application of this overall methodology to a prescription claims database from a large health plan is considered in detail.
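
    One simple way to realize the baseline-plus-hypothesis-testing step is a Poisson model in which each entity's expected count is proportional to its claim volume; a minimal sketch with hypothetical counts (the paper's specialized baseline models are more elaborate than this):

        import numpy as np
        from scipy import stats

        def flag_entities(counts, exposure, alpha=1e-3):
            """Flag entities whose counts of claims-of-interest diverge from
            a Poisson baseline proportional to their total claim volume.
            Returns indices ranked by one-sided tail probability."""
            counts = np.asarray(counts, dtype=float)
            exposure = np.asarray(exposure, dtype=float)
            rate = counts.sum() / exposure.sum()         # global baseline
            pvals = stats.poisson.sf(counts - 1, rate * exposure)  # P(X >= k)
            flagged = np.where(pvals < alpha)[0]
            return flagged[np.argsort(pvals[flagged])]

        counts = [5, 40, 7, 3]             # hypothetical claims of interest
        exposure = [500, 450, 600, 300]    # total claims per prescriber
        print(flag_entities(counts, exposure))   # entity 1 stands out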

  3. Leadership in nursing and patient satisfaction in hospital context.

    PubMed

    Nunes, Elisabete Maria Garcia Teles; Gaspar, Maria Filomena Mendes

    2016-06-01

    Objectives: to assess the quality of the leadership relationship from the perspectives of the chief nurse and the nurses, patient satisfaction, and the relationship between the perceived quality of the leadership relationship and patient satisfaction. Methods: a quantitative, cross-sectional and correlational approach. The non-probabilistic convenience sample consisted of 15 chief nurses, 342 nurses and 273 patients. Data were collected at the Central Lisbon Hospital Center, between January and March 2013, through the LMX-7, CLMX-7 and SUCEH21 scales. Statistical analysis was performed with SPSS® Statistics 19. Results: the chief nurses consider the quality of the leadership relationship good, the nurses consider it satisfactory, and patients are satisfied with nursing care; there is a statistically significant correlation between the quality of the leadership relationship from the perspective of chief nurses and patient satisfaction, and no statistically significant correlation between the quality of the leadership relationship from the nurses' perspective and satisfaction. Conclusion: the chief nurse has a major role in patient satisfaction.

  4. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This was development research aimed at producing a Statistics textbook model supported with information and communication technology (ICT) and portfolio-based assessment. The book was designed for mathematics students at the college level, to improve students' abilities in mathematical connection and communication. There were three stages in this research, i.e., define, design, and develop. The textbook consists of 10 chapters, each containing an introduction, core materials, and examples and exercises. The development phase began with an initial draft of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which underwent a limited readability test. Revision of draft 2 then produced draft 3, which was trialled on a small sample to produce a valid model textbook. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported with ICT and portfolio-based assessment is valid and meets the criteria of practicality.

  5. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    PubMed

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, the large number of measurements with imprecision, and the natural diversity and subjectivity in test objects as well as in the opinions of the experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for the diagnosis of aphasia, and possibly for other medical diagnostic applications, due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The statistical parameters measured from the training set are then used to define the membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagation feed-forward neural network for the diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in terms of accuracy.

  6. [New design of the Health Survey of Catalonia (Spain, 2010-2014): a step forward in health planning and evaluation].

    PubMed

    Alcañiz-Zanón, Manuela; Mompart-Penina, Anna; Guillén-Estany, Montserrat; Medina-Bustos, Antonia; Aragay-Barbany, Josep M; Brugulat-Guiteras, Pilar; Tresserras-Gaju, Ricard

    2014-01-01

    This article presents the genesis of the Health Survey of Catalonia (Spain, 2010-2014) with its semiannual subsamples and explains the basic characteristics of its multistage sampling design. In comparison with previous surveys, the organizational advantages of this new statistical operation include rapid data availability and the ability to continuously monitor the population. The main benefits are timeliness in the production of indicators and the possibility of introducing new topics through the supplemental questionnaire as a function of needs. Limitations consist of the complexity of the sample design and the lack of longitudinal follow-up of the sample. Suitable sampling weights for each specific subsample are necessary for any statistical analysis of micro-data. Accuracy in the analysis of territorial disaggregation or population subgroups increases if annual samples are accumulated. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.

  7. Molecular dynamics simulations and statistical coupling analysis reveal functional coevolution network of oncogenic mutations in the CDKN2A-CDK6 complex.

    PubMed

    Wang, Jingwen; Zhao, Yuqi; Wang, Yanjie; Huang, Jingfei

    2013-01-16

    Coevolution between proteins is crucial for understanding protein-protein interaction. Simultaneous changes allow a protein complex to maintain its overall structural-functional integrity. In this study, we combined statistical coupling analysis (SCA) and molecular dynamics simulations on the CDK6-CDKN2A protein complex to evaluate coevolution between proteins. We reconstructed an inter-protein residue coevolution network consisting of 37 residues and 37 interactions, which shows that most of the coevolved residue pairs are spatially proximal. When mutations occur, the stable local structures are broken up and the protein interaction is thus decreased or inhibited, with a consequent increased risk of melanoma. The identification of inter-protein coevolved residues in the CDK6-CDKN2A complex can be helpful for designing protein engineering experiments. Copyright © 2012 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  8. Fluorescent biopsy of biological tissues in differentiation of benign and malignant tumors of prostate

    NASA Astrophysics Data System (ADS)

    Trifoniuk, L. I.; Ushenko, Yu. A.; Sidor, M. I.; Minzer, O. P.; Gritsyuk, M. V.; Novakovskaya, O. Y.

    2014-08-01

    This work presents the results of an investigation of the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing the coordinate distributions of laser autofluorescence of histological sections of biological tissues. A new model of the generalized optical anisotropy of the protein networks of biological tissues is proposed in order to describe the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of such Mueller-matrix rotation invariants is proposed. Thereupon, the quantitative criteria (statistical moments of the 1st to 4th orders) for differentiating histological sections of uterus wall tumors - group 1 (dysplasia) and group 2 (adenocarcinoma) - are estimated.
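
    The "statistical moments of the 1st to 4th orders" used as differentiation criteria are the familiar mean, variance, skewness and kurtosis of a coordinate distribution; a minimal sketch on stand-in image data:

        import numpy as np
        from scipy import stats

        def first_four_moments(values):
            """Statistical moments of the 1st to 4th orders:
            mean, variance, skewness, and (excess) kurtosis."""
            v = np.asarray(values, dtype=float).ravel()
            return v.mean(), v.var(ddof=1), stats.skew(v), stats.kurtosis(v)

        # Hypothetical coordinate distribution of one Mueller-matrix
        # rotation invariant over a 256x256 map:
        img = np.random.default_rng(0).gamma(shape=2.0, scale=1.0,
                                             size=(256, 256))
        print(first_four_moments(img))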

  9. Correlation between Post-LASIK Starburst Symptom and Ocular Wavefront Aberrations

    NASA Astrophysics Data System (ADS)

    Liu, Yong-Ji; Mu, Guo-Guang; Wang, Zhao-Qi; Wang-Yan

    2006-06-01

    Monochromatic aberrations in post laser in-situ keratomileusis (LASIK) eyes are measured. The data are categorized into a reference group and a starburst group according to the visual symptoms. Statistical analysis is performed to find the correlation between the ocular wavefront aberrations and the starburst symptom. The rms aberrations of the 3rd and 4th orders for the starburst group are significantly larger than those for the reference group. The starburst symptom shows a strong correlation with vertical coma, total coma, and spherical aberration. For 3-mm and 5.8-mm pupil sizes, the modulation transfer functions (MTFs) of the starburst group are lower than those of the reference group, but the visual acuities of the two groups are close. MTF and PSF analyses are made for the two groups, and the results are consistent with the statistical analysis, which means the difference between the two groups is mainly due to the third- and fourth-order Zernike aberrations.

  10. Multiple Statistical Models Based Analysis of Causative Factors and Loess Landslides in Tianshui City, China

    NASA Astrophysics Data System (ADS)

    Su, Xing; Meng, Xingmin; Ye, Weilin; Wu, Weijiang; Liu, Xingrong; Wei, Wanhong

    2018-03-01

    Tianshui City is one of the mountainous cities threatened by severe geo-hazards in Gansu Province, China. Statistical probability models have been widely used in analyzing and evaluating geo-hazards such as landslides. In this research, three approaches (the Certainty Factor Method, the Weight of Evidence Method, and the Information Quantity Method) were adopted to quantitatively analyze the relationship between the causative factors and the landslides. The source data used in this study include the SRTM DEM and local geological maps at a scale of 1:200,000. Twelve causative factors (altitude, slope, aspect, curvature, plan curvature, profile curvature, roughness, relief amplitude, distance to rivers, distance to faults, distance to roads, and stratum lithology) were selected for correlation analysis after thorough investigation of the geological conditions and historical landslides. The results indicate that the outcomes of the three models are fairly consistent.
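
    Of the three models, the Weight of Evidence Method has the most compact closed form: for a binary evidential class B and landslide occurrence D, W+ = ln[P(B|D)/P(B|~D)], with W- computed from the complements. A hedged sketch with hypothetical pixel counts (not the paper's data):

    ```python
    # Weight of Evidence for one factor class; inputs are hypothetical counts.
    import math

    def weights_of_evidence(n_class_slide, n_slide, n_class_stable, n_stable):
        """W+, W-, and the contrast C = W+ - W- for a binary class B vs landslide D."""
        p_b_d     = n_class_slide / n_slide      # P(B | D)
        p_b_not_d = n_class_stable / n_stable    # P(B | ~D)
        w_plus  = math.log(p_b_d / p_b_not_d)
        w_minus = math.log((1 - p_b_d) / (1 - p_b_not_d))
        return w_plus, w_minus, w_plus - w_minus

    # e.g. a slope class containing 120 of 400 landslide pixels,
    # versus 2,000 of 20,000 stable pixels.
    print(weights_of_evidence(120, 400, 2000, 20000))
    ```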

  11. System of polarization correlometry of polycrystalline layers of urine in the differentiation stage of diabetes

    NASA Astrophysics Data System (ADS)

    Ushenko, Yu. O.; Pashkovskaya, N. V.; Marchuk, Y. F.; Dubolazov, O. V.; Savich, V. O.

    2015-08-01

    The work consists of investigation results of diagnostic efficiency of a new azimuthally stable Muellermatrix method of analysis of laser autofluorescence coordinate distributions of biological liquid layers. A new model of generalized optical anisotropy of biological tissues protein networks is proposed in order to define the processes of laser autofluorescence. The influence of complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and different mechanisms of optical anisotropy are determined. The statistic analysis of coordinate distributions of such Mueller-matrix rotation invariants is proposed. Thereupon the quantitative criteria (statistic moments of the 1st to the 4th order) of differentiation of human urine polycrystalline layers for the sake of diagnosing and differentiating cholelithiasis with underlying chronic cholecystitis (group 1) and diabetes mellitus of degree II (group 2) are estimated.

  12. Enhanced Higgs boson to τ(+)τ(-) search with deep learning.

    PubMed

    Baldi, P; Sadowski, P; Whiteson, D

    2015-03-20

    The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5σ significance barrier without more data. Deep learning techniques have the potential to increase the statistical power of this analysis by automatically learning complex, high-level data representations. In this work, deep neural networks are used to detect the decay of the Higgs boson to a pair of tau leptons. A Bayesian optimization algorithm is used to tune the network architecture and training algorithm hyperparameters, resulting in a deep network of eight nonlinear processing layers that improves upon the performance of shallow classifiers even without the use of features specifically engineered by physicists for this application. The improvement in discovery significance is equivalent to an increase in the accumulated data set of 25%.
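
    For illustration only, a small deep classifier for signal/background separation on synthetic kinematic features; scikit-learn's MLP stands in here for the paper's custom network and Bayesian hyperparameter search.

    ```python
    # Toy signal/background classification with a multi-layer network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 5000
    X_sig = rng.normal(0.5, 1.0, size=(n, 10))   # "signal" events
    X_bkg = rng.normal(0.0, 1.0, size=(n, 10))   # "background" events
    X = np.vstack([X_sig, X_bkg])
    y = np.r_[np.ones(n), np.zeros(n)]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=300,
                        random_state=0).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    ```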

  13. Adaptive statistical iterative reconstruction: reducing dose while preserving image quality in the pediatric head CT examination.

    PubMed

    McKnight, Colin D; Watcharotone, Kuanwong; Ibrahim, Mohannad; Christodoulou, Emmanuel; Baer, Aaron H; Parmar, Hemant A

    2014-08-01

    Over the last decade there has been escalating concern regarding the increasing radiation exposure stemming from CT exams, particularly in children. Adaptive statistical iterative reconstruction (ASIR) is a relatively new and promising tool to reduce radiation dose while preserving image quality. While encouraging results have been found in adult head, chest, and body imaging, validation of this technique in the pediatric population is limited. The objective of our study was to retrospectively compare the image quality and radiation dose of pediatric head CT examinations obtained with ASIR to pediatric head CT examinations without ASIR in a large patient population. Retrospective analysis was performed on 82 pediatric head CT examinations. This group included 33 pediatric head CT examinations obtained with ASIR and 49 pediatric head CT examinations without ASIR. The computed tomography dose index (CTDIvol) was recorded for all examinations. Quantitative analysis consisted of standardized measurement of attenuation and its standard deviation at the bilateral centrum semiovale and cerebellar white matter to evaluate objective noise. Qualitative analysis consisted of independent, blinded assessment by two radiologists of gray-white differentiation, sharpness and overall diagnostic quality. The average CTDIvol value of the ASIR group was 21.8 mGy (SD = 4.0) while the average CTDIvol for the non-ASIR group was 29.7 mGy (SD = 13.8), reflecting a statistically significant reduction in CTDIvol in the ASIR group (P < 0.01). There were statistically significant reductions in CTDI for the 3- to 12-year-old ASIR group as compared to the 3- to 12-year-old non-ASIR group (21.5 mGy vs. 30.0 mGy; P = 0.004), as well as for the >12-year-old ASIR group as compared to the >12-year-old non-ASIR group (29.7 mGy vs. 49.9 mGy; P = 0.0002). Quantitative analysis revealed no significant difference in the homogeneity of variance in the ASIR group compared to the non-ASIR group. Radiologist assessment of gray-white differentiation, sharpness and overall diagnostic quality in ASIR examinations was not substantially different compared to non-ASIR examinations. The use of ASIR in pediatric head CT examinations allows for a 28% CTDIvol reduction in the 3- to 12-year-old age group and a 48% reduction in the >12-year-old age group without substantially affecting image quality.

  14. A consistent framework for Horton regression statistics that leads to a modified Hack's law

    USGS Publications Warehouse

    Furey, P.R.; Troutman, B.M.

    2008-01-01

    A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.

  15. Reproducibility-optimized test statistic for ranking genes in microarray studies.

    PubMed

    Elo, Laura L; Filén, Sanna; Lahesmaa, Riitta; Aittokallio, Tero

    2008-01-01

    A principal goal of microarray studies is to identify the genes showing differential expression under distinct conditions. In such studies, the selection of an optimal test statistic is a crucial challenge, which depends on the type and amount of data under analysis. While previous studies on simulated or spike-in datasets do not provide practical guidance on how to choose the best method for a given real dataset, we introduce an enhanced reproducibility-optimization procedure, which enables the selection of a suitable gene-ranking statistic directly from the data. In comparison with existing ranking methods, the reproducibility-optimized statistic shows good performance consistently under various simulated conditions and on an Affymetrix spike-in dataset. Further, the feasibility of the novel statistic is confirmed in a practical research setting using data from an in-house cDNA microarray study of asthma-related gene expression changes. These results suggest that the procedure facilitates the selection of an appropriate test statistic for a given dataset without relying on a priori assumptions, which may bias the findings and their interpretation. Moreover, the general reproducibility-optimization procedure is not limited to detecting differential expression only but could be extended to a wide range of other applications as well.

  16. Effect of open rhinoplasty on the smile line.

    PubMed

    Tabrizi, Reza; Mirmohamadsadeghi, Hoori; Daneshjoo, Danadokht; Zare, Samira

    2012-05-01

    Open rhinoplasty is an esthetic surgical technique that is becoming increasingly popular and can affect the nose and upper lip compartments. The aim of this study was to evaluate the effect of open rhinoplasty on tooth show and the smile line. The study participants were 61 patients with a mean age of 24.3 years (range, 17.2 to 39.6 years). The surgical procedure consisted of an esthetic open rhinoplasty without alar resection. Analysis of tooth show was limited to pre- and postoperative (at 12 months) measurements of tooth show at rest and at maximum smile, taken with a ruler while participants held their heads naturally. Statistical analyses were performed with SPSS 13.0, and paired-sample t tests were used to compare mean tooth show before and after the operation. Analysis of the rest position showed no statistically significant change in tooth show (P = .15), but analysis of participants' maximum smile data showed a statistically significant increase in tooth show after surgery (P < .05). Furthermore, Pearson correlation analysis showed a positive relation between rhinoplasty and tooth show increases at maximum smile, especially in subjects with high smile lines. This study shows that the nasolabial compartment is a single unit and any change in one part may influence the other parts. Further studies should be conducted to investigate these interactions. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  17. Detailed Spectral Analysis of the 260 ks XMM-Newton Data of 1E 1207.4-5209 and Significance of a 2.1 keV Absorption Feature

    NASA Astrophysics Data System (ADS)

    Mori, Kaya; Chonko, James C.; Hailey, Charles J.

    2005-10-01

    We have reanalyzed the 260 ks XMM-Newton observation of 1E 1207.4-5209. There are several significant improvements over previous work. First, a much broader range of physically plausible spectral models was used. Second, we have used a more rigorous statistical analysis: the standard F-distribution was not employed, but rather the exact finite-statistics F-distribution was determined by Monte Carlo simulations. This approach was motivated by the recent work of Protassov and coworkers and Freeman and coworkers, who demonstrated that the standard F-distribution is not even asymptotically correct when applied to assess the significance of additional absorption features in a spectrum. With our improved analysis we do not find a third and fourth spectral feature in 1E 1207.4-5209 but only the two broad absorption features previously reported. Two additional statistical tests, one line model dependent and the other line model independent, confirmed our modified F-test analysis. For all physically plausible continuum models in which the weak residuals are strong enough to fit, the residuals occur at the instrument Au M edge. As a sanity check we confirmed that the residuals are consistent in strength and position with the instrument Au M residuals observed in 3C 273.
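
    A hedged sketch of the Monte Carlo calibration idea: rather than consulting the analytic F-distribution, datasets are simulated from the best-fit null (continuum-only) model, and the finite-sample null distribution of the F statistic is built empirically. The spectral model below (a power-law continuum with an optional absorption notch fixed at 2.1 keV) is invented for illustration.

    ```python
    # Monte Carlo calibrated F-test for an extra spectral component.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    E = np.linspace(0.5, 5.0, 200)                  # energy grid (keV)

    def continuum(E, a, b):                         # null model
        return a * E**(-b)

    def with_line(E, a, b, d):                      # alternative: notch at 2.1 keV
        return continuum(E, a, b) * (1 - d * np.exp(-0.5 * ((E - 2.1) / 0.1)**2))

    def f_stat(y):
        """F statistic comparing nested continuum and continuum+line fits."""
        p0, _ = curve_fit(continuum, E, y, p0=[100, 1], maxfev=5000)
        p1, _ = curve_fit(with_line, E, y, p0=[100, 1, 0.1], maxfev=5000)
        rss0 = np.sum((y - continuum(E, *p0))**2)
        rss1 = np.sum((y - with_line(E, *p1))**2)
        return (rss0 - rss1) / (rss1 / (len(E) - 3))

    y_obs = rng.poisson(with_line(E, 100, 1.2, 0.3)).astype(float)
    f_obs = f_stat(y_obs)

    # Null distribution: simulate from the continuum-only best fit.
    p_null, _ = curve_fit(continuum, E, y_obs, p0=[100, 1], maxfev=5000)
    f_null = [f_stat(rng.poisson(continuum(E, *p_null)).astype(float))
              for _ in range(200)]
    print("empirical p =", np.mean(np.array(f_null) >= f_obs))
    ```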

  18. Spiritual and ceremonial plants in North America: an assessment of Moerman's ethnobotanical database comparing Residual, Binomial, Bayesian and Imprecise Dirichlet Model (IDM) analysis.

    PubMed

    Turi, Christina E; Murch, Susan J

    2013-07-09

    Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. The spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analyses were performed to identify over- and under-utilized families. The 4 statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by the Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analysis identified similar families as over-utilized in the medium (92-392 species) and small (<92 species) classes. The families Apiaceae, Asteraceae, Ericaceae, Pinaceae and Salicaceae were identified as significantly over-utilized as ceremonial medicines among medium and large families. Analysis of genera within the Apiaceae and Asteraceae suggests that the genera Ligusticum and Artemisia are good candidates for facilitating the discovery of novel psychoactive compounds. The 4 statistical approaches were not consistent in identifying over-utilized flora. Residual analysis revealed overall trends that were supported by Binomial analysis when families were separated into small, medium and large classes. The Bayesian, Binomial and IDM approaches identified different genera as potentially important. Species belonging to the genera Artemisia and Ligusticum were most consistently identified and may be valuable in future ethnopharmacological studies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
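
    The binomial approach, for instance, asks whether a family contributes more ceremonial species than expected from its share of the flora. A sketch with hypothetical counts (only the 517-species total is taken from the study):

    ```python
    # One-sided binomial test of family over-utilization; counts are hypothetical.
    from scipy.stats import binomtest

    total_ceremonial = 517                  # ceremonial/spiritual species overall
    family_size, flora_size = 450, 20000    # hypothetical family vs regional flora
    family_ceremonial = 35                  # hypothetical ceremonial species in family

    expected_rate = family_size / flora_size
    result = binomtest(family_ceremonial, total_ceremonial, expected_rate,
                       alternative="greater")
    print(f"P(over-utilized by chance) = {result.pvalue:.4g}")
    ```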

  19. About the necessity to manage events coded with MedDRA prior to statistical analysis: proposal of a strategy with application to a randomized clinical trial, ANRS 099 ALIZE.

    PubMed

    Journot, Valérie; Tabuteau, Sophie; Collin, Fidéline; Molina, Jean-Michel; Chene, Geneviève; Rancinan, Corinne

    2008-03-01

    Since 2003, the Medical Dictionary for Regulatory Activities (MedDRA) has been the regulatory standard for safety reporting in clinical trials in the European Community. Yet we found no published example of practical experience with a scientifically oriented statistical analysis of events coded with MedDRA. We took advantage of a randomized trial in HIV-infected patients with MedDRA-coded events to explain the difficulties encountered during the analysis of events and the strategy developed to report events consistently with trial-specific objectives. MedDRA has a rich hierarchical structure, which allows the grouping of coded terms into 5 levels, the highest being "System Organ Class" (SOC). Each coded term may be related to several SOCs, among which one primary SOC is defined. We developed a new general 5-step strategy to select a SOC as the trial primary SOC, consistent with the trial-specific objectives for this analysis, and applied it to the ANRS 099 ALIZE trial, where all events were coded with MedDRA version 3.0. We then compared the MedDRA and the ALIZE primary SOCs. In the ANRS 099 ALIZE trial, 355 patients were recruited, and 3,722 events were reported and documented, among which 35% had multiple SOCs (2 to 4). After applying the proposed 5-step strategy, 23% of MedDRA primary SOCs were modified, mainly from the MedDRA primary SOCs "Investigations" (69%) and "Ear and labyrinth disorders" (6%) to the ALIZE primary SOCs "Hepatobiliary disorders" (35%), "Musculoskeletal and connective tissue disorders" (21%), and "Gastrointestinal disorders" (15%). MedDRA has grown considerably in size and complexity through successive versions and the development of Standardized MedDRA Queries. Yet statisticians should not systematically rely on the primary SOCs proposed by MedDRA to report events. A simple general 5-step strategy to reclassify events consistently with trial-specific objectives might be useful in HIV trials as well as in other fields.

  20. Factorial validity and internal consistency of the motivational climate in physical education scale.

    PubMed

    Soini, Markus; Liukkonen, Jarmo; Watt, Anthony; Yli-Piipari, Sami; Jaakkola, Timo

    2014-01-01

    The aim of the study was to examine the construct validity and internal consistency of the Motivational Climate in Physical Education Scale (MCPES). A key element of the development process of the scale was establishing a theoretical framework that integrated the dimensions of task- and ego-involving climates in conjunction with autonomy- and social relatedness-supporting climates. These constructs were adopted from the self-determination and achievement goal theories. A sample of Finnish Grade 9 students, comprising 2,594 girls and 1,803 boys, completed the 18-item MCPES during one physical education class. The results of the study demonstrated that participants had the highest mean in the task-involving climate and the lowest in the autonomy and ego-involving climates. Additionally, the autonomy, social relatedness, and task-involving climates were significantly and strongly correlated with each other, whereas the ego-involving climate had low or negligible correlations with the other climate dimensions. The construct validity of the MCPES was analyzed using confirmatory factor analysis. The statistical fit of the four-factor model consisting of motivational climate factors supporting perceived autonomy, social relatedness, task-involvement, and ego-involvement was satisfactory. The results of the reliability analysis showed acceptable internal consistencies for all four dimensions. The Motivational Climate in Physical Education Scale can be considered a psychometrically valid tool to measure motivational climate among Finnish Grade 9 students. Key Points: This study developed the Motivational Climate in School Physical Education Scale (MCPES). During the development process of the scale, a theoretical framework using the dimensions of task- and ego-involving as well as autonomy- and social relatedness-supporting climates was constructed; these constructs were adopted from the self-determination and achievement goal theories. The statistical fit of the four-factor model of the MCPES was satisfactory, and the reliability analysis showed acceptable internal consistencies for all four dimensions. Participants had the highest mean in the task-involving climate and the lowest in the autonomy climate. Autonomy, social relatedness, and task climates were significantly and strongly correlated with each other, whereas the ego climate factor had low or negligible correlations with the other three factors.

  2. Prevalence of consistent condom use with various types of sex partners and associated factors among money boys in Changsha, China.

    PubMed

    Wang, Lian-Hong; Yan, Jin; Yang, Guo-Li; Long, Shuo; Yu, Yong; Wu, Xi-Lin

    2015-04-01

    Money boys with inconsistent condom use (less than 100% of the time) are at high risk of infection with human immunodeficiency virus (HIV) or sexually transmitted infections (STIs), but relatively little research has examined their risk behaviors. We investigated the prevalence of consistent condom use (100% of the time) and associated factors among money boys. A cross-sectional study using a structured questionnaire was conducted among money boys in Changsha, China, between July 2012 and January 2013. Independent variables included socio-demographic data, substance abuse history, work characteristics, and self-reported HIV and STI history. Dependent variables included consistent condom use with different types of sex partners. Among the participants, 82.4% used condoms consistently with male clients, 80.2% with male sex partners, and 77.1% with female sex partners in the past 3 months. A multiple stepwise logistic regression model identified four statistically significant factors associated with lower likelihoods of consistent condom use with male clients: age group, substance abuse, lack of an "employment" arrangement, and having no HIV test within the prior 6 months. In a similar model, only one factor was significantly associated with a lower likelihood of consistent condom use with male sex partners: having no HIV test within the prior six months. As for female sex partners, two variables were statistically significant in the multiple stepwise logistic regression analysis: having no HIV test within the prior 6 months and having an STI history. Interventions linked with realistic and acceptable HIV prevention methods are greatly needed and should increase risk awareness and consistent condom use in both commercial and personal relationships. © 2015 International Society for Sexual Medicine.

  3. Assessing the statistical significance of the achieved classification error of classifiers constructed using serum peptide profiles, and a prescription for random sampling repeated studies for massive high-throughput genomic and proteomic studies.

    PubMed

    Lyons-Weiler, James; Pelikan, Richard; Zeh, Herbert J; Whitcomb, David C; Malehorn, David E; Bigbee, William L; Hauskrecht, Milos

    2005-01-01

    Peptide profiles generated using SELDI/MALDI time of flight mass spectrometry provide a promising source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
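
    A minimal sketch of the PACE idea, assuming a cross-validated error estimate and random label permutations (scikit-learn's permutation_test_score implements essentially this recipe):

    ```python
    # Permutation-Achieved Classification Error: compare the achieved error on
    # true labels against the error distribution under permuted labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(80, 200))            # 80 profiles, 200 peak intensities
    y = np.r_[np.zeros(40), np.ones(40)]
    X[y == 1, :5] += 1.0                      # weak discriminative signal

    def cv_error(X, y):
        return 1 - cross_val_score(LogisticRegression(max_iter=1000),
                                   X, y, cv=5).mean()

    ace = cv_error(X, y)
    null = [cv_error(X, rng.permutation(y)) for _ in range(100)]
    p = (1 + np.sum(np.array(null) <= ace)) / (1 + len(null))
    print(f"ACE = {ace:.3f}, permutation p = {p:.3f}")
    ```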

  4. Ethics in Service to the American People

    DTIC Science & Technology

    2014-06-13

    standardized punishments to self-regulate. An attempt was made to acquire statistical analysis of ethical behavior from the Center for the Army...fruition? What would it consist of, and what standards would be set that, if violated, would result in corresponding punishment? The idea to self-regulate...fell five votes shy of breaking a filibuster. The Military Justice Improvement Act moves the decision whether to prosecute any crime punishable by

  5. Parallels between Objective Indicators and Subjective Perceptions of Quality of Life: A Study of Metropolitan and County Areas in Taiwan

    ERIC Educational Resources Information Center

    Liao, Pei-shan

    2009-01-01

    This study explores the consistency between objective indicators and subjective perceptions of quality of life in a ranking of survey data for cities and counties in Taiwan. Data used for analysis included the Statistical Yearbook of Hsiens and Municipalities and the Survey on Living Conditions of Citizens in Taiwan, both given for the year 2000.…

  6. Sensitivity analysis of helicopter IMC decelerating steep approach and landing performance to navigation system parameters

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model representing a generic navigation system is formulated. The simulated scenario consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The chosen deceleration magnitude is 0.3g. The navigation model parameters are varied and the statistics of the total system errors (TSE) are computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.

  7. Mueller matrix mapping of biological polycrystalline layers using reference wave

    NASA Astrophysics Data System (ADS)

    Dubolazov, A.; Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Grytsyuk, M.; Prysyazhnyuk, P. V.

    2018-01-01

    The paper consists of two parts. The first part is devoted to the theoretical basics of the method of differential Mueller-matrix description of the properties of partially depolarizing layers. Experimentally measured maps of the first-order differential matrix of the polycrystalline structure of a histological section of brain tissue are provided, and the statistical moments of the 1st-4th orders that characterize the distributions of matrix elements are defined. The second part of the paper presents a statistical analysis of the birefringence and dichroism of histological sections of mouse liver tissue (normal and diabetic), from which objective criteria for the differential diagnosis of diabetes are defined.

  8. Sequential analysis as a tool for detection of amikacin ototoxicity in the treatment of multidrug-resistant tuberculosis.

    PubMed

    Vasconcelos, Karla Anacleto de; Frota, Silvana Maria Monte Coelho; Ruffino-Netto, Antonio; Kritski, Afrânio Lineu

    2018-04-01

    To investigate early detection of amikacin-induced ototoxicity in a population treated for multidrug-resistant tuberculosis (MDR-TB), by means of three different tests: pure-tone audiometry (PTA), high-frequency audiometry (HFA), and distortion-product otoacoustic emission (DPOAE) testing. This was a longitudinal prospective cohort study involving patients aged 18-69 years with a diagnosis of MDR-TB who were receiving amikacin for the first time, for six months, as part of their antituberculosis drug regimen. Hearing was assessed before treatment initiation and at two and six months after treatment initiation. Sequential statistics were used to analyze the results. We enrolled 61 patients, but the final population consisted of 10 patients (7 men and 3 women) because of the sequential analysis design. Comparison of the test results obtained at two and six months after treatment initiation with those obtained at baseline revealed that HFA at two months and PTA at six months detected hearing threshold shifts consistent with ototoxicity, whereas DPOAE testing did not detect such shifts. The statistical method used in this study makes it possible to conclude that, over the six-month period, amikacin-associated hearing threshold shifts were detected by HFA and PTA, and that DPOAE testing was not efficient in detecting such shifts.

  9. Measuring determinants of career satisfaction of anesthesiologists: validation of a survey instrument.

    PubMed

    Afonso, Anoushka M; Diaz, James H; Scher, Corey S; Beyl, Robbie A; Nair, Singh R; Kaye, Alan David

    2013-06-01

    To measure determinants of job satisfaction among anesthesiologists. Survey instrument. Academic anesthesiology departments in the United States. 320 anesthesiologists who attended the annual meeting of the ASA in 2009 (95% response rate). The anonymous 50-item survey collected information on 26 independent demographic variables and 24 dependent ranked variables of career satisfaction among practicing anesthesiologists. Mean survey scores were calculated for each demographic variable and tested for statistically significant differences by analysis of variance. Questions that were internally consistent with each other within a domain were identified by Cronbach's alpha ≥ 0.7. P-values ≤ 0.05 were considered statistically significant. Cronbach's alpha analysis showed strong internal consistency for 10 dependent outcome questions in the practice factor-related domain (α = 0.72), 6 dependent outcome questions in the peer factor-related domain (α = 0.71), and 8 dependent outcome questions in the personal factor-related domain (α = 0.81). Although age was not a significant variable, full-time status, early satisfaction within the first 5 years of practice, working with respected peers, and personal choice factors were all significantly associated with anesthesiologist job satisfaction. Improvements in factors related to job satisfaction among anesthesiologists may lead to higher early and current career satisfaction. Copyright © 2013 Elsevier Inc. All rights reserved.
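
    Cronbach's alpha itself is straightforward to compute from an items matrix; a sketch, not the authors' SPSS workflow, with simulated respondents:

    ```python
    # Cronbach's alpha from a (respondents x items) matrix.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(320, 1))                     # shared satisfaction factor
    items = latent + rng.normal(scale=0.8, size=(320, 6))  # 6 correlated items
    print(f"alpha = {cronbach_alpha(items):.2f}")          # typically >= 0.7 here
    ```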

  10. The interpretation of simultaneous soft X-ray spectroscopic and imaging observations of an active region. [in solar corona

    NASA Technical Reports Server (NTRS)

    Davis, J. M.; Gerassimenko, M.; Krieger, A. S.; Vaiana, G. S.

    1975-01-01

    Simultaneous soft X-ray spectroscopic and broad-band imaging observations of an active region have been analyzed together to determine the parameters which describe the coronal plasma. From the spectroscopic data, models of temperature-emission measure-elemental abundance have been constructed which provide acceptable statistical fits. By folding these possible models through the imaging analysis, models which are not self-consistent can be rejected. In this way, only the oxygen, neon, and iron abundances of Pottasch (1967), combined with either an isothermal or exponential temperature-emission-measure model, are consistent with both sets of data. Contour maps of electron temperature and density for the active region have been constructed from the imaging data. The implications of the analysis for the determination of coronal abundances and for future satellite experiments are discussed.

  11. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) CSS/PCC can be more awkward because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
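
    A hedged sketch of the two process-model parameter statistics, computed from a model Jacobian using their usual definitions (scaled sensitivities for CSS; correlations derived from the inverse of J'J for PCC); the Jacobian below is synthetic, with two near-collinear parameters built in:

    ```python
    # Composite scaled sensitivities (CSS) and parameter correlation
    # coefficients (PCC) from a synthetic sensitivity (Jacobian) matrix.
    import numpy as np

    rng = np.random.default_rng(5)
    n_obs, n_par = 100, 4
    J = rng.normal(size=(n_obs, n_par))
    J[:, 3] = 0.95 * J[:, 2] + 0.05 * rng.normal(size=n_obs)  # collinear pair
    params = np.array([1.0, 2.0, 0.5, 0.5])

    scaled = J * params                       # dimensionless scaled sensitivities
    css = np.sqrt((scaled**2).mean(axis=0))   # composite scaled sensitivity

    cov = np.linalg.inv(J.T @ J)              # parameter variance-covariance
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)                # parameter correlation matrix

    print("CSS:", np.round(css, 3))
    print("PCC(3,4):", round(pcc[2, 3], 3))   # near +/-1 for the collinear pair
    ```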

  12. Detection of changes of high-frequency activity by statistical time-frequency analysis in epileptic spikes

    PubMed Central

    Kobayashi, Katsuhiro; Jacobs, Julia; Gotman, Jean

    2013-01-01

    Objective: A novel type of statistical time-frequency analysis was developed to elucidate changes of high-frequency EEG activity associated with epileptic spikes. Methods: The method uses the Gabor Transform and detects changes of power in comparison to background activity using t-statistics that are controlled by the false discovery rate (FDR) to correct for the type I error of multiple testing. The analysis was applied to EEGs recorded at 2000 Hz from three patients with mesial temporal lobe epilepsy. Results: Spike-related increase of high-frequency oscillations (HFOs) was clearly shown in the FDR-controlled t-spectra: it was most dramatic in spikes recorded from the hippocampus when the hippocampus was the seizure onset zone (SOZ). Depression of fast activity was observed immediately after the spikes, especially consistently in the discharges from the hippocampal SOZ. It corresponded to the slow-wave part in the case of spike-and-slow-wave complexes, but it was noted even in spikes without apparent slow waves. In one patient, a gradual increase of power above 200 Hz preceded spikes. Conclusions: FDR-controlled t-spectra clearly detected the spike-related changes of HFOs that were unclear in standard power spectra. Significance: We developed a promising tool to study the HFOs that may be closely linked to the pathophysiology of epileptogenesis. PMID:19394892
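
    The FDR-control step is the standard Benjamini-Hochberg procedure applied to the p-values of the time-frequency t-statistics; a sketch with toy p-values:

    ```python
    # Benjamini-Hochberg step-up procedure for FDR control.
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of discoveries at FDR level q."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        m = len(p)
        thresh = q * (np.arange(1, m + 1) / m)
        below = p[order] <= thresh
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        mask = np.zeros(m, dtype=bool)
        mask[order[:k]] = True
        return mask

    rng = np.random.default_rng(6)
    pvals = np.r_[rng.uniform(size=95), rng.uniform(0, 1e-4, size=5)]
    print("discoveries:", benjamini_hochberg(pvals).sum())
    ```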

  13. Application of the Statistical ICA Technique in the DANCE Data Analysis

    NASA Astrophysics Data System (ADS)

    Baramsai, Bayarbadrakh; Jandel, M.; Bredeweg, T. A.; Rusev, G.; Walker, C. L.; Couture, A.; Mosby, S.; Ullmann, J. L.; Dance Collaboration

    2015-10-01

    The Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center is used to improve our understanding of the neutron capture reaction. DANCE is a highly efficient 4π γ-ray detector array consisting of 160 BaF2 crystals, which makes it an ideal tool for neutron capture experiments. The (n, γ) reaction Q-value equals the total energy of all γ-rays emitted in the de-excitation cascades from the excited capture state to the ground state, and this total γ-ray energy is used to identify reactions on different isotopes as well as the background. However, it is challenging to identify the contributions to the Esum spectra from different isotopes with similar Q-values. Recently we have tested the applicability of modern statistical methods such as Independent Component Analysis (ICA) to identify and separate the (n, γ) reaction yields on the different isotopes present in the target material. ICA is a recently developed computational tool for separating multidimensional data into statistically independent additive subcomponents. In this conference talk, we present results of the application of ICA algorithms and their modification to the analysis of DANCE experimental data. This research is supported by the U. S. Department of Energy, Office of Science, Nuclear Physics under the Early Career Award No. LANL20135009.
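
    As a minimal illustration of the separation principle (not the DANCE-specific algorithm), scikit-learn's FastICA can unmix summed spectra measured under different mixing conditions; the Gaussian "Q-value" peaks below are synthetic:

    ```python
    # Unmixing two synthetic spectral components with FastICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(7)
    e = np.linspace(4.0, 9.0, 500)                       # Esum axis (MeV)
    s1 = np.exp(-0.5 * ((e - 5.5) / 0.2)**2)             # isotope 1 response
    s2 = np.exp(-0.5 * ((e - 6.4) / 0.2)**2)             # isotope 2 response
    S = np.c_[s1, s2] + 0.01 * rng.normal(size=(500, 2))

    A = np.array([[1.0, 0.6], [0.4, 1.0], [0.8, 0.8]])   # mixing across 3 runs
    X = S @ A.T                                          # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                         # recovered components
    print(S_est.shape)                                   # (500, 2)
    ```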

  14. Statistical analysis plan for the Pneumatic CompREssion for PreVENting Venous Thromboembolism (PREVENT) trial: a study protocol for a randomized controlled trial.

    PubMed

    Arabi, Yaseen; Al-Hameed, Fahad; Burns, Karen E A; Mehta, Sangeeta; Alsolamy, Sami; Almaani, Mohammed; Mandourah, Yasser; Almekhlafi, Ghaleb A; Al Bshabshe, Ali; Finfer, Simon; Alshahrani, Mohammed; Khalid, Imran; Mehta, Yatin; Gaur, Atul; Hawa, Hassan; Buscher, Hergen; Arshad, Zia; Lababidi, Hani; Al Aithan, Abdulsalam; Jose, Jesna; Abdukahil, Sheryl Ann I; Afesh, Lara Y; Dbsawy, Maamoun; Al-Dawood, Abdulaziz

    2018-03-15

    The Pneumatic CompREssion for Preventing VENous Thromboembolism (PREVENT) trial evaluates the effect of adjunctive intermittent pneumatic compression (IPC) with pharmacologic thromboprophylaxis compared to pharmacologic thromboprophylaxis alone on venous thromboembolism (VTE) in critically ill adults. In this multicenter randomized trial, critically ill patients receiving pharmacologic thromboprophylaxis will be randomized to an IPC or a no IPC (control) group. The primary outcome is "incident" proximal lower-extremity deep vein thrombosis (DVT) within 28 days after randomization. Radiologists interpreting the lower-extremity ultrasonography will be blinded to intervention allocation, whereas the patients and treating team will be unblinded. The trial has 80% power to detect a 3% absolute risk reduction in the rate of proximal DVT from 7% to 4%. Consistent with international guidelines, we have developed a detailed plan to guide the analysis of the PREVENT trial. This plan specifies the statistical methods for the evaluation of primary and secondary outcomes, and defines covariates for adjusted analyses a priori. Application of this statistical analysis plan to the PREVENT trial will facilitate unbiased analyses of clinical data. ClinicalTrials.gov , ID: NCT02040103 . Registered on 3 November 2013; Current controlled trials, ID: ISRCTN44653506 . Registered on 30 October 2013.
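
    The stated power can be sanity-checked with a standard two-proportion calculation; the sketch below assumes a two-sided test at alpha = 0.05, which may differ from the SAP's exact method:

    ```python
    # Back-of-envelope sample size for detecting 7% vs 4% proximal DVT.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    effect = proportion_effectsize(0.07, 0.04)   # Cohen's h for the two rates
    n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                             alpha=0.05, power=0.80)
    print(f"~{n_per_arm:.0f} patients per arm")
    ```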

  16. A comparison of the views of extension agents and farmers regarding extension education courses in Dezful, Iran

    NASA Astrophysics Data System (ADS)

    Nazarzadeh Zare, Mohsen; Dorrani, Kamal; Gholamali Lavasani, Masoud

    2012-11-01

    Background and purpose: This study examines the views of farmers and extension agents participating in extension education courses in Dezful, Iran, with regard to problems with these courses. It relies upon a descriptive methodology, using a survey as its instrument. Sample: The statistical population consisted of 5060 farmers and 50 extension agents; all extension agents were studied owing to their small number, and a sample of 466 farmers was selected based on the stratified ratio sampling method. For the data analysis, statistical procedures including the t-test and factor analysis were used. Results: The factor analysis of the farmers' views indicated that these courses have problems such as inadequate use of instructional materials by extension agents, insufficient employment of knowledgeable and experienced extension agents, inconvenient timing of courses for farmers, lack of logical connection between one curriculum and prior ones, failure to consider the opinions of farmers when arranging the courses, and lack of information about the timing of courses. The factor analysis of the extension agents' views indicated that these courses suffer from problems such as the use of the same instructional methods for all curricula, and a lack of continuity between courses, their levels, and their content. Conclusions: Recommendations include: listening to the views of farmers when planning extension courses; providing audiovisual aids, pamphlets and CDs; arranging courses at times convenient for farmers; using incentives to encourage participation; and employing extension agents with knowledge of the latest agricultural issues.

  17. Survival analysis and classification methods for forest fire size.

    PubMed

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.
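
    A sketch of the chosen model class using the lifelines package; the covariate names and data below are invented, not the paper's dataset:

    ```python
    # Cox proportional hazards fit on synthetic fire data.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(8)
    n = 300
    df = pd.DataFrame({
        "fire_weather_index": rng.gamma(2.0, 5.0, n),     # hypothetical covariate
        "fuel_grass": rng.integers(0, 2, n),              # hypothetical fuel flag
    })
    hazard = 0.02 * np.exp(0.05 * df.fire_weather_index + 0.7 * df.fuel_grass)
    df["duration"] = rng.exponential(1 / hazard)          # time to "being held"
    df["event"] = 1                                       # all fires eventually held

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    cph.print_summary(decimals=2)
    ```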

  18. Explorations in statistics: the log transformation.

    PubMed

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
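
    A small demonstration of the point about skewness, with the Box-Cox method used to confirm that an estimated lambda near 0 supports the log transformation:

    ```python
    # Log transformation of a skewed sample, checked with Box-Cox.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    y = rng.lognormal(mean=1.0, sigma=0.8, size=200)   # skewed observations

    print("skewness raw:", round(stats.skew(y), 2))
    print("skewness log:", round(stats.skew(np.log(y)), 2))

    _, lam = stats.boxcox(y)                           # MLE of the Box-Cox lambda
    print("Box-Cox lambda:", round(lam, 2))            # ~0 suggests a log transform
    ```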

  19. Event time analysis of longitudinal neuroimage data.

    PubMed

    Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce

    2014-08-15

    This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline. Copyright © 2014 Elsevier Inc. All rights reserved.
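
    A hedged sketch of step one (the LME stage) using statsmodels, with invented variable names, simulated thickness trajectories, and the extended Cox step omitted:

    ```python
    # Linear mixed effects model with random intercepts and slopes per subject.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(10)
    n_subj, n_visits = 40, 4
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_visits),
        "years": np.tile(np.arange(n_visits), n_subj) * 0.5,
    })
    slope = rng.normal(-0.05, 0.02, n_subj)               # subject-specific atrophy
    df["thickness"] = (2.5 + rng.normal(0, 0.1, n_subj)[df.subject]
                       + slope[df.subject] * df.years
                       + rng.normal(0, 0.02, len(df)))

    lme = smf.mixedlm("thickness ~ years", df, groups="subject",
                      re_formula="~years").fit()
    print(lme.summary())
    ```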

  20. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    PubMed

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long-distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at the voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support for the evidence previously presented in the literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
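
    A simplified illustration of KS-based model selection between two candidate degree distributions; scipy's pareto and lognorm families stand in for the power-law and truncated power-law models compared in the paper:

    ```python
    # Compare candidate distributions for a degree sample by KS distance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    degrees = stats.lognorm.rvs(s=0.9, scale=20, size=2000, random_state=rng)

    for name, dist in [("pareto", stats.pareto), ("lognorm", stats.lognorm)]:
        params = dist.fit(degrees)                    # MLE fit of the family
        ks = stats.kstest(degrees, dist.name, args=params)
        print(f"{name:8s} KS D = {ks.statistic:.3f}")
    # The family with the smaller KS distance fits better; Clauset-style
    # analyses additionally bootstrap a p-value for the preferred model.
    ```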

  1. 'Chain pooling' model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment without replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random-number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.

  2. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829

  3. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  4. Corrosion Analysis of an Experimental Noble Alloy on Commercially Pure Titanium Dental Implants

    PubMed Central

    Bortagaray, Manuel Alberto; Ibañez, Claudio Arturo Antonio; Ibañez, Maria Constanza; Ibañez, Juan Carlos

    2016-01-01

    Objective: To determine whether the Noble Bond® Argen® alloy was electrochemically suitable for the manufacturing of prosthetic superstructures over commercially pure titanium (c.p. Ti) implants. The electrolytic corrosion effects on three types of materials used in prosthetic suprastructures coupled with titanium implants were also analysed: Noble Bond® (Argen®), Argelite 76sf+® (Argen®), and commercially pure titanium. Materials and Methods: 15 samples were studied, each consisting of one abutment and one c.p. titanium implant. They were divided into three groups, namely: Control group: five c.p. titanium abutments (B&W®); Test group 1: five Noble Bond® (Argen®) cast abutments; and Test group 2: five Argelite 76sf+® (Argen®) abutments. In order to observe the corrosion effects, the surface topography was imaged using a confocal microscope. Three metric parameters (Sa: arithmetical mean height of the surface; Sp: maximum height of peaks; Sv: maximum height of valleys) were measured at three different areas: abutment neck, implant neck, and implant body. The samples were immersed in artificial saliva for 3 months, after which the procedure was repeated. The metric parameters were compared by statistical analysis. Results: The analysis of Sa at the level of the implant neck, abutment neck, and implant body showed no statistically significant differences on combining c.p. Ti implants with the three studied alloys. Sp showed no statistically significant differences between the three alloys. Sv showed no statistically significant differences between the three alloys. Conclusion: The effects of electrogalvanic corrosion on each of the materials used when in contact with c.p. Ti showed no statistically significant differences. PMID:27733875

  5. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
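
    The contrast between the two approaches can be sketched as follows, taking shelf life as the time where the one-sided 95% lower confidence bound of the fitted mean line crosses a specification limit (an ICH Q1E-style criterion). All numbers, and the simplification to one batch group per storage condition, are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    SPEC = 95.0                                    # lower spec limit (% claim)
    months = np.tile([0, 3, 6, 9, 12, 18], 3)      # three batches per condition
    rng = np.random.default_rng(7)
    y25 = 100 - 0.20 * months + rng.normal(0, 0.4, months.size)   # 25 degC
    y30 = 100 - 0.35 * months + rng.normal(0, 0.4, months.size)   # 30 degC

    def ols(t, y):
        b, a = np.polyfit(t, y, 1)                 # slope, intercept
        return a, b, y - (a + b * t)               # intercept, slope, residuals

    def shelf_life(t, y, s2, df):
        """Time where the one-sided 95% lower bound of the mean line hits SPEC."""
        a, b, _ = ols(t, y)
        tbar, sxx = t.mean(), ((t - t.mean()) ** 2).sum()
        tcrit = stats.t.ppf(0.95, df)
        grid = np.linspace(0, 60, 6001)
        lower = a + b * grid - tcrit * np.sqrt(s2 * (1 / t.size + (grid - tbar) ** 2 / sxx))
        return grid[np.argmax(lower < SPEC)]

    # Separate analysis: variance estimated within each condition (df = n - 2).
    for name, y in (("25C", y25), ("30C", y30)):
        r = ols(months, y)[2]
        print(name, "separate:", shelf_life(months, y, (r @ r) / (months.size - 2),
                                            months.size - 2), "months")

    # Combined analysis: residuals pooled across conditions give more degrees of
    # freedom, a tighter t-multiplier, and typically a longer shelf life.
    r25, r30 = ols(months, y25)[2], ols(months, y30)[2]
    s2p = (r25 @ r25 + r30 @ r30) / (2 * (months.size - 2))
    for name, y in (("25C", y25), ("30C", y30)):
        print(name, "combined:", shelf_life(months, y, s2p,
                                            2 * (months.size - 2)), "months")
    ```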

  6. Measurement Consistency from Magnetic Resonance Images

    PubMed Central

    Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.

    2010-01-01

    Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, the Bland-Altman method, the paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results Correlation analysis and linear regression were shown to be insufficient for detecting measurement inconsistency; they are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique, so it lacks numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient, and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
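
    A small sketch of three of the reviewed checks applied to simulated two-rater data (the MRI measurements themselves are not reproduced here):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    truth = rng.normal(50, 8, 40)                 # 40 structures, "true" lengths
    rater1 = truth + rng.normal(0.0, 1.0, 40)
    rater2 = truth + rng.normal(0.8, 1.0, 40)     # rater 2 carries a +0.8 mm bias

    # Bland-Altman numbers: mean difference (bias) and 95% limits of agreement.
    diff = rater1 - rater2
    bias, sd = diff.mean(), diff.std(ddof=1)
    print(f"bias={bias:.2f} mm, limits of agreement="
          f"({bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f})")

    # Paired t-test: is the mean difference zero?
    print("paired t-test:", stats.ttest_rel(rater1, rater2))

    # Pearson r stays high despite the bias -- the paper's point about
    # correlation being insufficient as a consistency check.
    print("Pearson r:", stats.pearsonr(rater1, rater2)[0])
    ```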

  7. Care dependency of hospitalized children: testing the Care Dependency Scale for Paediatrics in a cross-cultural comparison.

    PubMed

    Tork, Hanan; Dassen, Theo; Lohrmann, Christa

    2009-02-01

    This paper is a report of a study to examine the psychometric properties of the Care Dependency Scale for Paediatrics in Germany and Egypt and to compare the care dependency of school-age children in both countries. Cross-cultural differences in the care dependency of older adults have been documented in the literature, but little is known about the differences and similarities with regard to children's care dependency in different cultures. A convenience sample of 258 school-aged children from Germany and Egypt participated in the study in 2005. The reliability of the Care Dependency Scale for Paediatrics was assessed in terms of internal consistency and interrater reliability. Factor analysis (principal component analysis) was employed to verify the construct validity. A Visual Analogue Scale was used to investigate the criterion-related validity. Good internal consistency was detected for both the Arabic and German versions. Factor analysis revealed one factor for both versions. Pearson's correlation between the Care Dependency Scale for Paediatrics and the Visual Analogue Scale was statistically significant for both versions, indicating criterion-related validity. Statistically significant differences between the participants were detected regarding the mean sum score on the Care Dependency Scale for Paediatrics. The Care Dependency Scale for Paediatrics is a reliable and valid tool for assessing the care dependency of children and is recommended for assessing the care dependency of children from different ethnic origins. Differences in care dependency between German and Egyptian children were detected, which might be due to cultural differences.

  8. Potential predictors of risk sexual behavior among private college students in Mekelle City, North Ethiopia.

    PubMed

    Gebresllasie, Fanna; Tsadik, Mache; Berhane, Eyoel

    2017-01-01

    Risky sexual practices among students of public universities and colleges are common in Ethiopia. However, little is known about the risky sexual behavior of students in private colleges, where increasing numbers of students are enrolled. Therefore, this study aimed to assess the magnitude and predictors of risky sexual behaviors among students of private colleges in Mekelle City. A mixed design of quantitative and qualitative methods was used among 627 randomly selected students of private colleges from February to March 2013. A self-administered questionnaire and focus group discussions were used to collect data. A thematic content analysis was used for the qualitative part. For the quantitative study, univariate, bivariate, and multivariable analyses were performed using the SPSS version 16 statistical package, and a p value less than 0.05 was used as the cutoff for statistical significance. Among the total 590 respondents, 151 (29.1%) had ever had sex. Among the sexually active students, 30.5% reported having had multiple sexual partners, and consistent condom use was nearly 39%. In multivariable logistic regression analysis, variables such as sex, age group, sex in the last twelve months, and condom use in the last twelve months were found to be significantly associated with risky sexual behavior. The qualitative and quantitative findings were consistent regarding the presence of risk factors. This study showed that risky sexual behaviors, such as having multiple sexual partners and substance use, are common among private college students, so colleges should emphasize promoting healthy sexual and reproductive health programs.
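
    The multivariable step could look like the following sketch, with statsmodels standing in for SPSS; the predictor names and simulated data are assumptions, not the study's dataset:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 590
    x = np.column_stack([
        rng.integers(0, 2, n),        # sex (0 = female, 1 = male)
        rng.integers(18, 30, n),      # age in years
        rng.integers(0, 2, n),        # substance use (0/1)
    ])
    # Simulate the outcome from a known logistic model (assumption).
    logit_p = -4 + 0.8 * x[:, 0] + 0.12 * x[:, 1] + 1.1 * x[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

    model = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
    print(np.exp(model.params))       # adjusted odds ratios
    print(model.pvalues < 0.05)       # predictors significant at the 0.05 cutoff
    ```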

  9. Turkish Version of Kolcaba's Immobilization Comfort Questionnaire: A Validity and Reliability Study.

    PubMed

    Tosun, Betül; Aslan, Özlem; Tunay, Servet; Akyüz, Aygül; Özkan, Hüseyin; Bek, Doğan; Açıksöz, Semra

    2015-12-01

    The purpose of this study was to determine the validity and reliability of the Turkish version of the Immobilization Comfort Questionnaire (ICQ). The sample used in this methodological study consisted of 121 patients undergoing lower extremity arthroscopy in a training and research hospital. The validity study of the questionnaire assessed language validity, structural validity and criterion validity. Structural validity was evaluated via exploratory factor analysis. Criterion validity was evaluated by assessing the correlation between the visual analog scale (VAS) scores (i.e., the comfort and pain VAS scores) and the ICQ scores using Spearman's correlation test. The Kaiser-Meyer-Olkin coefficient and Bartlett's test of sphericity were used to determine the suitability of the data for factor analysis. Internal consistency was evaluated to determine reliability. The data were analyzed with SPSS version 15.00 for Windows. Descriptive statistics were presented as frequencies, percentages, means and standard deviations. A p value ≤ .05 was considered statistically significant. A moderate positive correlation was found between the ICQ scores and the VAS comfort scores; a moderate negative correlation was found between the ICQ and the VAS pain measures in the criterion validity analysis. Cronbach α values of .75 and .82 were found for the first and second measurements, respectively. The findings of this study reveal that the ICQ is a valid and reliable tool for assessing the comfort of patients in Turkey who are immobilized because of lower extremity orthopedic problems. Copyright © 2015. Published by Elsevier B.V.
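
    The two headline computations, internal consistency and criterion validity, reduce to a few lines; the sketch below uses simulated item scores (the item count and data are assumptions):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    latent = rng.normal(0, 1, 121)                            # 121 patients
    items = latent[:, None] + rng.normal(0, 0.8, (121, 6))    # 6 ICQ-like items

    def cronbach_alpha(x):
        """Cronbach's alpha: k/(k-1) * (1 - sum item variances / total variance)."""
        k = x.shape[1]
        return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                              / x.sum(axis=1).var(ddof=1))

    icq_total = items.sum(axis=1)
    vas_comfort = latent + rng.normal(0, 1.0, 121)            # criterion measure
    print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
    print("Spearman rho vs VAS:", stats.spearmanr(icq_total, vas_comfort))
    ```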

  10. Cultural Adaptation of the Cardiff Acne Disability Index to a Hindi Speaking Population: A Pilot Study.

    PubMed

    Gupta, Aayush; Sharma, Yugal K; Dash, K; Verma, Sampurna

    2015-01-01

    Acne vulgaris is known to impair many aspects of the quality of life (QoL) of its patients. To translate the Cardiff Acne Disability Index (CADI) from English into Hindi and to assess its validity and reliability in Hindi speaking patients with acne from India. Hindi version of CADI, translated and linguistically validated as per published international guidelines, along with a previously translated Hindi version of dermatology life quality index (DLQI) and a demographic questionnaire were administered to acne patients. The internal consistency reliability of the Hindi version of CADI and its concurrent validity were assessed by Cronbach's alpha co-efficient and Spearman's correlation co-efficient respectively. Construct validity was examined by factor analysis. Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS) version 20 (SPSS Inc., Chicago, IL, USA) for Windows. One hundred Hindi speaking patients with various grades of acne participated in the study. Hindi version of CADI showed high internal consistency reliability (Cronbach's alpha co-efficient = 0.722). Mean item-to-total correlation co-efficient ranged from 0.502 to 0.760. Concurrent validity of the scale was supported by a significant correlation with the Hindi DLQI. Factor analysis revealed the presence of two dimensions underlying the factor structure of the scale. Hindi CADI is equivalent to the original English version and constitutes a reliable and valid tool for clinical assessment of the impact of acne on QoL.

  11. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management.

    PubMed

    Parrott, James Scott; Henry, Beverly; Thompson, Kyle L; Ziegler, Jane; Handu, Deepa

    2018-05-02

    Nutrition interventions are often complex and multicomponent. Typical approaches to meta-analyses that focus on individual causal relationships to provide guideline recommendations are not sufficient to capture this complexity. The objective of this study is to describe the method of meta-analysis used for the Pediatric Weight Management (PWM) Guidelines update and provide a worked example that can be applied in other areas of dietetics practice. The effects of PWM interventions were examined for body mass index (BMI), body mass index z-score (BMIZ), and waist circumference at four different time periods. For intervention-level effects, intervention types were identified empirically using multiple correspondence analysis paired with cluster analysis. Pooled effects of identified types were examined using random effects meta-analysis models. Differences in effects among types were examined using meta-regression. Context-level effects were examined using qualitative comparative analysis. Three distinct types (or families) of PWM interventions were identified: medical nutrition, behavioral, and missing components. Medical nutrition and behavioral types showed statistically significant improvements in BMIZ across all time points. Results were less consistent for BMI and waist circumference, although four distinct patterns of weight status change were identified. These varied by intervention type as well as outcome measure. Meta-regression indicated statistically significant differences between the medical nutrition and behavioral types vs the missing component type for both BMIZ and BMI, although the pattern varied by time period and intervention type. Qualitative comparative analysis identified distinct configurations of context characteristics at each time point that were consistent with positive outcomes among the intervention types. Although analysis of individual causal relationships is invaluable, this approach is inadequate to capture the complexity of dietetics practice. An alternative approach that integrates intervention-level with context-level meta-analyses may provide deeper understanding in the development of practice guidelines. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
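
    For the pooled-effect step, a random-effects meta-analysis with the DerSimonian-Laird heterogeneity estimator is the standard construction; the effect sizes below are made-up placeholders, not the PWM results:

    ```python
    import numpy as np
    from scipy import stats

    yi = np.array([-0.21, -0.15, -0.30, -0.10, -0.25])  # per-study BMIZ change
    se = np.array([0.06, 0.08, 0.10, 0.05, 0.09])       # standard errors

    wi = 1 / se ** 2                                    # fixed-effect weights
    ybar = (wi * yi).sum() / wi.sum()
    q = (wi * (yi - ybar) ** 2).sum()                   # Cochran's Q
    df = yi.size - 1
    # DerSimonian-Laird between-study variance, floored at zero.
    tau2 = max(0.0, (q - df) / (wi.sum() - (wi ** 2).sum() / wi.sum()))

    wstar = 1 / (se ** 2 + tau2)                        # random-effects weights
    mu = (wstar * yi).sum() / wstar.sum()
    se_mu = np.sqrt(1 / wstar.sum())
    z = mu / se_mu
    print(f"pooled effect={mu:.3f} (SE {se_mu:.3f}), tau^2={tau2:.4f}, "
          f"p={2 * stats.norm.sf(abs(z)):.4f}")
    ```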

  12. [Consistency study of PowerPlex 21 kit and Goldeneye 20A kit and forensic application].

    PubMed

    Ren, He; Liu, Ying; Zhang, Qing-Xia; Jiao, Zhang-Ping

    2014-06-01

    To assess the consistency of genotyping results between the PowerPlex 21 kit and the Goldeneye 20A kit, STR loci were amplified in DNA samples from 205 unrelated individuals of the Beijing Han population, and the concordance of typing at the 19 overlapping STR loci was examined. The genetic polymorphism of the D1S1656 locus was also characterized. All 19 overlapping loci gave fully concordant typing results, and the ratio of peak heights at heterozygous loci showed no statistically significant difference between the two kits (P > 0.05). The observed heterozygosity of D1S1656 was 0.878, the discrimination power was 0.949, the probability of paternity exclusion was 0.751 for trios and 0.506 for duos, and the polymorphism information content was 0.810. The PowerPlex 21 and Goldeneye 20A kits show good consistency, the primer design is reasonable, and D1S1656 is highly polymorphic. Both kits can be used for human genetic analysis, paternity testing, and individual identification in forensic practice.

  13. Recognition of battery aging variations for LiFePO4 batteries in 2nd use applications combining incremental capacity analysis and statistical approaches

    NASA Astrophysics Data System (ADS)

    Jiang, Yan; Jiang, Jiuchun; Zhang, Caiping; Zhang, Weige; Gao, Yang; Guo, Qipei

    2017-08-01

    To assess the economic benefits of battery reuse, the consistency and aging characteristics of a retired LiFePO4 battery pack are studied in this paper. The consistency of the battery modules is analyzed from the perspective of capacity and internal resistance. Test results indicate that battery module parameter dispersion increases along with battery aging; however, better capacity consistency among battery modules does not ensure better resistance consistency. The aging characteristics of the battery pack are then analyzed, with the main results as follows: (1) Weibull and normal distributions are suitable for fitting the capacity and resistance distributions of the battery modules, respectively; (2) SOC imbalance is the dominating factor in the capacity fading process of the battery pack; (3) by employing incremental capacity (IC) and IC peak area analysis, a consistency evaluation method representing the aging mechanism variations of the battery modules is proposed, and an accurate battery screening strategy is put forward. This study not only provides data support for evaluating the economic benefits of retired batteries but also presents a method to recognize battery aging variations, which is helpful for the rapid evaluation and screening of retired batteries for 2nd use.

  14. Overcoming bias in estimating the volume-outcome relationship.

    PubMed

    Tsai, Alexander C; Votruba, Mark; Bridges, John F P; Cebul, Randall D

    2006-02-01

    To examine the effect of hospital volume on 30-day mortality for patients with congestive heart failure (CHF) using administrative and clinical data in conventional regression and instrumental variables (IV) estimation models. The primary data consisted of longitudinal information on comorbid conditions, vital signs, clinical status, and laboratory test results for 21,555 Medicare-insured patients aged 65 years and older hospitalized for CHF in northeast Ohio in 1991-1997. The patient was the primary unit of analysis. We fit a linear probability model to the data to assess the effects of hospital volume on patient mortality within 30 days of admission. Both administrative and clinical data elements were included for risk adjustment. Linear distances between patients and hospitals were used to construct the instrument, which was then used to assess the endogeneity of hospital volume. When only administrative data elements were included in the risk adjustment model, the estimated volume-outcome effect was statistically significant (p=.029) but small in magnitude. The estimate was markedly attenuated in magnitude and statistical significance when clinical data were added to the model as risk adjusters (p=.39). IV estimation shifted the estimate in a direction consistent with selective referral, but we were unable to reject the consistency of the linear probability estimates. Use of only administrative data for volume-outcomes research may generate spurious findings. The IV analysis further suggests that conventional estimates of the volume-outcome relationship may be contaminated by selective referral effects. Taken together, our results suggest that efforts to concentrate hospital-based CHF care in high-volume hospitals may not reduce mortality among elderly patients.

  15. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    PubMed Central

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660

  16. A Statistical Assessment of Information, Knowledge and Attitudes of Medical Students Regarding Contraception Use.

    PubMed

    Simionescu, Anca A; Horobet, Alexandra; Belascu, Lucian

    2017-12-01

    To evaluate how contraception use is linked to the information, knowledge and attitudes of medical students towards family planning and contraception. This is a voluntary cross-sectional study using an anonymous questionnaire applied to 62 medical students. The questionnaire had the following main structure: characteristics of the studied population, information on contraception, knowledge about contraception methods, attitudes regarding family planning and contraception, and contraception use. Statistical analysis was performed using STATISTICA 8.0 software, and the statistical significance of the data was verified using the t-test. The survey had a 95% response rate. Seventy-seven percent of the studied population consisted of females aged between 20-40 years, with 85.50% of them being 20-25 years old. The overwhelming majority of respondents believed it was important to be informed on the subject and considered themselves to be well informed on contraception. The internet and courses are the main sources of information. Of all respondents, 75.41% had routine discussions with their partners regarding contraception, 53.23% talked about it with family members and 46.77% with their physician; 90.16% had at least one gynecological examination and 47.54% got themselves tested for sexually transmitted diseases. The condom and the contraceptive pill were the main contraceptive methods for the respondents. Romanian medical students share similar features to their peers in developed European countries. We used statistical analysis to demonstrate that information, knowledge and attitudes on contraception are closely linked to contraceptive choice.

  17. Awareness, Attitude, and Knowledge of Basic Life Support among Medical, Dental, and Nursing Faculties and Students in the University Hospital.

    PubMed

    Sangamesh, N C; Vidya, K C; Pathi, Jugajyoti; Singh, Arpita

    2017-01-01

    To assess the awareness, attitude, and knowledge of basic life support (BLS) among medical, dental, and nursing students and faculties, and to propose the inclusion of BLS skills in the undergraduate (UG) academic curriculum. Recognition, prevention, and effective management of life-threatening emergencies are the responsibility of health-care professionals. These situations can be successfully managed with proper knowledge and training in BLS skills. These life-saving maneuvers can be taught through structured resuscitation programs, which are lacking in the academic curriculum. A questionnaire study consisting of 20 questions was conducted among 659 participants in the Kalinga Institute of Dental Sciences, Kalinga Institute of Medical Sciences, KIIT University. Medical junior residents, BDS faculties, interns, nursing faculties, and 3rd-year and final-year UG students from both medical and dental colleges were chosen. The statistical analysis was carried out using SPSS software version 20.0 (IBM Corp., Armonk, NY). After collecting the data, the values were statistically analyzed and tabulated. Statistical analysis was performed using the Mann-Whitney U-test, and results with P < 0.05 were considered statistically significant. Participants were aware of BLS and showed a positive attitude toward it, whereas their knowledge of BLS was lacking, with statistically significant P values. By introducing BLS regularly into the academic curriculum and through routine hands-on workshops, all health-care providers should become well versed in the BLS skills needed to manage life-threatening emergencies effectively.

  18. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks: local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy to use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available under the GNU GPL. PMID:17877802

  19. ParallABEL: an R library for generalized parallelization of genome-wide association studies.

    PubMed

    Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S

    2010-04-29

    Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. Acquiring the programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP) or trait, such as SNP characterization statistics or association test statistics; the input data of this group consists of SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example the summary statistics of genotype quality for each sample; the input data of this group consists of individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group consists of pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation; the input data of this group consists of pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.
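
    The partition/compute/merge pattern that ParallABEL automates for the first group (one statistic per SNP) can be sketched in Python, with multiprocessing standing in for Rmpi; the data and the correlation-based association stand-in are assumptions:

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy import stats

    rng = np.random.default_rng(2)
    N_IND, N_SNP = 500, 10_000
    genotypes = rng.integers(0, 3, (N_IND, N_SNP))     # 0/1/2 allele counts
    trait = rng.normal(size=N_IND)

    def chunk_assoc(cols):
        """Per-SNP association p-values for one chunk of SNP columns."""
        return [stats.pearsonr(genotypes[:, j], trait)[1] for j in cols]

    if __name__ == "__main__":
        chunks = np.array_split(np.arange(N_SNP), 8)   # partition the SNPs
        with Pool(8) as pool:
            parts = pool.map(chunk_assoc, chunks)      # distribute the work
        pvals = np.concatenate(parts)                  # merge the outputs
        print("min p-value:", pvals.min())
    ```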

  20. Assessing Attitudes towards Statistics among Medical Students: Psychometric Properties of the Serbian Version of the Survey of Attitudes Towards Statistics (SATS)

    PubMed Central

    Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa

    2014-01-01

    Background Medical statistics has become important and relevant for future doctors, enabling them to practice evidence based medicine. Recent studies report that students’ attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. Methods The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Results Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051–0.078) was below the suggested value of ≤0.08. Cronbach’s alpha of the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. Conclusion The present study provided evidence for the appropriate metric properties of the Serbian version of the SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS might be a reliable and valid instrument for identifying medical students’ attitudes towards statistics in the Serbian educational context. PMID:25405489

  1. Assessing attitudes towards statistics among medical students: psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS).

    PubMed

    Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa

    2014-01-01

    Medical statistics has become important and relevant for future doctors, enabling them to practice evidence based medicine. Recent studies report that students' attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051-0.078) was below the suggested value of ≤0.08. Cronbach's alpha of the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. The present study provided evidence for the appropriate metric properties of the Serbian version of the SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS might be a reliable and valid instrument for identifying medical students' attitudes towards statistics in the Serbian educational context.

  2. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    PubMed

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.

  3. Heavy flavor decay of Zγ at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy M. Harrington-Taber

    2013-01-01

    Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected pp̄ → Zγ → bb̄ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.

  4. Publication bias in situ

    PubMed Central

    Phillips, Carl V

    2004-01-01

    Background Publication bias, as typically defined, refers to the decreased likelihood of studies' results being published when they are near the null, not statistically significant, or otherwise "less interesting." But choices about how to analyze the data and which results to report create a publication bias within the published results, a bias I label "publication bias in situ" (PBIS). Discussion PBIS may create much greater bias in the literature than traditionally defined publication bias (the failure to publish any result from a study). The causes of PBIS are well known, consisting of various decisions about reporting that are influenced by the data. But its impact is not generally appreciated, and very little attention is devoted to it. What attention there is consists largely of rules for statistical analysis that are impractical and do not actually reduce the bias in reported estimates. PBIS cannot be reduced by statistical tools because it is not fundamentally a problem of statistics, but rather of non-statistical choices and plain language interpretations. PBIS should be recognized as a phenomenon worthy of study – it is extremely common and probably has a huge impact on results reported in the literature – and there should be greater systematic efforts to identify and reduce it. The paper presents examples, including results of a recent HIV vaccine trial, that show how easily PBIS can have a large impact on reported results, as well as how there can be no simple answer to it. Summary PBIS is a major problem, worthy of substantially more attention than it receives. There are ways to reduce the bias, but they are very seldom employed because they are largely unrecognized. PMID:15296515

  5. Fels-Rand: an Xlisp-Stat program for the comparative analysis of data under phylogenetic uncertainty.

    PubMed

    Blomberg, S

    2000-11-01

    Currently available programs for the comparative analysis of phylogenetic data do not perform optimally when the phylogeny is not completely specified (i.e. the phylogeny contains polytomies). Recent literature suggests that a better way to analyse the data would be to create random trees from the known phylogeny that are fully-resolved but consistent with the known tree. A computer program is presented, Fels-Rand, that performs such analyses. A randomisation procedure is used to generate trees that are fully resolved but whose structure is consistent with the original tree. Statistics are then calculated on a large number of these randomly-generated trees. Fels-Rand uses the object-oriented features of Xlisp-Stat to manipulate internal tree representations. Xlisp-Stat's dynamic graphing features are used to provide heuristic tools to aid in analysis, particularly outlier analysis. The usefulness of Xlisp-Stat as a system for phylogenetic computation is discussed. Available from the author or at http://www.uq.edu.au/~ansblomb/Fels-Rand.sit.hqx. Xlisp-Stat is available from http://stat.umn.edu/~luke/xls/xlsinfo/xlsinfo.html. s.blomberg@abdn.ac.uk
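
    The randomisation idea, generating fully resolved trees consistent with a polytomy, can be sketched independently of Xlisp-Stat by repeatedly joining two random children; the nested-tuple tree representation below is an illustration, not Fels-Rand's internal structure:

    ```python
    import random

    def resolve_polytomy(children, rng=random.Random(0)):
        """Randomly resolve a polytomy into a fully bifurcating subtree."""
        nodes = list(children)
        while len(nodes) > 2:
            a, b = rng.sample(range(len(nodes)), 2)    # pick two lineages
            merged = (nodes[a], nodes[b])              # join them
            nodes = [n for i, n in enumerate(nodes) if i not in (a, b)] + [merged]
        return tuple(nodes)

    # Each call yields one random resolution; comparative statistics would then
    # be computed and averaged over a large number of such trees.
    print(resolve_polytomy(["A", "B", "C", "D", "E"]))
    ```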

  6. Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg

    Statistical analysis of the data set consisting of the activity concentrations of ¹³⁷Cs in soils in the Bansko–Razlog region is carried out in order to establish the dependence of the deposition and migration of ¹³⁷Cs on the soil type. The descriptive statistics and the test of normality show that the data set does not have a normal distribution. A positively skewed distribution and possible outlying values of the activity of ¹³⁷Cs in soils were observed. After reduction of the effects of outliers, the data set was divided into two parts, depending on the soil type. Tests of normality of the two new data sets show that they have normal distributions. The ordinary kriging technique is used to characterize the spatial distribution of the activity of ¹³⁷Cs over an area covering 40 km² (the whole Razlog valley). The result (a map of the spatial distribution of the activity concentration of ¹³⁷Cs) can be used as a reference point for future studies on the assessment of radiological risk to the population and the erosion of soils in the study area.
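
    A sketch of the two steps described, a normality check followed by ordinary kriging, assuming the pykrige package and synthetic site coordinates and activities:

    ```python
    import numpy as np
    from scipy import stats
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(9)
    x, y = rng.uniform(0, 8, 60), rng.uniform(0, 5, 60)   # km, sampling sites
    activity = rng.lognormal(3.0, 0.5, 60)                # Bq/kg, skewed data

    # Normality tests: raw activities fail, log-activities pass (as the
    # positively skewed distribution in the abstract suggests).
    print("Shapiro-Wilk, raw:", stats.shapiro(activity).pvalue)
    print("Shapiro-Wilk, log:", stats.shapiro(np.log(activity)).pvalue)

    # Ordinary kriging of the (log) activity field; the spherical variogram
    # model is an assumption here.
    ok = OrdinaryKriging(x, y, np.log(activity), variogram_model="spherical")
    grid_x, grid_y = np.linspace(0, 8, 81), np.linspace(0, 5, 51)
    z, var = ok.execute("grid", grid_x, grid_y)   # interpolated log-activity map
    print("map shape:", z.shape, "max log-activity:", float(z.max()))
    ```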

  7. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, paranoid personality disorder diagnosis: a unitary or a two-dimensional construct?

    PubMed

    Falkum, Erik; Pedersen, Geir; Karterud, Sigmund

    2009-01-01

    This article examines reliability and validity aspects of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) paranoid personality disorder (PPD) diagnosis. Patients with personality disorders (n = 930) from the Norwegian network of psychotherapeutic day hospitals, of which 114 had PPD, were included in the study. Frequency distribution, χ², correlations, reliability statistics, exploratory, and confirmatory factor analyses were performed. The distribution of PPD criteria revealed no distinct boundary between patients with and without PPD. Diagnostic category membership was obtained in 37 of 64 theoretically possible ways. The PPD criteria formed a separate factor in a principal component analysis, whereas a confirmatory factor analysis indicated that the DSM-IV PPD construct consists of 2 separate dimensions as follows: suspiciousness and hostility. The reliability of the unitary PPD scale was only 0.70, probably partly due to the apparent 2-dimensionality of the construct. Persistent unwarranted doubts about the loyalty of friends had the highest diagnostic efficiency, whereas unwarranted accusations of infidelity of partner had particularly poor indicator properties. The reliability and validity of the unitary PPD construct may be questioned. The 2-dimensional PPD model should be further explored.

  8. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because durations are usually nonnegative and skew-distributed. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written in the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.
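
    A rough Python counterpart, using lifelines rather than the authors' R code, is sketched below; lifelines' cluster_col yields robust standard errors for the repeated measurements rather than a true random effect, so this is a simplification of a Cox mixed model, and the data are simulated assumptions:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(4)
    n_subj, n_rep = 20, 15
    subj = np.repeat(np.arange(n_subj), n_rep)
    frailty = np.repeat(rng.normal(0, 0.4, n_subj), n_rep)   # per-subject effect
    condition = rng.integers(0, 2, n_subj * n_rep)           # e.g. gesture vs not
    # Durations shortened by the condition and by the subject effect.
    duration = rng.exponential(np.exp(-(0.5 * condition + frailty)))

    df = pd.DataFrame({"T": duration, "E": 1, "condition": condition,
                       "subject": subj})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E",
                            cluster_col="subject", formula="condition")
    cph.print_summary()   # hazard ratio for condition with robust SEs
    ```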

  9. Speech disorders did not correlate with age at onset of Parkinson's disease.

    PubMed

    Dias, Alice Estevo; Barbosa, Maira Tonidandel; Limongi, João Carlos Papaterra; Barbosa, Egberto Reis

    2016-02-01

    Speech disorders are common manifestations of Parkinson's disease. Objective To compare speech articulation in patients according to age at onset of the disease. Methods Fifty patients were divided into two groups: Group I consisted of 30 patients with age at onset between 40 and 55 years; Group II consisted of 20 patients with age at onset after 65 years. All patients were evaluated based on the Unified Parkinson's Disease Rating Scale scores and the Hoehn and Yahr scale, and speech was evaluated by perceptual and acoustic analysis. Results There was no statistically significant difference between the two groups regarding neurological involvement and speech characteristics. Correlation analysis indicated differences in speech articulation in relation to staging and axial scores of rigidity and bradykinesia for middle and late onset. Conclusions Impairment of speech articulation did not correlate with age at onset of the disease, but was positively related to disease duration and higher scores in both groups.

  10. Disability Measurement for Korean Community-Dwelling Adults With Stroke: Item-Level Psychometric Analysis of the Korean Longitudinal Study of Ageing

    PubMed Central

    2018-01-01

    Objective To investigate the psychometric properties of the activities of daily living (ADL) instrument used in the analysis of the Korean Longitudinal Study of Ageing (KLoSA) dataset. Methods A retrospective study was carried out involving the 2006 KLoSA records of community-dwelling adults diagnosed with stroke. The ADL instrument used for the analysis of KLoSA included 17 items, which were analyzed using Rasch modeling to develop a robust outcome measure. The unidimensionality of the ADL instrument was examined based on confirmatory factor analysis with a one-factor model. Item-level psychometric analysis of the ADL instrument included fit statistics, internal consistency, precision, and the item difficulty hierarchy. Results The study sample included a total of 201 community-dwelling adults (1.5% of the Korean population over 45 years of age; mean age=70.0 years, SD=9.7) with a history of stroke. The ADL instrument demonstrated a unidimensional construct. Two misfit items, money management (mean square [MnSq]=1.56, standardized Z-statistics [ZSTD]=2.3) and phone use (MnSq=1.78, ZSTD=2.3), were removed from the analysis. The remaining 15 items demonstrated good item fit, high internal consistency (person reliability=0.91), and good precision (person strata=3.48). The instrument precisely estimated person measures within a wide range of theta (−4.75 logits < θ < 3.97 logits) with a reliability of 0.9, and showed a conceptual hierarchy of item difficulty. Conclusion The findings indicate that the 15 ADL items met Rasch expectations of unidimensionality and demonstrated good psychometric properties. It is proposed that the validated ADL instrument can be used as a primary outcome measure for assessing longitudinal disability trajectories in the Korean adult population and can be employed for comparative analysis of international disability across national aging studies. PMID:29765888

  11. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
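
    The weighting fix itself is simple to illustrate: when counting how often the recomputed (supergene) trees display a candidate split, each tree is counted once per gene in its bin rather than once per bin. The bin sizes and split indicators below are assumptions, and real use would operate on full tree topologies rather than a single split:

    ```python
    from collections import Counter

    # (supergene_tree_id, bin_size): bin behind tree "t_a" held 7 genes, etc.
    bin_trees = [("t_a", 7), ("t_b", 2), ("t_c", 1)]
    # Whether each supergene tree displays a candidate split AB|CD (assumption).
    displays_split = {"t_a": True, "t_b": False, "t_c": True}

    unweighted, weighted = Counter(), Counter()
    for tree, size in bin_trees:
        unweighted[displays_split[tree]] += 1      # naive binning: 2 vs 1
        weighted[displays_split[tree]] += size     # weighted binning: 8 vs 2

    print("unweighted support:", unweighted[True] / sum(unweighted.values()))
    print("weighted support  :", weighted[True] / sum(weighted.values()))
    # Weighting by bin size restores gene-level frequencies, which is what makes
    # the binned pipeline statistically consistent under the coalescent.
    ```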

  12. An Integrated Analysis of the Physiological Effects of Space Flight: Executive Summary

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    A large array of models was applied in a unified manner to solve problems in space flight physiology. Mathematical simulation was used as an alternative way of looking at physiological systems and maximizing the yield from previous space flight experiments. A medical data analysis system was created which consists of an automated data base, a computerized biostatistical and data analysis system, and a set of simulation models of physiological systems. Five basic models were employed: (1) a pulsatile cardiovascular model; (2) a respiratory model; (3) a thermoregulatory model; (4) a circulatory, fluid, and electrolyte balance model; and (5) an erythropoiesis regulatory model. Algorithms were provided to perform routine statistical tests, multivariate analysis, nonlinear regression analysis, and autocorrelation analysis. Special purpose programs were prepared for rank correlation, factor analysis, and the integration of the metabolic balance data.

  13. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb), as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. We then apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
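
    A sketch of a DCMS-style combination, based on one common reading of the de-correlation idea: empirical p-values scored as log((1-p)/p), with each statistic down-weighted by its total absolute correlation with the others (consult Ma et al. 2015 for the exact form). The three synthetic scans below are placeholders:

    ```python
    import numpy as np
    from scipy import stats as st

    rng = np.random.default_rng(6)
    n_snp = 5000
    # Three correlated genome-scan statistics (stand-ins for e.g. iHS, XP-EHH
    # and CLR values at each SNP).
    base = rng.normal(size=n_snp)
    scans = np.column_stack([base + rng.normal(0, s, n_snp) for s in (0.5, 0.7, 2.0)])

    # Empirical one-sided p-values via fractional ranks (large values = small p).
    p = 1 - (st.rankdata(scans, axis=0) - 0.5) / n_snp

    r = np.corrcoef(scans, rowvar=False)      # statistic-by-statistic correlation
    weights = 1 / np.abs(r).sum(axis=1)       # de-correlation weights
    dcms = (np.log((1 - p) / p) * weights).sum(axis=1)
    print("top candidate SNP index:", int(dcms.argmax()))
    ```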

  14. 3D Mueller-matrix mapping of biological optically anisotropic networks

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Ushenko, V. O.; Bodnar, G. B.; Zhytaryuk, V. G.; Prydiy, O. G.; Koval, G.; Lukashevich, I.; Vanchuliak, O.

    2018-01-01

    The paper consists of two parts. The first part presents the short theoretical basics of the method of azimuthally invariant Mueller-matrix description of the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of the linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments that characterize the distributions of the amplitudes of the wavelet coefficients of the MMI at different scanning scales are determined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease, and defines objective criteria for differentiating the cause of death.
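
    The quantitative step described, statistical moments of wavelet-coefficient amplitudes per scale, can be sketched as follows, assuming the pywt package and a synthetic one-dimensional scan line in place of the measured MMI maps:

    ```python
    import numpy as np
    import pywt
    from scipy import stats

    rng = np.random.default_rng(8)
    profile = np.cumsum(rng.normal(size=1024))      # stand-in MMI scan line

    coeffs = pywt.wavedec(profile, "db4", level=4)  # multiscale decomposition
    for level, c in enumerate(coeffs[1:], start=1): # detail coefficients only
        amp = np.abs(c)
        print(f"scale {level}: mean={amp.mean():.3f} sd={amp.std(ddof=1):.3f} "
              f"skew={stats.skew(amp):.3f} kurtosis={stats.kurtosis(amp):.3f}")
    # Differences in these per-scale moments are the sort of objective criteria
    # the abstract describes for differentiating tissue states.
    ```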

  15. Multiscale polarization diagnostics of birefringent networks in problems of necrotic changes diagnostics

    NASA Astrophysics Data System (ADS)

    Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Ushenko, V. O.; Besaha, R. N.; Pavlyukovich, N.; Pavlyukovich, O.

    2018-01-01

    The paper consists of two parts. The first part presents the short theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments characterizing the distributions of the amplitudes of the wavelet coefficients of the MMI at different scanning scales are defined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear birefringence distributions of myocardium tissue from subjects who died of infarction or ischemic heart disease, and defines objective criteria for differentiating the cause of death.

  16. Things fall apart: biological species form unconnected parsimony networks.

    PubMed

    Hart, Michael W; Sunday, Jennifer

    2007-10-22

    The generality of operational species definitions is limited by problematic definitions of between-species divergence. A recent phylogenetic species concept based on a simple objective measure of statistically significant genetic differentiation uses between-species application of statistical parsimony networks that are typically used for population genetic analysis within species. Here we review recent phylogeographic studies and reanalyse several mtDNA barcoding studies using this method. We found that (i) alignments of DNA sequences typically fall apart into a separate subnetwork for each Linnean species (but with a higher rate of true positives for mtDNA data) and (ii) DNA sequences from single species typically stick together in a single haplotype network. Departures from these patterns are usually consistent with hybridization or cryptic species diversity.

  17. [The incidence of craniomandibular disorders in patients with cervical dysfunctions. A clinico-statistical assessment].

    PubMed

    Carossa, S; Catapano, S; Previgliano, V; Preti, G

    1993-05-01

    The aim of this research was to measure the incidence of craniomandibular disorders in a group of patients with functional-type cervical alterations. The group consisted of 50 patients undergoing treatment for disorders of the cervical sectors of the spine. Each patient underwent a medical examination to investigate the presence of CMD signs or symptoms. Statistical analysis of the data revealed a higher percentage of cases with muscular and joint pain, limited mouth opening, deviation, and deflection than found among the general population, demonstrating an overloading of the entire masticatory apparatus. Joint noise was less frequent, probably because patients with arthrosis-type degenerative pathology were excluded from our sample.

  18. [Development and application of emergency medical information management system].

    PubMed

    Wang, Fang; Zhu, Baofeng; Chen, Jianrong; Wang, Jian; Gu, Chaoli; Liu, Buyun

    2011-03-01

    To meet the needs of clinical practice in rescuing patients with critical illness, an information management system for emergency medicine was developed. Microsoft Visual FoxPro, one of Microsoft's visual programming tools, was used to develop the computer-aided system. The system mainly consists of a statistical analysis module, a quality-control module for emergency rescue, an emergency rescue flow-path module, an emergency nursing care module, and a rescue training module. It realizes the systematic management of emergency medicine and can process and analyze emergency statistical data. The system is practical: it can optimize the emergency clinical pathway and meet the needs of clinical rescue.

  19. Kinetic analysis of single molecule FRET transitions without trajectories

    NASA Astrophysics Data System (ADS)

    Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.

    2018-03-01

    Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
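
    A minimal sketch of this simulate-and-test idea, under assumptions of ours rather than the paper's: a two-state system with exponential dwell times, apparent FRET efficiency set by the fraction of the exposure spent in each state, and a Kolmogorov-Smirnov test standing in for the statistical tests used by the authors.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def simulate_fret(tau_low, tau_high, exposure=0.01, n=2000, e_low=0.2, e_high=0.8):
    """Apparent FRET efficiency of a two-state molecule during one exposure window:
    determined by the fraction of the window spent in the high-FRET state."""
    eff = np.empty(n)
    for i in range(n):
        t, t_high, state = 0.0, 0.0, int(rng.integers(2))
        while t < exposure:
            dwell = rng.exponential(tau_high if state else tau_low)
            step = min(dwell, exposure - t)
            if state:
                t_high += step
            t += step
            state = 1 - state
        frac = t_high / exposure
        eff[i] = frac * e_high + (1 - frac) * e_low + rng.normal(scale=0.05)
    return eff

experimental = simulate_fret(0.004, 0.012)   # stand-in for a measured dataset

# scan the parameter grid; the p-value quantifies consistency with the data
for tau_low in (0.002, 0.004, 0.008):
    for tau_high in (0.006, 0.012, 0.024):
        p = ks_2samp(experimental, simulate_fret(tau_low, tau_high)).pvalue
        print(f"tau_low={tau_low:.3f}s tau_high={tau_high:.3f}s p={p:.3f}")
```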

  20. The Effectiveness of Computer-Assisted Instruction to Teach Physical Examination to Students and Trainees in the Health Sciences Professions: A Systematic Review and Meta-Analysis.

    PubMed

    Tomesko, Jennifer; Touger-Decker, Riva; Dreker, Margaret; Zelig, Rena; Parrott, James Scott

    2017-01-01

    To explore knowledge and skill acquisition outcomes related to learning physical examination (PE) through computer-assisted instruction (CAI) compared with a face-to-face (F2F) approach. A systematic literature review and meta-analysis of studies published between January 2001 and December 2016 was conducted. Databases searched included Medline, Cochrane, CINAHL, ERIC, Ebsco, Scopus, and Web of Science. Studies were synthesized by study design, intervention, and outcomes. Statistical analyses included a DerSimonian-Laird random-effects model. In total, 7 studies were included in the review, and 5 in the meta-analysis. There were no statistically significant differences for knowledge (mean difference [MD] = 5.39, 95% confidence interval [CI]: -2.05 to 12.84) or skill acquisition (MD = 0.35, 95% CI: -5.30 to 6.01). The evidence does not suggest a strong consistent preference for either CAI or F2F instruction to teach students/trainees PE. Further research is needed to identify the conditions under which one mode of instruction is favored over the other for knowledge and skill acquisition outcomes.
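
    For reference, the DerSimonian-Laird estimator named above can be written in a few lines. This is a generic textbook implementation, not code from the review; the study numbers are invented.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled effect and 95% CI under the DerSimonian-Laird random-effects model."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = (w * y).sum() / w.sum()
    q = (w * (y - fixed) ** 2).sum()              # Cochran's Q heterogeneity statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance estimate
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical example: five studies' mean differences and within-study variances
print(dersimonian_laird([4.2, 6.8, -1.0, 9.5, 3.1], [2.0, 3.5, 1.8, 4.0, 2.5]))
```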

  1. Artificial neural network models for prediction of cardiovascular autonomic dysfunction in general Chinese population

    PubMed Central

    2013-01-01

    Background The present study aimed to develop an artificial neural network (ANN) based prediction model for cardiovascular autonomic (CA) dysfunction in the general population. Methods We analyzed a previous dataset based on a population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN analysis. The performance of these prediction models was evaluated in the validation set. Results Univariate analysis indicated that 14 risk factors showed a statistically significant association with CA dysfunction (P < 0.05). The mean area under the receiver-operating curve was 0.762 (95% CI 0.732–0.793) for the prediction model developed using ANN analysis. The mean sensitivity, specificity, and positive and negative predictive values of the prediction model were 0.751, 0.665, 0.330 and 0.924, respectively. All Hosmer-Lemeshow (HL) statistics were less than 15.0. Conclusion ANN is an effective tool for developing prediction models with high value for predicting CA dysfunction among the general population. PMID:23902963

  2. Systematic review and meta-analysis of glyphosate exposure and risk of lymphohematopoietic cancers

    PubMed Central

    Chang, Ellen T.; Delzell, Elizabeth

    2016-01-01

    This systematic review and meta-analysis rigorously examines the relationship between glyphosate exposure and risk of lymphohematopoietic cancer (LHC) including NHL, Hodgkin lymphoma (HL), multiple myeloma (MM), and leukemia. Meta-relative risks (meta-RRs) were positive and marginally statistically significant for the association between any versus no use of glyphosate and risk of NHL (meta-RR = 1.3, 95% confidence interval (CI) = 1.0–1.6, based on six studies) and MM (meta-RR = 1.4, 95% CI = 1.0–1.9; four studies). Associations were statistically null for HL (meta-RR = 1.1, 95% CI = 0.7–1.6; two studies), leukemia (meta-RR = 1.0, 95% CI = 0.6–1.5; three studies), and NHL subtypes except B-cell lymphoma (two studies each). Bias and confounding may account for observed associations. Meta-analysis is constrained by few studies and a crude exposure metric, while the overall body of literature is methodologically limited and findings are not strong or consistent. Thus, a causal relationship has not been established between glyphosate exposure and risk of any type of LHC. PMID:27015139

  3. Systematic review and meta-analysis of glyphosate exposure and risk of lymphohematopoietic cancers.

    PubMed

    Chang, Ellen T; Delzell, Elizabeth

    2016-01-01

    This systematic review and meta-analysis rigorously examines the relationship between glyphosate exposure and risk of lymphohematopoietic cancer (LHC) including NHL, Hodgkin lymphoma (HL), multiple myeloma (MM), and leukemia. Meta-relative risks (meta-RRs) were positive and marginally statistically significant for the association between any versus no use of glyphosate and risk of NHL (meta-RR = 1.3, 95% confidence interval (CI) = 1.0-1.6, based on six studies) and MM (meta-RR = 1.4, 95% CI = 1.0-1.9; four studies). Associations were statistically null for HL (meta-RR = 1.1, 95% CI = 0.7-1.6; two studies), leukemia (meta-RR = 1.0, 95% CI = 0.6-1.5; three studies), and NHL subtypes except B-cell lymphoma (two studies each). Bias and confounding may account for observed associations. Meta-analysis is constrained by few studies and a crude exposure metric, while the overall body of literature is methodologically limited and findings are not strong or consistent. Thus, a causal relationship has not been established between glyphosate exposure and risk of any type of LHC.

  4. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
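
    A minimal sketch of the idea, with our own choices standing in for the authors' architecture: fit a neural network to the one-step map of simulated Lorenz data, then read off instantaneous, state-dependent sensitivities as the finite-difference Jacobian of the learned map.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the classic Lorenz system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

states = [np.array([1.0, 1.0, 1.0])]
for _ in range(5000):
    states.append(lorenz_step(states[-1]))
X, Y = np.array(states[:-1]), np.array(states[1:])

# learn the one-step map state(t) -> state(t + dt)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, Y)

def sensitivities(model, s, eps=1e-3):
    """Finite-difference Jacobian of the learned map at state s."""
    jac = np.zeros((3, 3))
    for i in range(3):
        up, dn = s.copy(), s.copy()
        up[i] += eps
        dn[i] -= eps
        jac[:, i] = (model.predict(up[None]) - model.predict(dn[None]))[0] / (2 * eps)
    return jac

print(sensitivities(model, X[100]))   # instantaneous, state-dependent sensitivities
```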

  5. Focal activation of primary visual cortex following supra-choroidal electrical stimulation of the retina: Intrinsic signal imaging and linear model analysis.

    PubMed

    Cloherty, Shaun L; Hietanen, Markus A; Suaning, Gregg J; Ibbotson, Michael R

    2010-01-01

    We performed optical intrinsic signal imaging of cat primary visual cortex (Area 17 and 18) while delivering bipolar electrical stimulation to the retina by way of a supra-choroidal electrode array. Using a general linear model (GLM) analysis we identified statistically significant (p < 0.01) activation in a localized region of cortex following supra-threshold electrical stimulation at a single retinal locus. These results (1) demonstrate that intrinsic signal imaging combined with linear model analysis provides a powerful tool for assessing cortical responses to prosthetic stimulation, and (2) confirm that supra-choroidal electrical stimulation can achieve localized activation of the cortex consistent with focal activation of the retina.

  6. Effectiveness of senna vs polyethylene glycol as laxative therapy in children with constipation related to anorectal malformation.

    PubMed

    Santos-Jasso, Karla Alejandra; Arredondo-García, José Luis; Maza-Vallejos, Jorge; Lezama-Del Valle, Pablo

    2017-01-01

    Constipation is present in 80% of children with corrected anorectal malformations, usually associated with rectal dilation and hypomotility. Osmotic laxatives are routinely used for idiopathic constipation. Senna is a stimulant laxative that produces contractions improving colonic motility without affecting stool consistency. We designed this trial to study the effectiveness of senna versus polyethylene glycol for the treatment of constipation in children with anorectal malformation. A randomized controlled crossover clinical trial, including a washout period, was conducted in children with corrected anorectal malformations with fecal continence and constipation. The sample size was calculated for proportions (n=28) according to the available data for senna. Effectiveness of laxative therapy was measured with a three-variable construct: 1) daily bowel movement, 2) fecal soiling, 3) a "clean" abdominal x-ray. Data analysis included descriptive statistics and Fisher's exact test for the outcome variable (effectiveness). The study was terminated early because the interim analysis showed a clear benefit of senna (p = 0.026). The sample showed a normal statistical distribution for the variables age and presence of megarectum. The maximum daily dose was 38.7 mg for senna (sennosides A and B) and 17 g for polyethylene glycol. No adverse effects were identified. Therapy with senna should be the laxative treatment of choice as part of a bowel management program in children with repaired anorectal malformations and constipation, since the stimulation of colonic propulsion waves can lead to stool evacuation without modifying stool consistency, which could otherwise affect fecal continence. Level of evidence: I - randomized controlled trial with adequate statistical power.

  7. Regression Rates Following the Treatment of Aggressive Posterior Retinopathy of Prematurity with Bevacizumab Versus Laser: 8-Year Retrospective Analysis

    PubMed Central

    Nicoară, Simona D.; Ştefănuţ, Anne C.; Nascutzy, Constanta; Zaharie, Gabriela C.; Toader, Laura E.; Drugan, Tudor C.

    2016-01-01

    Background Retinopathy is a serious complication related to prematurity and a leading cause of childhood blindness. The aggressive posterior form of retinopathy of prematurity (APROP) has a worse anatomical and functional outcome following laser therapy, as compared with the classic form of the disease. The main outcome measures are the APROP regression rate, structural outcomes, and complications associated with intravitreal bevacizumab (IVB) versus laser photocoagulation in APROP. Material/Methods This is a retrospective case series that includes infants with APROP who received either IVB or laser photocoagulation and had a follow-up of at least 60 weeks (for the laser photocoagulation group) or 80 weeks (for the IVB group). In the first group, laser photocoagulation of the retina was carried out; in the second group, 1 bevacizumab injection was administered intravitreally. The following parameters were analyzed in each group: sex, gestational age, birth weight, postnatal age and postmenstrual age at treatment, APROP regression, sequelae, and complications. Statistical analysis was performed using Microsoft Excel and IBM SPSS (version 23.0). Results The laser photocoagulation group consisted of 6 premature infants (12 eyes) and the IVB group consisted of 17 premature infants (34 eyes). Within the laser photocoagulation group, the evolution was favorable in 9 eyes (75%) and unfavorable in 3 eyes (25%). Within the IVB group, APROP regressed in 29 eyes (85.29%) and failed to regress in 5 eyes (14.71%). These differences are statistically significant, as shown by the McNemar test (P<0.001). Conclusions The IVB group had a statistically significantly better outcome than the laser photocoagulation group for APROP in our series. PMID:27062023

  8. Analysis of radiographic bone parameters throughout the surgical lengthening and deformity correction of extremities.

    PubMed

    Atanasov, Nenad; Poposka, Anastasika; Samardziski, Milan; Kamnar, Viktor

    2014-01-01

    Radiographic examination of extremities in surgical lengthening and/or correction of deformities is of crucial importance for the assessment of new bone formation. The purpose of this study is to confirm the diagnostic value of radiography in the precise detection of bone parameters at various lengthening or correction stages in patients treated by limb lengthening and deformity correction. 50 patients were treated by the Ilizarov method of limb lengthening or deformity correction at the University Orthopaedic Surgery Clinic in Skopje, and analysed over the period from 2006 to 2012. The patients were divided into two groups. The first group consisted of 27 patients with limb lengthening because of congenital shortening. The second group consisted of 23 patients treated for acquired limb deformities. The results in both groups were obtained at three stages of new bone formation and were based on the appearance of 3 radiographic parameters at the distraction/compression site. The differences between the presence of all radiographic bone parameters at different stages of new bone formation were statistically significant in both groups, especially the presence of the cortical margin in the first group (Cochran Q=34.43, df=2, p=0.00000). The comparative analysis between the two groups showed a statistically significant difference in the presence of initial bone elements and cystic formations only in the first stage. The near absence of statistically significant differences between the two groups of patients with regard to the 3 radiographic parameters at the 3 stages of new bone formation indicates a minor influence of the etiopathogenetic background on new bone formation in patients treated by gradual lengthening or correction of limb deformities.

  9. Do Countries Consistently Engage in Misinforming the International Community about Their Efforts to Combat Money Laundering? Evidence Using Benford’s Law

    PubMed Central

    2017-01-01

    Indicators of compliance and efficiency in combatting money laundering, collected by EUROSTAT, are plagued with shortcomings. In this paper, I have carried out a forensic analysis on a 2003–2010 dataset of indicators of compliance and efficiency in combatting money laundering, that European Union member states self-reported to EUROSTAT, and on the basis of which, their efforts were evaluated. I used Benford’s law to detect any anomalous statistical patterns and found that statistical anomalies were also consistent with strategic manipulation. According to Benford’s law, if we pick a random sample of numbers representing natural processes, and look at the distribution of the first digits of these numbers, we see that, contrary to popular belief, digit 1 occurs most often, then digit 2, and so on, with digit 9 occurring in less than 5% of the sample. Without prior knowledge of Benford’s law, since people are not intuitively good at creating truly random numbers, deviations thereof can capture strategic alterations. In order to eliminate other sources of deviation, I have compared deviations in situations where incentives and opportunities for manipulation existed and in situations where they did not. While my results are not a conclusive proof of strategic manipulation, they signal that countries that faced incentives and opportunities to misinform the international community about their efforts to combat money laundering may have manipulated these indicators. Finally, my analysis points to the high potential for disruption that the manipulation of national statistics has, and calls for the acknowledgment that strategic manipulation can be an unintended consequence of the international community’s pressure on countries to put combatting money laundering on the top of their national agenda. PMID:28122058
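
    The first-digit check described above is straightforward to reproduce. A generic sketch (not the paper's code; the log-normal toy data are ours) testing a sample against the Benford distribution P(d) = log10(1 + 1/d):

```python
import numpy as np
from scipy.stats import chisquare

def benford_test(values):
    """Chi-square comparison of the first-digit distribution with Benford's law."""
    digits = np.array([int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0])
    observed = np.bincount(digits, minlength=10)[1:10]
    expected = np.log10(1.0 + 1.0 / np.arange(1, 10)) * len(digits)
    return chisquare(observed, expected)

# toy data: multiplicative (log-normal) processes tend to follow Benford's law
rng = np.random.default_rng(0)
print(benford_test(np.exp(rng.normal(4, 2, size=5000))))
```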

  10. Do Countries Consistently Engage in Misinforming the International Community about Their Efforts to Combat Money Laundering? Evidence Using Benford's Law.

    PubMed

    Deleanu, Ioana Sorina

    2017-01-01

    Indicators of compliance and efficiency in combatting money laundering, collected by EUROSTAT, are plagued with shortcomings. In this paper, I have carried out a forensic analysis on a 2003-2010 dataset of indicators of compliance and efficiency in combatting money laundering, that European Union member states self-reported to EUROSTAT, and on the basis of which, their efforts were evaluated. I used Benford's law to detect any anomalous statistical patterns and found that statistical anomalies were also consistent with strategic manipulation. According to Benford's law, if we pick a random sample of numbers representing natural processes, and look at the distribution of the first digits of these numbers, we see that, contrary to popular belief, digit 1 occurs most often, then digit 2, and so on, with digit 9 occurring in less than 5% of the sample. Without prior knowledge of Benford's law, since people are not intuitively good at creating truly random numbers, deviations thereof can capture strategic alterations. In order to eliminate other sources of deviation, I have compared deviations in situations where incentives and opportunities for manipulation existed and in situations where they did not. While my results are not a conclusive proof of strategic manipulation, they signal that countries that faced incentives and opportunities to misinform the international community about their efforts to combat money laundering may have manipulated these indicators. Finally, my analysis points to the high potential for disruption that the manipulation of national statistics has, and calls for the acknowledgment that strategic manipulation can be an unintended consequence of the international community's pressure on countries to put combatting money laundering on the top of their national agenda.

  11. Application of Semiparametric Spline Regression Model in Analyzing Factors that Influence Population Density in Central Java

    NASA Astrophysics Data System (ADS)

    Sumantari, Y. D.; Slamet, I.; Sugiyanto

    2017-06-01

    Semiparametric regression is a statistical analysis method that consists of parametric and nonparametric regression. There are various approach techniques in nonparametric regression; one of them is the spline. Central Java is one of the most densely populated provinces in Indonesia. Population density in this province can be modeled by semiparametric regression because it consists of parametric and nonparametric components. Therefore, the purpose of this paper is to determine the factors that influence population density in Central Java using the semiparametric spline regression model. The results show that the factors which influence population density in Central Java are the number of active Family Planning (FP) participants and the district minimum wage.

  12. Statistical analysis of long-term monitoring data for persistent organic pollutants in the atmosphere at 20 monitoring stations broadly indicates declining concentrations.

    PubMed

    Kong, Deguo; MacLeod, Matthew; Hung, Hayley; Cousins, Ian T

    2014-11-04

    During recent decades concentrations of persistent organic pollutants (POPs) in the atmosphere have been monitored at multiple stations worldwide. We used three statistical methods to analyze a total of 748 time series of selected POPs in the atmosphere to determine if there are statistically significant reductions in levels of POPs that have had control actions enacted to restrict or eliminate manufacture, use and emissions. Significant decreasing trends were identified in 560 (75%) of the 748 time series collected from the Arctic, North America, and Europe, indicating that the atmospheric concentrations of these POPs are generally decreasing, consistent with the overall effectiveness of emission control actions. Statistically significant trends in synthetic time series could be reliably identified with the improved Mann-Kendall (iMK) test and the digital filtration (DF) technique in time series longer than 5 years. The temporal trends of new (or emerging) POPs in the atmosphere are often unclear because time series are too short. A statistical detrending method based on the iMK test was not able to identify abrupt changes in the rates of decline of atmospheric POP concentrations encoded into synthetic time series.
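
    For orientation, the classic Mann-Kendall trend test underlying the improved (iMK) variant can be sketched as below. Note that the paper's iMK version adds corrections (for example, for serial correlation) that are not shown here, and the declining toy series is invented.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Classic Mann-Kendall trend test: S statistic and two-sided p-value (no ties)."""
    x = np.asarray(series)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance of S without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))

# synthetic declining concentration series with noise, like a POP time series
rng = np.random.default_rng(0)
years = np.arange(1995, 2013)
conc = 10 * np.exp(-0.08 * (years - years[0])) * rng.lognormal(0, 0.2, len(years))
print(mann_kendall(conc))
```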

  13. The Relationship Between Surface Curvature and Abdominal Aortic Aneurysm Wall Stress.

    PubMed

    de Galarreta, Sergio Ruiz; Cazón, Aitor; Antón, Raúl; Finol, Ender A

    2017-08-01

    The maximum diameter (MD) criterion is the most important factor when predicting risk of rupture of abdominal aortic aneurysms (AAAs). An elevated wall stress has also been linked to a high risk of aneurysm rupture, yet computing AAA wall stress is uncommon in clinical practice. The purpose of this study is to assess whether other characteristics of the AAA geometry are statistically correlated with wall stress. Using in-house segmentation and meshing algorithms, 30 patient-specific AAA models were generated for finite element analysis (FEA). These models were subsequently used to estimate wall stress and maximum diameter and to evaluate the spatial distributions of wall thickness, cross-sectional diameter, mean curvature, and Gaussian curvature. Data analysis consisted of statistical correlations of the aforementioned geometry metrics with wall stress for the 30 AAA inner and outer wall surfaces. In addition, a linear regression analysis was performed with all the AAA wall surfaces to quantify the relationship of the geometric indices with wall stress. These analyses indicated that while all the geometry metrics have statistically significant correlations with wall stress, the local mean curvature (LMC) exhibits the highest average Pearson's correlation coefficient for both inner and outer wall surfaces. The linear regression analysis revealed coefficients of determination for the outer and inner wall surfaces of 0.712 and 0.516, respectively, with LMC having the largest effect on the linear regression equation with wall stress. This work underscores the importance of evaluating AAA mean wall curvature as a potential surrogate for wall stress.

  14. Action control and situational risks in the prevention of HIV and STIs: individual, dyadic, and social influences on consistent condom use in a university population.

    PubMed

    Svenson, Gary R; Ostergren, Per-Olof; Merlo, Juan; Råstam, Lennart

    2002-12-01

    The aim of this study was to gain an understanding of consistent condom use. We took the perspective that condom use involves the ability to handle situational risks influenced at multiple levels, including the individual, dyadic, and social. The hypothesis was that action control, as measured by self-regulation, implementation intentions, and self-efficacy, was the primary determinant. The study was conducted as part of a community-based intervention at a major university (36,000 students). Data were collected using a validated questionnaire mailed to a random sample of students (n = 493, response rate = 71.5%). Statistical analysis included logistic regression models that successively included background, individual, dyadic, and social variables. In the final model, consistent condom use was higher among students with strong implementation intentions, high self-regulation and positive peer norms. The results contribute new knowledge on action control in predicting sexual risk behaviors and lend support to the conceptualization and analysis of HIV/sexually transmitted infection prevention at multiple levels of influence.

  15. Early parenting program as intervention strategy for emotional distress in first-time mothers: a propensity score analysis.

    PubMed

    Okamoto, Miwako; Ishigami, Hideaki; Tokimoto, Kumiko; Matsuoka, Megumi; Tango, Ryoko

    2013-08-01

    The purpose of this study was to evaluate the effectiveness of a single-session intervention designed to reduce emotional distress in first-time mothers. We held a parenting class for first-time mothers who had given birth at a university hospital in Tokyo, Japan. The program of the class consisted of lectures on infant care and group discussion, which is a common form of intervention in Japan. The effectiveness of the intervention was assessed according to differences in the emotional distress experienced by class participants and nonparticipants, analyzed using a propensity score method to avoid self-selection bias. In order to be more confident about our results, we employed several variations of this method. Results from the statistical analysis show that although the effectiveness of the intervention was limited, it was able to alleviate subjects' loss of self-confidence as mothers. Because this outcome shows a good degree of consistency across methods, it can be considered robust. Moreover, it is roughly consistent with previous studies. Effectiveness could probably be increased by developing a program that improves upon the intervention.

  16. A statistical analysis of cervical auscultation signals from adults with unsafe airway protection.

    PubMed

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2016-01-22

    Aspiration, where food or liquid is allowed to enter the larynx during a swallow, is recognized as the most clinically salient feature of oropharyngeal dysphagia. This event can lead to short-term harm via airway obstruction or more long-term effects such as pneumonia. In order to non-invasively identify this event using high resolution cervical auscultation there is a need to characterize cervical auscultation signals from subjects with dysphagia who aspirate. In this study, we collected swallowing sound and vibration data from 76 adults (50 men, 26 women, mean age 62) who underwent a routine videofluoroscopy swallowing examination. The analysis was limited to swallows of liquid with either thin (<5 cps) or viscous (≈300 cps) consistency and was divided into those with deep laryngeal penetration or aspiration (unsafe airway protection), and those with either shallow or no laryngeal penetration (safe airway protection), using a standardized scale. After calculating a selection of time, frequency, and time-frequency features for each swallow, the safe and unsafe categories were compared using Wilcoxon rank-sum statistical tests. Our analysis found that few of our chosen features varied in magnitude between safe and unsafe swallows, with thin swallows demonstrating no statistical variation. We also supported our past findings with regard to the effects of sex and the presence or absence of stroke on cervical auscultation signals, but noticed certain discrepancies with regard to bolus viscosity. Overall, our results support the necessity of using multiple statistical features concurrently to identify laryngeal penetration of swallowed boluses in future work with high resolution cervical auscultation.

  17. Effect of citric acid, tetracycline, and doxycycline on instrumented periodontally involved root surfaces: A SEM study

    PubMed Central

    Chahal, Gurparkash Singh; Chhina, Kamalpreet; Chhabra, Vipin; Bhatnagar, Rakhi; Chahal, Amna

    2014-01-01

    Background: A surface smear layer consisting of organic and inorganic material is formed on the root surface following mechanical instrumentation and may inhibit the formation of new connective tissue attachment to the root surface. Modification of the tooth surface by root conditioning has resulted in improved connective tissue attachment and has advanced the goal of reconstructive periodontal treatment. Aim: The aim of this study was to compare the effects of citric acid, tetracycline, and doxycycline on instrumented periodontally involved root surfaces in vitro using a scanning electron microscope. Settings and Design: A total of 45 dentin samples obtained from 15 extracted, scaled, and root planed teeth were divided into three groups. Materials and Methods: The root conditioning agents were applied with cotton pellets using the passive burnishing technique for 5 minutes. The samples were then examined by scanning electron microscope. Statistical Analysis Used: The statistical analysis was carried out using the Statistical Package for Social Sciences (SPSS Inc., Chicago, IL, version 15.0 for Windows). For all quantitative variables, means and standard deviations were calculated and compared. For more than two groups, ANOVA was applied. For multiple comparisons, post hoc tests with Bonferroni correction were used. Results: Upon statistical analysis, the root conditioning agents used in this study were found to be effective in removing the smear layer, uncovering and widening the dentin tubules and unmasking the dentin collagen matrix. Conclusion: Tetracycline HCl was found to be the best root conditioner among the three agents used. PMID:24744541

  18. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    NASA Astrophysics Data System (ADS)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Any improvement in learning will depend on the students' approaches to learning, which relate to several factors of learning, namely assessment methods consisting of quizzes, tests, assignments and a final examination. This study attempted an alternative approach to measuring students' ability in an undergraduate statistical course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually they had a good understanding by the end of the 14-week class. In terms of students' performance in the final examination, the probability of mastering a topic varies with the ability of the student and the difficulty of the questions. The majority found the probability and counting rules topic the most difficult to learn.
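
    At the core of such an analysis is the Rasch model, which places student ability theta and item difficulty b on a single logit scale; the probability of a correct response depends only on their difference. A minimal illustration:

```python
import numpy as np

def rasch_probability(theta, b):
    """Probability that a student of ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# abilities and item difficulties expressed on the same logit scale
for theta in (-1.0, 0.0, 1.5):
    print([round(rasch_probability(theta, b), 2) for b in (-0.5, 0.5, 2.0)])
```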

  19. Laser speckle imaging of rat retinal blood flow with hybrid temporal and spatial analysis method

    NASA Astrophysics Data System (ADS)

    Cheng, Haiying; Yan, Yumei; Duong, Timothy Q.

    2009-02-01

    Noninvasive monitoring of blood flow in the retinal circulation can reveal the progression and treatment of ocular disorders such as diabetic retinopathy, age-related macular degeneration and glaucoma. A non-invasive and direct blood flow (BF) measurement technique with high spatial-temporal resolution is needed for retinal imaging. Laser speckle imaging (LSI) is such a method. Currently, there are two analysis methods for LSI: spatial statistics LSI (SS-LSI) and temporal statistics LSI (TS-LSI). Comparing these two analysis methods, SS-LSI has a higher signal to noise ratio (SNR) and TS-LSI is less susceptible to artifacts from stationary speckle. We proposed a hybrid temporal and spatial analysis method (HTS-LSI) to measure retinal blood flow. A gas challenge experiment was performed and the images were analyzed by HTS-LSI. Results showed that HTS-LSI can not only remove the stationary speckle but also increase the SNR. Under 100% O2, retinal BF decreased by 20-30%, consistent with results observed with the laser Doppler technique. As retinal blood flow is a critical physiological parameter and its perturbation has been implicated in the early stages of many retinal diseases, HTS-LSI will be an efficient method for early detection of retinal diseases.
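
    One plausible reading of such a hybrid estimator (our sketch, not the authors' exact algorithm) computes speckle contrast K = sigma/mean over a spatio-temporal block: the full frame stack in time and a small window in space, trading some of the spatial averaging of SS-LSI for the stationary-speckle rejection of TS-LSI.

```python
import numpy as np

def hybrid_contrast(stack, w=3):
    """Speckle contrast over a spatio-temporal block: all frames, w x w pixels."""
    f, h, ww = stack.shape
    k = np.zeros((h - w + 1, ww - w + 1))
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            block = stack[:, i:i + w, j:j + w]
            k[i, j] = block.std() / block.mean()
    return k

# toy stack: 20 frames of synthetic speckle; faster flow blurs speckle and lowers K
rng = np.random.default_rng(0)
stack = rng.exponential(1.0, size=(20, 64, 64))
k = hybrid_contrast(stack)
flow_index = 1.0 / k**2        # a common proxy: relative blood flow ~ 1/K^2
print(k.mean(), flow_index.mean())
```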

  20. Self-consistent mean-field approach to the statistical level density in spherical nuclei

    NASA Astrophysics Data System (ADS)

    Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.

    2018-06-01

    A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculations of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of the continuum states for finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.
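
    For orientation, the standard Fermi-gas relations behind the quantities discussed here (the conventional textbook definitions, not the paper's self-consistent expressions) are

```latex
\rho(U) \;\simeq\; \frac{\sqrt{\pi}}{12\,a^{1/4}\,U^{5/4}}\,\exp\!\left(2\sqrt{aU}\right),
\qquad K \;=\; \frac{A}{a},
```

    where U is the excitation energy, a the level-density parameter, A the mass number, and K the inverse level-density parameter (in MeV).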

  1. SEDA: A software package for the Statistical Earthquake Data Analysis

    NASA Astrophysics Data System (ADS)

    Lombardi, A. M.

    2017-03-01

    In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and is highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphical appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences, and the calculation of forecasts. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package.
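
    The ETAS model at the core of SEDAv1.0 is, in its standard form, defined by the conditional intensity below (our summary of the textbook model; SEDA's exact parameterization may differ):

```latex
\lambda(t \mid \mathcal{H}_t) \;=\; \mu \;+\;
  \sum_{t_i < t} \frac{K\, e^{\alpha\,(m_i - m_0)}}{\left(t - t_i + c\right)^{p}},
```

    where mu is the background rate, the sum runs over past events with occurrence times t_i and magnitudes m_i above the cutoff m_0, and K, alpha, c, p are the productivity and Omori-Utsu parameters the package estimates.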

  2. Prevalence of dental attrition in in vitro fertilization children of West Bengal.

    PubMed

    Kar, Sudipta; Sarkar, Subrata; Mukherjee, Ananya

    2014-01-01

    Dental attrition is one of the problems affecting tooth structure. It may affect both in vitro fertilization (IVF) and spontaneously conceived children. This study aimed to evaluate and compare the prevalence of dental attrition in the deciduous dentition of IVF and spontaneously conceived children. In a cross-sectional case-control study, the dental attrition status of 3-5-year-old children was assessed. The case group consisted of term, singleton babies who were the outcome of IVF in the studied area in 2009. The control group consisted of term, first-child, singleton, spontaneously conceived 3-5-year-old children who were also residents of the studied area. A sample of 153 IVF and 153 spontaneously conceived children was examined according to the Hansson and Nilner classification. Statistical analysis was carried out using Chi-square (χ(2)) or Z tests. No statistically significant difference was found between the study group (IVF children) and the control group (spontaneously conceived children). With regard to dental attrition status, IVF children can be considered the same as spontaneously conceived children.

  3. SEDA: A software package for the Statistical Earthquake Data Analysis

    PubMed Central

    Lombardi, A. M.

    2017-01-01

    In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and is highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphical appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences, and the calculation of forecasts. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package. PMID:28290482

  4. Facts and Figures. A Compendium of Statistics on Ontario Universities. Volume 4.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    The purpose of this compendium is to provide consistent and accurate statistical and graphical information on the Ontario (Canada) university system. The compendium consists of seven sections: (1) Ontario population data with population projections 1986-2021, median income by educational attainment 1985-1994, and unemployment rates by educational…

  5. Distribution of water quality parameters in Dhemaji district, Assam (India).

    PubMed

    Buragohain, Mridul; Bhuyan, Bhabajit; Sarma, H P

    2010-07-01

    The primary objective of this study is to present a statistically significant water quality database of Dhemaji district, Assam (India), with special reference to pH, fluoride, nitrate, arsenic, iron, sodium and potassium. 25 water samples collected from different locations in five development blocks of Dhemaji district were studied separately. The implications presented are based on statistical analyses of the raw data. Normal distribution statistics and reliability analysis (correlation and covariance matrices) were employed to find the distribution pattern, localisation of data, and other related information. Statistical observations show that all the parameters under investigation exhibit non-uniform distributions with a long asymmetric tail on either the right or left side of the median. The width of the third quartile was consistently found to be greater than that of the second quartile for each parameter. Differences among mean, mode and median, together with significant skewness and kurtosis values, indicate that the distribution of the various water quality parameters in the study area departs widely from normality. Thus, the intrinsic water quality picture is not encouraging, owing to the asymmetric distribution of the various water quality parameters across the study area.

  6. Statistical learning of novel graphotactic constraints in children and adults.

    PubMed

    Samara, Anna; Caravolas, Markéta

    2014-05-01

    The current study explored statistical learning processes in the acquisition of orthographic knowledge in school-aged children and skilled adults. Learning of novel graphotactic constraints on the position and context of letter distributions was induced by means of a two-phase learning task adapted from Onishi, Chambers, and Fisher (Cognition, 83 (2002) B13-B23). Following incidental exposure to pattern-embedding stimuli in Phase 1, participants' learning generalization was tested in Phase 2 with legality judgments about novel conforming/nonconforming word-like strings. Test phase performance was above chance, suggesting that both types of constraints were reliably learned even after relatively brief exposure. As hypothesized, signal detection theory d' analyses confirmed that learning permissible letter positions (d'=0.97) was easier than permissible neighboring letter contexts (d'=0.19). Adults were more accurate than children in all but a strict analysis of the contextual constraints condition. Consistent with the statistical learning perspective in literacy, our results suggest that statistical learning mechanisms contribute to children's and adults' acquisition of knowledge about graphotactic constraints similar to those existing in their orthography.
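
    The d' (d-prime) sensitivity index reported above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A generic computation (the counts are invented; in practice, rates of exactly 0 or 1 need a small correction before the z-transform):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# toy legality-judgment counts: conforming strings endorsed vs nonconforming rejected
print(d_prime(hits=70, misses=30, false_alarms=40, correct_rejections=60))
```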

  7. GASP cloud- and particle-encounter statistics and their application to LPC aircraft studies. Volume 1: Analysis and conclusions

    NASA Technical Reports Server (NTRS)

    Jasperson, W. H.; Nastrom, G. D.; Davis, R. E.; Holdeman, J. D.

    1984-01-01

    Summary studies are presented for the entire cloud observation archive from the NASA Global Atmospheric Sampling Program (GASP). Studies are also presented for GASP particle concentration data gathered concurrently with the cloud observations. Cloud encounters are shown in about 15 percent of the data samples overall, but the probability of cloud encounter is shown to vary significantly with altitude, latitude, and distance from the tropopause. Several meteorological circulation features are apparent in the latitudinal distribution of cloud cover, and the cloud encounter statistics are shown to be consistent with the classical mid-latitude cyclone model. Observations of clouds spaced more closely than 90 minutes are shown to be statistically dependent. The statistics for cloud and particle encounter are utilized to estimate the frequency of cloud encounter on long range airline routes, and to assess the probability and extent of laminar flow loss due to cloud or particle encounter by aircraft utilizing laminar flow control (LFC). It is shown that the probability of extended cloud encounter is too low, of itself, to make LFC impractical.

  8. Guide to Using Onionskin Analysis Code (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Morzinski, Jerome Arthur

    2016-09-15

    This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called, test series. The hope is that the new test series of curves is statistically similar to the baseline population.

  9. Are conventional statistical techniques exhaustive for defining metal background concentrations in harbour sediments? A case study: The Coastal Area of Bari (Southeast Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio

    2015-11-01

    Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of background values of metal and metalloid distribution based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists of addressing the scope and limitations of analysing port sediments through the use of conventional statistical techniques (such as linear regression analysis, construction of cumulative frequency curves and the iterative 2σ technique) that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit well with the statistical equations), it should nevertheless be avoided because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses.

  10. Super-delta: a new differential gene expression analysis procedure with robust data normalization.

    PubMed

    Liu, Yuhang; Zhang, Jinfeng; Qiu, Xing

    2017-12-21

    Normalization is an important data preparation step in gene expression analyses, designed to remove various systematic noise. Sample variance is greatly reduced after normalization, hence the power of subsequent statistical analyses is likely to increase. On the other hand, variance reduction is made possible by borrowing information across all genes, including differentially expressed genes (DEGs) and outliers, which inevitably introduces some bias. This bias typically inflates type I error and can reduce statistical power in certain situations. In this study we propose a new differential expression analysis pipeline, dubbed super-delta, that consists of a multivariate extension of global normalization and a modified t-test. A robust procedure is designed to minimize the bias introduced by DEGs in the normalization step. The modified t-test is derived based on asymptotic theory for hypothesis testing that suitably pairs with the proposed robust normalization. We first compared super-delta with four commonly used normalization methods: global, median-IQR, quantile, and cyclic loess normalization in simulation studies. Super-delta was shown to have better statistical power with tighter control of the type I error rate than its competitors. In many cases, the performance of super-delta is close to that of an oracle test in which datasets without technical noise were used. We then applied all methods to a collection of gene expression datasets on breast cancer patients who received neoadjuvant chemotherapy. While there is a substantial overlap of the DEGs identified by all methods, super-delta was able to identify comparatively more DEGs than its competitors. Downstream gene set enrichment analysis confirmed that all these methods selected largely consistent pathways. Detailed investigation of the relatively small differences showed that pathways identified by super-delta have better connections to breast cancer than those of the other methods. As a new pipeline, super-delta provides new insights into the area of differential gene expression analysis. A solid theoretical foundation supports its asymptotic unbiasedness and technical noise-free properties. Implementation on real and simulated datasets demonstrates its decent performance compared with state-of-the-art procedures. It also has the potential to be extended to other data types and/or more general between-group comparison problems.
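
    The two ingredients of the pipeline can be caricatured in a few lines. The sketch below is a simplification of ours, not the authors' estimator: a trimmed mean stands in for their robust normalization step, and a plain two-sample t-test stands in for their modified t-test.

```python
import numpy as np
from scipy import stats

def robust_global_normalize(x, trim=0.2):
    """Shift each sample (column) so its trimmed mean matches the overall one.
    Trimming limits the influence of DEGs and outliers on the shift estimate."""
    centers = np.array([stats.trim_mean(x[:, j], trim) for j in range(x.shape[1])])
    return x - centers + centers.mean()

def gene_wise_tests(x, groups):
    """Two-sample t-test per gene (row) after normalization."""
    g = np.asarray(groups)
    return stats.ttest_ind(x[:, g == 0], x[:, g == 1], axis=1).pvalue

rng = np.random.default_rng(0)
expr = rng.normal(8, 1, size=(500, 10)) + rng.normal(0, 0.5, size=(1, 10))  # sample shifts
expr[:25, 5:] += 2.0                                # 25 DEGs up-regulated in group 1
pvals = gene_wise_tests(robust_global_normalize(expr), [0] * 5 + [1] * 5)
print((pvals[:25] < 0.05).sum(), "of 25 DEGs detected")
```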

  11. Post-hoc analysis of randomised, placebo-controlled, double-blind study (MCI186-19) of edaravone (MCI-186) in amyotrophic lateral sclerosis.

    PubMed

    Takei, Koji; Takahashi, Fumihiro; Liu, Shawn; Tsuda, Kikumi; Palumbo, Joseph

    2017-10-01

    Post-hoc analyses of the ALS Functional Rating Scale-Revised (ALSFRS-R) score data, the primary endpoint in the 24-week double-blind placebo-controlled study of edaravone (MCI186-19, NCT01492686), were performed to confirm statistical robustness of the result. The previously reported original analysis had used a last observation carried forward (LOCF) method and also excluded patients with fewer than three completed treatment cycles. The post-hoc sensitivity analyses used different statistical methods as follows: 1) including all patients regardless of treatment cycles received (ALL LOCF); 2) a mixed model for repeated measurements (MMRM) analysis; and 3) the Combined Assessment of Function and Survival (CAFS) endpoint. Findings were consistent with the original primary analysis in showing superiority of edaravone over placebo. We also investigated the distribution of change in ALSFRS-R total score across all patients in the study as well as which ALSFRS-R items and domains may have contributed to the overall efficacy findings. The distribution of changes in ALSFRS-R total score from baseline to the end of cycle 6 (ALL LOCF) shifted in favour of edaravone compared to placebo. Edaravone was descriptively favoured for each ALSFRS-R item and each of the four ALSFRS-R domains at the end of cycle 6 (ALL LOCF), suggesting a generalised effect of edaravone in slowing functional decline across all anatomical regions. The effect of edaravone appeared to be similar in patients with bulbar onset and limb onset. Together, these observations would be consistent with its putative neuroprotective effects against the development of oxidative damage unspecific to anatomical regions.

  12. "Geo-statistics methods and neural networks in geophysical applications: A case study"

    NASA Astrophysics Data System (ADS)

    Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.

    2008-12-01

    The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated from the application of multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process to predict a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie a 3-D seismic volume; the target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective of this study is to derive a relationship between the set of attributes and the target log values. The selected set is determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs; in this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of porosity from the multiattribute to the neural network analysis, in both training and validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide more realistic results for the porosity distribution.

  13. Potential surrogate endpoints for prostate cancer survival: analysis of a phase III randomized trial.

    PubMed

    Ray, Michael E; Bae, Kyounghwa; Hussain, Maha H A; Hanks, Gerald E; Shipley, William U; Sandler, Howard M

    2009-02-18

    The identification of surrogate endpoints for prostate cancer-specific survival may shorten the length of clinical trials for prostate cancer. We evaluated distant metastasis and general clinical treatment failure as potential surrogates for prostate cancer-specific survival by use of data from the Radiation Therapy and Oncology Group 92-02 randomized trial. Patients (n = 1554 randomly assigned and 1521 evaluable for this analysis) with locally advanced prostate cancer had been treated with 4 months of neoadjuvant and concurrent androgen deprivation therapy with external beam radiation therapy and then randomly assigned to no additional therapy (control arm) or 24 additional months of androgen deprivation therapy (experimental arm). Data from landmark analyses at 3 and 5 years for general clinical treatment failure (defined as documented local disease progression, regional or distant metastasis, initiation of androgen deprivation therapy, or a prostate-specific antigen level of 25 ng/mL or higher after radiation therapy) and/or distant metastasis were tested as surrogate endpoints for prostate cancer-specific survival at 10 years by use of Prentice's four criteria. All statistical tests were two-sided. At 3 years, 1364 patients were alive and contributed data for analysis. Both distant metastasis and general clinical treatment failure at 3 years were consistent with all four of Prentice's criteria for being surrogate endpoints for prostate cancer-specific survival at 10 years. At 5 years, 1178 patients were alive and contributed data for analysis. Although prostate cancer-specific survival was not statistically significantly different between treatment arms at 5 years (P = .08), both endpoints were consistent with Prentice's remaining criteria. Distant metastasis and general clinical treatment failure at 3 years may be candidate surrogate endpoints for prostate cancer-specific survival at 10 years. These endpoints, however, must be validated in other datasets.

  14. Task-Related Edge Density (TED)—A New Method for Revealing Dynamic Network Formation in fMRI Data of the Human Brain

    PubMed Central

    Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus

    2016-01-01

    The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective time series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach “Task-related Edge Density” (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large-scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a finger-tapping and an emotion-processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function. PMID:27341204
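
    The edge-weighting idea can be illustrated with a toy computation: compare windowed correlations between pairs of time series across two task conditions and rank edges by the change in synchrony. This is a simplified stand-in for the authors' algorithm, with synthetic data and an arbitrary windowing scheme.

    ```python
    # Toy sketch of task-modulated edge weights (not the TED implementation):
    # for each voxel pair, compare windowed synchronisation between two task
    # conditions and keep edges whose synchrony differs strongly.
    import numpy as np

    rng = np.random.default_rng(2)
    n_vox, n_t = 50, 240
    data = rng.normal(size=(n_vox, n_t))
    cond_a = np.arange(n_t) < n_t // 2          # first half: condition A
    shared = rng.normal(size=n_t)
    data[:10, ~cond_a] += shared[~cond_a]       # 10 voxels synchronise in condition B

    def windowed_corr(x, y, win=20):
        """Mean Pearson correlation over non-overlapping windows."""
        rs = [np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
              for i in range(0, len(x) - win + 1, win)]
        return float(np.mean(rs))

    edges = []
    for i in range(n_vox):
        for j in range(i + 1, n_vox):
            d = (windowed_corr(data[i, ~cond_a], data[j, ~cond_a])
                 - windowed_corr(data[i, cond_a], data[j, cond_a]))
            edges.append((d, i, j))

    edges.sort(reverse=True)
    print("strongest task-modulated edges:", [(i, j) for _, i, j in edges[:5]])
    ```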

  15. Task-Related Edge Density (TED)-A New Method for Revealing Dynamic Network Formation in fMRI Data of the Human Brain.

    PubMed

    Lohmann, Gabriele; Stelzer, Johannes; Zuber, Verena; Buschmann, Tilo; Margulies, Daniel; Bartels, Andreas; Scheffler, Klaus

    2016-01-01

    The formation of transient networks in response to external stimuli or as a reflection of internal cognitive processes is a hallmark of human brain function. However, its identification in fMRI data of the human brain is notoriously difficult. Here we propose a new method of fMRI data analysis that tackles this problem by considering large-scale, task-related synchronisation networks. Networks consist of nodes and edges connecting them, where nodes correspond to voxels in fMRI data, and the weight of an edge is determined via task-related changes in dynamic synchronisation between their respective time series. Based on these definitions, we developed a new data analysis algorithm that identifies edges that show differing levels of synchrony between two distinct task conditions and that occur in dense packs with similar characteristics. Hence, we call this approach "Task-related Edge Density" (TED). TED proved to be a very strong marker for dynamic network formation that easily lends itself to statistical analysis using large-scale statistical inference. A major advantage of TED compared to other methods is that it does not depend on any specific hemodynamic response model, and it also does not require a presegmentation of the data for dimensionality reduction as it can handle large networks consisting of tens of thousands of voxels. We applied TED to fMRI data of a finger-tapping and an emotion-processing task provided by the Human Connectome Project. TED revealed network-based involvement of a large number of brain areas that evaded detection using traditional GLM-based analysis. We show that our proposed method provides an entirely new window into the immense complexity of human brain function.

  16. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method automatically estimates the cloud statistics of a Formosat-2 image. Second, cloud coverage is estimated by manual examination of the image. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would clearly increase the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, building mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method comprising pre-processing and post-processing analysis. In the pre-processing analysis, cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, re-examination of non-cloudy pixels, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the pre-processing results and increase the efficiency of the manual examination.
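
    A rough sketch of the pre-processing step, assuming a single band: K-means clustering flags the brightest cluster as cloud, and an Otsu threshold provides an independent cross-check. The synthetic image and parameter choices are illustrative only, not the NSPO pipeline.

    ```python
    # Cluster pixel intensities with K-means and cross-check against an Otsu
    # threshold to flag cloudy pixels; a synthetic image stands in for a
    # Formosat-2 band.
    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(3)
    img = rng.normal(0.3, 0.05, size=(128, 128))
    img[40:80, 40:90] += 0.5                      # a bright "cloud" patch

    # K-means on reflectance: the brightest cluster is the cloud candidate.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(img.reshape(-1, 1))
    cloud_cluster = np.argmax(km.cluster_centers_.ravel())
    mask_kmeans = (km.labels_ == cloud_cluster).reshape(img.shape)

    # Otsu gives an independent global threshold; agreement raises confidence.
    mask_otsu = img > threshold_otsu(img)
    mask = mask_kmeans & mask_otsu

    print(f"estimated cloud coverage: {mask.mean():.1%}")
    ```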

  17. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) endpoints have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
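
    The multiplicity problem the review identifies can be handled with standard corrections. A minimal sketch using statsmodels, with made-up p-values standing in for per-sub-dimension tests:

    ```python
    # Testing many HRQoL sub-dimensions at every assessment time inflates
    # type I error; Holm's step-down adjustment is one standard remedy.
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    pvals = np.array([0.003, 0.021, 0.048, 0.160, 0.370])  # one per sub-dimension
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")

    for p, pa, r in zip(pvals, p_adj, reject):
        print(f"raw p={p:.3f}  adjusted p={pa:.3f}  significant={bool(r)}")
    ```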

  18. Topographic optic disc analysis by Heidelberg retinal tomography in ocular Behçet's disease

    PubMed Central

    Berker, Nilufer; Elgin, Ufuk; Ozdal, Pinar; Batman, Aygen; Soykan, Emel; Ozkan, Seyhan S

    2007-01-01

    Aim To compare the topographic characteristics of the optic discs in patients with severe and mild ocular Behçet's disease (BD) by using Heidelberg retinal tomography (HRT). Methods This prospective study included 47 eyes of 47 patients with ocular BD who were being followed‐up at the Uveitis Clinic of the Ankara Ulucanlar Eye Research Hospital, Ankara, Turkey. The patients were divided into two groups. Group 1 consisted of 21 eyes with mild uveitis, and group 2 consisted of 26 eyes with severe uveitis. All patients underwent topographic optic disc analysis by HRT II, and the quantitative optic disc parameters of both groups were compared by non‐parametric Mann‐Whitney U test. Results The mean cup volume, rim volume, cup area, disc area and cup depth in group 1 were found to be statistically significantly greater than those in group 2 (p<0.0001, p = 0.03, p = 0.021, p = 0.01 and p = 0.017, respectively), while the difference between the mean cup‐to‐disc ratios in group 1 and group 2 was found to be statistically insignificant (p = 0.148). Conclusion A relationship was found between the severity of ocular BD and optic disc topography determined by HRT. In eyes with smaller optic discs, uveitis was observed to have a more severe course with more frequent relapses than in those with larger discs. PMID:17475703

  19. Topographic optic disc analysis by Heidelberg retinal tomography in ocular Behcet's disease.

    PubMed

    Berker, Nilufer; Elgin, Ufuk; Ozdal, Pinar; Batman, Aygen; Soykan, Emel; Ozkan, Seyhan S

    2007-09-01

    To compare the topographic characteristics of the optic discs in patients with severe and mild ocular Behçet's disease (BD) by using Heidelberg retinal tomography (HRT). This prospective study included 47 eyes of 47 patients with ocular BD who were being followed-up at the Uveitis Clinic of the Ankara Ulucanlar Eye Research Hospital, Ankara, Turkey. The patients were divided into two groups. Group 1 consisted of 21 eyes with mild uveitis, and group 2 consisted of 26 eyes with severe uveitis. All patients underwent topographic optic disc analysis by HRT II, and the quantitative optic disc parameters of both groups were compared by non-parametric Mann-Whitney U test. The mean cup volume, rim volume, cup area, disc area and cup depth in group 1 were found to be statistically significantly greater than those in group 2 (p<0.0001, p = 0.03, p = 0.021, p = 0.01 and p = 0.017, respectively), while the difference between the mean cup-to-disc ratios in group 1 and group 2 was found to be statistically insignificant (p = 0.148). A relationship was found between the severity of ocular BD and optic disc topography determined by HRT. In eyes with smaller optic discs, uveitis was observed to have a more severe course with more frequent relapses than in those with larger discs.

  20. Further validation of the Health Promoting Activities Scale with mothers of typically developing children.

    PubMed

    Bourke-Taylor, Helen; Lalor, Aislinn; Farnworth, Louise; Pallant, Julie F

    2014-10-01

    The Health Promoting Activities Scale (HPAS) measures the frequency with which mothers participate in self-selected leisure activities that promote health and wellbeing. The scale was originally validated with mothers of school-aged children with disabilities, and the current article extends this research using a comparative sample of mothers of typically developing school-aged children. Australian mothers (N = 263) completed a questionnaire containing the HPAS, a measure of depression, anxiety and stress (DASS-21) and questions concerning their weight, height, sleep quality and demographics. Statistical analysis assessed the underlying structure, internal consistency and construct validity of the HPAS, with inferential statistics used to investigate construct validity. Exploratory factor analysis supported the unidimensionality of the HPAS, which showed good internal consistency (Cronbach's alpha = 0.78). Significantly lower HPAS scores were recorded for women who were obese; had elevated levels of depression, anxiety and stress; had poor-quality sleep; or had heavy caring commitments. The mean HPAS score in this sample (M = 32.2) was significantly higher than that previously reported for mothers of children with a disability (M = 21.6; P < 0.001). Further psychometric evaluation continues to support the HPAS as a sound instrument measuring the frequency with which women participate in meaningful occupations associated with differences in mental health, wellbeing and other health indicators. © 2014 Occupational Therapy Australia.
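
    Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from an item-by-respondent matrix. A small sketch with simulated item responses (the data and item count are assumptions):

    ```python
    # Cronbach's alpha from a respondents-by-items matrix, using the standard
    # formula alpha = k/(k-1) * (1 - sum(item variances) / total variance).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = respondents, columns = scale items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(4)
    trait = rng.normal(size=(263, 1))                 # latent "health promotion"
    items = trait + rng.normal(0, 1.0, size=(263, 8)) # 8 noisy indicators
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```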

  1. Development and evaluation of the Internalized Racism in Asian Americans Scale (IRAAS).

    PubMed

    Choi, Andrew Young; Israel, Tania; Maeda, Hotaka

    2017-01-01

    This article presents the development and psychometric evaluation of the Internalized Racism in Asian Americans Scale (IRAAS), which was designed to measure the degree to which Asian Americans internalized hostile attitudes and negative messages targeted toward their racial identity. Items were developed on the basis of prior literature, vetted through expert feedback and cognitive interviews, and administered to 655 Asian American participants through Amazon Mechanical Turk. Exploratory factor analysis with a random subsample (n = 324) yielded a psychometrically robust preliminary measurement model consisting of 3 factors: Self-Negativity, Weakness Stereotypes, and Appearance Bias. Confirmatory factor analysis with a separate subsample (n = 331) indicated that the proposed correlated factors model was strongly consistent with the observed data. Factor determinacies were high and demonstrated that the specified items adequately measured their intended factors. Bifactor modeling further indicated that this multidimensionality could be univocally represented for the purpose of measurement, including the use of a mean total score representing a single continuum of internalized racism on which individuals vary. The IRAAS statistically predicted depressive symptoms, and demonstrated statistically significant correlations in theoretically expected directions with four dimensions of collective self-esteem. These results provide initial validity evidence supporting the use of the IRAAS to measure aspects of internalized racism in this population. Limitations and research implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. A Mokken scale analysis of the peer physical examination questionnaire.

    PubMed

    Vaughan, Brett; Grace, Sandra

    2018-01-01

    Peer physical examination (PPE) is a teaching and learning strategy utilised in most health profession education programs. Perceptions of participating in PPE have been described in the literature, focusing on areas of the body students are willing, or unwilling, to examine. A small number of questionnaires exist to evaluate these perceptions; however, none has described the measurement properties that would allow them to be used longitudinally. The present study undertook a Mokken scale analysis of the Peer Physical Examination Questionnaire (PPEQ) to evaluate its dimensionality and structure when used with Australian osteopathy students. Students enrolled in Year 1 of the osteopathy programs at Victoria University (Melbourne, Australia) and Southern Cross University (Lismore, Australia) were invited to complete the PPEQ prior to their first practical skills examination class. R, an open-source statistics program, was used to generate the descriptive statistics and perform a Mokken scale analysis. Mokken scale analysis is a non-parametric item response theory approach that is used to cluster items measuring a latent construct. Initial analysis suggested the PPEQ did not form a single scale. Further analysis identified three subscales: 'comfort', 'concern', and 'professionalism and education'. The properties of each subscale suggested they were unidimensional with variable internal structures. The 'comfort' subscale was the strongest of the three identified. All subscales demonstrated acceptable reliability estimation statistics (McDonald's omega > 0.75), supporting the calculation of a sum score for each subscale. The subscales identified are consistent with the literature. The 'comfort' subscale may be useful to longitudinally evaluate student perceptions of PPE. Further research is required to evaluate changes with PPE and the utility of the questionnaire with other health profession education programs.

  3. An outline of graphical Markov models in dentistry.

    PubMed

    Helfenstein, U; Steiner, M; Menghini, G

    1999-12-01

    In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.

  4. Influence of Stepped Osteotomy on Primary Stability of Implants Inserted in Low-Density Bone Sites: An In Vitro Study.

    PubMed

    Degidi, Marco; Daprile, Giuseppe; Piattelli, Adriano

    The aims of this study were to evaluate the ability of a stepped osteotomy to improve dental implant primary stability in low-density bone sites and to investigate possible correlations between primary stability parameters. The study was performed on fresh humid bovine bone classified as type III. The test group consisted of 30 Astra Tech EV implants inserted following the protocol provided by the manufacturer. The first control group consisted of 30 Astra Tech EV implants inserted in sites without underpreparation of the apical portion. The second control group consisted of 30 Astra Tech TX implants inserted following the protocol provided by the manufacturer. Implant insertion was performed at a predetermined speed of 30 rpm. The insertion torque data were recorded and exported as a curve; using a trapezoidal integration technique, the area under the curve was calculated. This area represents the variable torque work (VTW). Peak insertion torque (pIT) and resonance frequency analysis (RFA) were also recorded. A Mann-Whitney test showed that the mean VTW was significantly higher in the test group than in the first and second control groups; furthermore, statistical analysis showed that pIT was also significantly higher in the test group than in the first and second control groups. Analyzing RFA values, only the difference between the test group and the second control group was statistically significant. Pearson correlation analysis showed a very strong positive correlation between pIT and VTW values in all groups; furthermore, it showed a positive correlation between pIT and RFA values, and between VTW and RFA values, only in the test group. Within the limitations of an in vitro study, the results show that stepped osteotomy can be a viable method to improve implant primary stability in low-density bone sites and that, when a traditional osteotomy method is performed, RFA shows no correlation with pIT or VTW.
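
    The VTW computation is a direct application of trapezoidal integration to the recorded torque curve. A minimal sketch with a synthetic torque trace (units and functional form are assumed for illustration):

    ```python
    # Variable torque work (VTW): the area under the insertion-torque curve,
    # computed by the trapezoidal rule; the torque trace below is synthetic.
    import numpy as np

    t = np.linspace(0.0, 12.0, 121)               # seconds of insertion at 30 rpm
    torque = 5.0 + 2.5 * t + 0.8 * np.sin(t)      # N*cm, rising as the implant seats

    vtw = np.trapz(torque, t)                     # sum of trapezoid areas
    peak_it = torque.max()                        # peak insertion torque (pIT)
    print(f"VTW = {vtw:.1f} N*cm*s, pIT = {peak_it:.1f} N*cm")
    ```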

  5. Gate line edge roughness amplitude and frequency variation effects on intra die MOS device characteristics

    NASA Astrophysics Data System (ADS)

    Hamadeh, Emad; Gunther, Norman G.; Niemann, Darrell; Rahman, Mahmud

    2006-06-01

    Random fluctuations in fabrication process outcomes such as gate line edge roughness (LER) give rise to corresponding fluctuations in scaled-down MOS device characteristics. A thermodynamic-variational model is presented to study the effects of LER on the threshold voltage and capacitance of sub-50 nm MOS devices. Conceptually, we treat the geometric definition of the MOS devices on a die as a collection of gates. In turn, each of these gates has an area, A, and a perimeter, P, defined by nominally straight lines subject to random process outcomes producing roughness. We treat roughness as deviations from straightness consisting of both a transverse amplitude and a longitudinal wavelength, each having a lognormal distribution. We obtain closed-form expressions for the variance of threshold voltage (Vth) and device capacitance (C) at Onset of Strong Inversion (OSI) for a small device. Using our variational model, we characterized the device electrical properties, such as σVth and σC, in terms of the statistical parameters of the roughness amplitude and spatial frequency, i.e., inverse roughness wavelength. We then verified our model with numerical analysis of Vth roll-off for small devices and of σVth due to dopant fluctuation. Our model was also benchmarked against TCAD simulations of σVth as a function of LER. We then extended our analysis to predict variations in σVth and σC versus average LER spatial frequency and amplitude, and oxide thickness. Given the intuitive expectation that LER of very short wavelengths must also have small amplitude, we investigated the case in which the amplitude mean is inversely related to the frequency mean, and compared it with the situation in which the amplitude and frequency means are unrelated. Given also that the gate perimeter may carry a different LER signature on each side, we extended our analysis to the case in which the statistical difference in LER between gate sides is moderate, as well as when it is significantly large.

  6. Understanding nanocellulose chirality and structure–properties relationship at the single fibril level

    PubMed Central

    Usov, Ivan; Nyström, Gustav; Adamcik, Jozef; Handschin, Stephan; Schütz, Christina; Fall, Andreas; Bergström, Lennart; Mezzenga, Raffaele

    2015-01-01

    Nanocellulose fibrils are ubiquitous in nature and nanotechnologies but their mesoscopic structural assembly is not yet fully understood. Here we study the structural features of rod-like cellulose nanoparticles on a single particle level, by applying statistical polymer physics concepts to electron and atomic force microscopy images, and we assess their physical properties via quantitative nanomechanical mapping. We show evidence of right-handed chirality, observed both on bundles and on single fibrils. Statistical analysis of contours from microscopy images shows a non-Gaussian kink angle distribution. This is inconsistent with a structure consisting of alternating amorphous and crystalline domains along the contour and supports process-induced kink formation. The intrinsic mechanical properties of nanocellulose are extracted from nanoindentation in the transversal direction and from the persistence length method in the longitudinal direction. The structural analysis is pushed to the level of single cellulose polymer chains, and their smallest associated unit with a proposed 2 × 2 chain-packing arrangement. PMID:26108282
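
    The persistence-length analysis mentioned above typically relies on the decay of tangent-tangent correlations along traced contours; for chains equilibrated in two dimensions, ⟨cos θ(s)⟩ = exp(−s/2P). A sketch on a simulated wormlike chain (all parameters are illustrative):

    ```python
    # Persistence length from tangent correlations on a simulated 2-D contour:
    # the tangent angle diffuses with variance step/P per step, so
    # <cos(theta(s))> = exp(-s / (2 * P)).
    import numpy as np

    rng = np.random.default_rng(5)
    P_true, step, n = 50.0, 1.0, 4000              # nm persistence length, nm step
    dtheta = rng.normal(0.0, np.sqrt(step / P_true), n)
    theta = np.cumsum(dtheta)

    # Tangent-tangent correlation as a function of contour separation s.
    seps = np.arange(1, 200)
    corr = np.array([np.mean(np.cos(theta[s:] - theta[:-s])) for s in seps])

    # Fit log <cos> = -s / (2P), so P follows from the slope.
    slope = np.polyfit(seps * step, np.log(corr), 1)[0]
    print(f"estimated persistence length: {-1 / (2 * slope):.1f} nm (true {P_true})")
    ```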

  7. General solution of the chemical master equation and modality of marginal distributions for hierarchic first-order reaction networks.

    PubMed

    Reis, Matthias; Kromer, Justus A; Klipp, Edda

    2018-01-20

    Multimodality is a phenomenon which complicates the analysis of statistical data based exclusively on mean and variance. Here, we present criteria for multimodality in hierarchic first-order reaction networks, consisting of catalytic and splitting reactions. Those networks are characterized by independent and dependent subnetworks. First, we prove the general solvability of the Chemical Master Equation (CME) for this type of reaction network and thereby extend the class of solvable CMEs. Our general solution is analytical in the sense that it allows for a detailed analysis of its statistical properties. Given Poisson/deterministic initial conditions, we then prove the independent species to be Poisson/binomially distributed, while the dependent species exhibit generalized Poisson/Khatri Type B distributions. Generalized Poisson/Khatri Type B distributions are multimodal for an appropriate choice of parameters. We illustrate our criteria for multimodality by several basic models, as well as the well-known two-stage transcription-translation network and Bateman's model from nuclear physics. For both examples, multimodality was previously not reported.
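
    The two-stage gene expression network cited as an example can be simulated directly with Gillespie's algorithm; with slow promoter switching the stationary mRNA distribution becomes bimodal, the kind of multimodality the criteria address. The rates below are illustrative, not taken from the paper:

    ```python
    # Gillespie simulation of promoter toggling plus transcription and decay;
    # slow switching yields a bimodal mRNA copy-number distribution.
    import numpy as np

    rng = np.random.default_rng(6)

    def ssa(t_end=500.0, k_on=0.01, k_off=0.01, k_tx=1.0, k_deg=0.1):
        """Return the mRNA copy number at t_end for one realisation."""
        t, gene, m = 0.0, 0, 0                     # gene state: 0 = off, 1 = on
        while True:
            rates = np.array([k_on * (1 - gene),   # promoter switches on
                              k_off * gene,        # promoter switches off
                              k_tx * gene,         # transcription
                              k_deg * m])          # mRNA decay
            t += rng.exponential(1.0 / rates.sum())
            if t >= t_end:
                return m
            r = rng.choice(4, p=rates / rates.sum())
            if r == 0:
                gene = 1
            elif r == 1:
                gene = 0
            elif r == 2:
                m += 1
            else:
                m -= 1

    samples = np.array([ssa() for _ in range(300)])
    hist, edges = np.histogram(samples, bins=12)
    print("mRNA histogram (two modes expected):", hist)
    ```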

  8. Time and space in the middle paleolithic: Spatial structure and occupation dynamics of seven open-air sites.

    PubMed

    Clark, Amy E

    2016-05-06

    The spatial structure of archeological sites can help reconstruct the settlement dynamics of hunter-gatherers by providing information on the number and length of occupations. This study seeks to access this information through a comparison of seven sites. These sites are open-air and were all excavated over large spatial areas, up to 2,000 m², and are therefore ideal for spatial analysis, which was done using two complementary methods, lithic refitting and density zones. Both methods were assessed statistically using confidence intervals. The statistically significant results from each site were then compiled to evaluate trends that occur across the seven sites. These results were used to assess the "spatial consistency" of each assemblage and, through that, the number and duration of occupations. This study demonstrates that spatial analysis can be a powerful tool in research on occupation dynamics and can help disentangle the many occupations that often make up an archeological assemblage. © 2016 Wiley Periodicals, Inc.

  9. Statistical analysis of the surface figure of the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John

    2012-09-01

    The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in the metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), the tertiary mirror (TM), and the Fine Steering Mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The surface figure error of these mirrors has been analyzed statistically. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into the system budgeting process for similar optical systems.
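
    Splitting rms WFE into spatial-frequency domains can be sketched via the discrete Parseval relation: band-limited rms is the square root of the summed FFT power in that band. The surface map and band edges below are assumptions for illustration, not the JWST budget:

    ```python
    # Bin surface-figure error into spatial-frequency bands with a 2-D FFT;
    # by Parseval's theorem the band powers sum to the total mean-square WFE.
    import numpy as np

    rng = np.random.default_rng(7)
    n, pitch = 256, 1.0 / 256                      # samples across a 1 m segment
    surf = rng.normal(0, 1.0, (n, n))              # nm, stand-in surface error map
    surf -= surf.mean()

    spec = np.fft.fftshift(np.fft.fft2(surf)) / surf.size
    power = np.abs(spec) ** 2                      # power.sum() equals surf.var()
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))
    fr = np.hypot(*np.meshgrid(fx, fx))            # radial frequency (cycles/m)

    for lo, hi, name in [(0, 5, "low"), (5, 30, "mid"), (30, np.inf, "high")]:
        band = (fr >= lo) & (fr < hi)
        print(f"{name:>4}-frequency rms WFE: {np.sqrt(power[band].sum()):.3f} nm")
    ```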

  10. Increased frequencies of aberrant sperm as indicators of mutagenic damage in mice.

    PubMed

    Soares, E R; Sheridan, W; Haseman, J K; Segall, M

    1979-02-01

    We have tested the effects of TEM in 3 strains of mice using the sperm morphology assay. In addition, we have attempted to evaluate this test system with respect to experimental design, statistical problems and possible inter-laboratory differences. Treatment with TEM results in significant increases in the percentage of abnormally shaped sperm. These increases are readily detectable in sperm treated at the spermatocyte and spermatogonial stages. Our data indicate possible problems associated with inter-laboratory variation in slide analysis. We have found that, despite the introduction of such sources of variation, our data were consistent with respect to the effects of TEM. Another area of concern in the sperm morphology test is the presence of "outlier" animals. In our study, such animals comprised 4% of the total number of animals considered. Statistical analysis of the slides from these animals has shown that this problem can be dealt with and that, when recognized as such, "outliers" do not affect the outcome of the sperm morphology assay.

  11. Grain size analysis and depositional environment of shallow marine to basin floor, Kelantan River Delta

    NASA Astrophysics Data System (ADS)

    Afifah, M. R. Nurul; Aziz, A. Che; Roslan, M. Kamal

    2015-09-01

    Sediment samples consisting of Quaternary bottom sediments were collected from the shallow marine area off Kuala Besar, Kelantan, outwards to the basin floor of the South China Sea. Sixty-five samples were analysed for their grain-size distributions and statistical relationships. Basic statistics such as mean, standard deviation, skewness and kurtosis were calculated and used to differentiate the depositional environment of the sediments and to determine whether deposition was dominated by beach or river processes. The sediments varied in sorting from very well sorted to poorly sorted, from strongly negatively skewed to strongly positively skewed, and from extremely leptokurtic to very platykurtic in nature. Bivariate plots between the grain-size parameters were then interpreted, and the Coarsest-Median (CM) pattern showed a trend suggesting that the sediments were influenced by three ongoing hydrodynamic factors, namely turbidity currents, littoral drift and wave dynamics, which control the sediment distribution pattern in various ways.
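
    The four moment measures used in the study are available directly in scipy; a small sketch on a simulated phi-unit grain-size sample:

    ```python
    # Grain-size moment statistics (mean, sorting, skewness, kurtosis) on a
    # simulated sample expressed in phi units, phi = -log2(diameter in mm).
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(8)
    phi = rng.normal(2.0, 0.5, 300)

    print(f"mean     = {phi.mean():.2f} phi")
    print(f"sorting  = {phi.std(ddof=1):.2f} phi (standard deviation)")
    print(f"skewness = {skew(phi):.2f}")
    print(f"kurtosis = {kurtosis(phi, fisher=False):.2f}")   # Pearson convention
    ```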

  12. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude-modulated autoregressive noise of order one with a time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction-by-exchange-with-the-mean mixing model is a special case consisting of a linear regression with constant coefficients.
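
    The generation recipe can be sketched in a few lines: a deterministic trend plus AR(1) fluctuations whose coefficient and amplitude vary in time. All functional forms below are invented placeholders for the fitted trend, regression, and modulation terms described in the paper:

    ```python
    # Synthetic series: nonstationary trend plus amplitude-modulated AR(1)
    # noise with a time-varying autoregressive coefficient.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 1000
    t = np.arange(n)

    trend = 1.0 * np.exp(-t / 400.0)                 # nonstationary trend
    phi_t = 0.9 - 0.2 * t / n                        # time-varying AR(1) coefficient
    amp_t = 0.1 * (1 + np.sin(2 * np.pi * t / 250))  # time-varying noise amplitude

    eps = np.zeros(n)
    for i in range(1, n):
        eps[i] = phi_t[i] * eps[i - 1] + amp_t[i] * rng.normal()

    series = trend + eps
    print("first 5 values:", np.round(series[:5], 4))
    ```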

  13. Evidence for social learning in wild lemurs (Lemur catta).

    PubMed

    Kendal, Rachel L; Custance, Deborah M; Kendal, Jeremy R; Vale, Gillian; Stoinski, Tara S; Rakotomalala, Nirina Lalaina; Rasamimanana, Hantanirina

    2010-08-01

    Interest in social learning has been fueled by claims of culture in wild animals. These remain controversial because alternative explanations to social learning, such as asocial learning or ecological differences, remain difficult to refute. Compared with laboratory-based research, the study of social learning in natural contexts is in its infancy. Here, for the first time, we apply two new statistical methods, option-bias analysis and network-based diffusion analysis, to data from the wild, complemented by standard inferential statistics. Contrary to common thought regarding the cognitive abilities of prosimian primates, our evidence is consistent with social learning within subgroups in the ring-tailed lemur (Lemur catta), supporting the theory of directed social learning (Coussi-Korbel & Fragaszy, 1995). We also caution that, as the toolbox for capturing social learning in natural contexts grows, care is required in ensuring that the methods employed are appropriate, in particular regarding social dynamics among study subjects. Supplemental materials for this article may be downloaded from http://lb.psychonomic-journals.org/content/supplemental.

  14. A Powerful Procedure for Pathway-Based Meta-analysis Using Summary Statistics Identifies 43 Pathways Associated with Type II Diabetes in European Populations.

    PubMed

    Zhang, Han; Wheeler, William; Hyland, Paula L; Yang, Yifan; Shi, Jianxin; Chatterjee, Nilanjan; Yu, Kai

    2016-06-01

    Meta-analysis of multiple genome-wide association studies (GWAS) has become an effective approach for detecting single nucleotide polymorphism (SNP) associations with complex traits. However, it is difficult to integrate the readily accessible SNP-level summary statistics from a meta-analysis into more powerful multi-marker testing procedures, which generally require individual-level genetic data. We developed a general procedure called Summary based Adaptive Rank Truncated Product (sARTP) for conducting gene and pathway meta-analysis that uses only SNP-level summary statistics in combination with genotype correlation estimated from a panel of individual-level genetic data. We demonstrated the validity and power advantage of sARTP through empirical and simulated data. We conducted a comprehensive pathway-based meta-analysis with sARTP on type 2 diabetes (T2D) by integrating SNP-level summary statistics from two large studies consisting of 19,809 T2D cases and 111,181 controls with European ancestry. Among 4,713 candidate pathways from which genes in neighborhoods of 170 GWAS established T2D loci were excluded, we detected 43 T2D globally significant pathways (with Bonferroni corrected p-values < 0.05), which included the insulin signaling pathway and T2D pathway defined by KEGG, as well as the pathways defined according to specific gene expression patterns on pancreatic adenocarcinoma, hepatocellular carcinoma, and bladder carcinoma. Using summary data from 8 eastern Asian T2D GWAS with 6,952 cases and 11,865 controls, we showed that 7 of the 43 pathways identified in European populations remained significant in eastern Asians at the false discovery rate of 0.1. We created an R package and a web-based tool for sARTP with the capability to analyze pathways with thousands of genes and tens of thousands of SNPs.
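
    The adaptive rank truncated product idea can be illustrated in miniature: compute the product of the k smallest p-values for several truncation points, convert each to its own Monte Carlo p-value, and calibrate the minimum over k against the same null replicates. This toy version assumes independent p-values, whereas sARTP itself models SNP correlation from a reference panel; all numbers are invented.

    ```python
    # Toy adaptive rank truncated product (ARTP) for one "pathway".
    import numpy as np

    rng = np.random.default_rng(10)
    KS = (1, 5, 10)                                  # truncation points

    def rtp(pvals, k):
        """Rank truncated product: sum of logs of the k smallest p-values."""
        return np.log(np.sort(pvals)[:k]).sum()

    def artp_pvalue(pvals, n_null=2000):
        null = rng.uniform(size=(n_null, len(pvals)))
        null_stats = np.array([[rtp(row, k) for k in KS] for row in null])
        obs_stats = np.array([rtp(pvals, k) for k in KS])
        # Convert each truncation point's statistic into its own p-value,
        # adapt by taking the minimum over k, and calibrate that minimum
        # against the same null replicates (the adaptive step).
        obs_p = (null_stats <= obs_stats).mean(axis=0).min()
        null_p = np.array([(null_stats <= null_stats[i]).mean(axis=0).min()
                           for i in range(n_null)])
        return ((null_p <= obs_p).sum() + 1) / (n_null + 1)

    # 50 SNP p-values containing a handful of modest signals.
    pvals = np.concatenate([rng.uniform(0, 0.01, 4), rng.uniform(size=46)])
    print(f"pathway p-value ~ {artp_pvalue(pvals):.4f}")
    ```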

  15. A Powerful Procedure for Pathway-Based Meta-analysis Using Summary Statistics Identifies 43 Pathways Associated with Type II Diabetes in European Populations

    PubMed Central

    Zhang, Han; Wheeler, William; Hyland, Paula L.; Yang, Yifan; Shi, Jianxin; Chatterjee, Nilanjan; Yu, Kai

    2016-01-01

    Meta-analysis of multiple genome-wide association studies (GWAS) has become an effective approach for detecting single nucleotide polymorphism (SNP) associations with complex traits. However, it is difficult to integrate the readily accessible SNP-level summary statistics from a meta-analysis into more powerful multi-marker testing procedures, which generally require individual-level genetic data. We developed a general procedure called Summary based Adaptive Rank Truncated Product (sARTP) for conducting gene and pathway meta-analysis that uses only SNP-level summary statistics in combination with genotype correlation estimated from a panel of individual-level genetic data. We demonstrated the validity and power advantage of sARTP through empirical and simulated data. We conducted a comprehensive pathway-based meta-analysis with sARTP on type 2 diabetes (T2D) by integrating SNP-level summary statistics from two large studies consisting of 19,809 T2D cases and 111,181 controls with European ancestry. Among 4,713 candidate pathways from which genes in neighborhoods of 170 GWAS established T2D loci were excluded, we detected 43 T2D globally significant pathways (with Bonferroni corrected p-values < 0.05), which included the insulin signaling pathway and T2D pathway defined by KEGG, as well as the pathways defined according to specific gene expression patterns on pancreatic adenocarcinoma, hepatocellular carcinoma, and bladder carcinoma. Using summary data from 8 eastern Asian T2D GWAS with 6,952 cases and 11,865 controls, we showed that 7 of the 43 pathways identified in European populations remained significant in eastern Asians at the false discovery rate of 0.1. We created an R package and a web-based tool for sARTP with the capability to analyze pathways with thousands of genes and tens of thousands of SNPs. PMID:27362418

  16. [GSTP1, APC and RASSF1 gene methylation in prostate cancer samples: comparative analysis of the MS-HRM method and the Infinium HumanMethylation450 BeadChip array diagnostic value].

    PubMed

    Skorodumova, L O; Babalyan, K A; Sultanov, R; Vasiliev, A O; Govorov, A V; Pushkar, D Y; Prilepskaya, E A; Danilenko, S A; Generozov, E V; Larin, A K; Kostryukova, E S; Sharova, E I

    2016-11-01

    There is a clear need for molecular markers for prostate cancer (PC) risk stratification. Alteration of DNA methylation is one of the processes that occur during PC progression. Methylation-sensitive PCR with high-resolution melting curve analysis (MS-HRM) can be used for gene methylation analysis in routine laboratory practice; this method requires very small amounts of DNA. Numerous results have been accumulated on DNA methylation in PC samples analyzed with the Infinium HumanMethylation450 BeadChip (HM450). However, the consistency of MS-HRM results with chip hybridization results has not yet been examined. The aim of this study was to assess the consistency of the results of GSTP1, APC and RASSF1 gene methylation analysis in PC biopsy samples obtained by MS-HRM and by chip hybridization. The methylation levels of each gene determined by MS-HRM were statistically different between the group of PC tissue samples and the samples without signs of tumor growth. Chip hybridization data analysis confirmed the results obtained with MS-HRM. Differences in methylation levels between tumor tissue and histologically intact tissue of each sample, determined by MS-HRM and by chip hybridization, were consistent with each other. Thus, we showed that GSTP1, APC and RASSF1 gene methylation analysis using MS-HRM is suitable for the design of laboratory assays that differentiate PC tissue from tissue without signs of tumor growth.

  17. Reliability and Validity of the Italian Version of the Protocol of Orofacial Myofunctional Evaluation with Scores (I-OMES).

    PubMed

    Scarponi, Letizia; de Felicio, Claudia Maria; Sforza, Chiarella; Pimenta Ferreira, Claudia Lucia; Ginocchio, Daniela; Pizzorni, Nicole; Barozzi, Stefania; Mozzanica, Francesco; Schindler, Antonio

    2018-05-30

    To evaluate the reliability, validity, and responsiveness of the Italian OMES (I-OMES). The study consisted of 3 phases: (1) internal consistency and reliability, (2) validity, and (3) responsiveness analysis. The recruited population included 27 patients with orofacial myofunctional disorders (OMD) and 174 healthy volunteers. Forty-seven subjects, 18 healthy volunteers and all recruited patients with OMD, were assessed for inter-rater and test-retest reliability analysis. I-OMES and Nordic Orofacial Test - Screening (NOT-S) scores of the patients were correlated for concurrent validity analysis. I-OMES scores from 27 patients with OMD and 27 age- and gender-matched healthy subjects were compared to investigate construct validity. I-OMES scores before and after successful swallowing rehabilitation in patients were compared for responsiveness analysis. Adequate internal consistency (Cronbach α = 0.71) and strong inter-rater and test-retest reliability (intraclass correlation coefficient = 0.97 and 0.98, respectively) were found. I-OMES and NOT-S scores were significantly and inversely correlated (r = -0.38). A statistically significant difference (p < 0.001) was found between the pathological group and the control group for the total I-OMES score. The mean I-OMES score improved from 90 (78-102) to 99 (89-103) after myofunctional rehabilitation (p < 0.001). The I-OMES is a reliable and valid tool for evaluating OMD. © 2018 S. Karger AG, Basel.

  18. Assessing Auditory Processing Deficits in Tinnitus and Hearing Impaired Patients with the Auditory Behavior Questionnaire

    PubMed Central

    Diges, Isabel; Simón, Francisco; Cobo, Pedro

    2017-01-01

    Background and Purpose: Auditory processing disorders (APD), tinnitus and hearing loss (HL) are typical issues reported by patients in audiology clinics. These auditory impairments can be concomitant or mutually excluding. APD are not necessarily accompanied by significant HL, whereas many adults exhibit peripheral HL and typical cognitive deficits often associated with APD. Since HL, tinnitus and APD affect several parts of the ascending auditory pathway, from the periphery to the auditory cortex, there could be some interrelationship between them. For instance, tinnitus has been reported to degrade auditory localization capacity. Tinnitus is believed to be triggered by deafferentation of normal peripheral input to the central auditory system. This peripheral deficit can be accompanied by HL or not, since a type of permanent cochlear damage (and thus deafferentation) without an elevation of hearing thresholds might persist. Therefore, a combined study of APD, tinnitus and HL in the same cohort of patients is audiologically relevant and worthwhile. Methods: Statistical analysis is applied to a cohort of 305 patients attending an audiology clinic in Madrid (Spain). This group of patients is first categorized into four subgroups, namely, HLTG (with tinnitus and HL), NHLTG (with tinnitus and without HL), HLNTG (with HL but no tinnitus), and NHLNTG (neither tinnitus nor HL). The statistical variables include Age; Average Auditory Threshold (AAT), for assessing HL; Tinnitus Handicap Inventory (THI), for measuring tinnitus; and a new 25-item Auditory Behavior Questionnaire (ABQ), for scoring APD. Factor analysis is applied to arrange these items into 4 subscales. The internal consistency reliability of the ABQ is confirmed by calculating Cronbach's coefficient α. The test-retest reliability is assessed by the intraclass correlation coefficient, ICC. Statistical techniques applied to the data set include descriptive analysis of variables and Spearman rank correlations (ρ) between them. Results: Overall reliability of the ABQ is confirmed by an α value of 0.89 and an ICC of 0.91. Regarding internal consistency reliability, the four subscales show fairly good consistency, with α coefficients above 0.7. Average values of the statistical variables show a significantly lower age for patients with tinnitus and no HL, which may be a clue to noise overexposure in this segment of the population. These younger patients also show decreased ABQ scores and similar THI scores in comparison with patients in the other subgroups. A strong correlation (ρ = 0.63) was found between AAT and Age for the HLNTG subgroup. For the HLTG subgroup, a moderate correlation (ρ = 0.44) was found between ABQ and THI. Conclusion: The questionnaire used (ABQ), together with AAT and THI, can help to study comorbid hearing impairments in patients regularly attending an audiology clinic. PMID:28428741

  19. Latent structure and reliability analysis of the measure of body apperception: cross-validation for head and neck cancer patients.

    PubMed

    Jean-Pierre, Pascal; Fundakowski, Christopher; Perez, Enrique; Jean-Pierre, Shadae E; Jean-Pierre, Ashley R; Melillo, Angelica B; Libby, Rachel; Sargi, Zoukaa

    2013-02-01

    Cancer and its treatments are associated with psychological distress that can negatively impact self-perception, psychosocial functioning, and quality of life. Patients with head and neck cancers (HNC) are particularly susceptible to psychological distress. This study involved a cross-validation of the Measure of Body Apperception (MBA) for HNC patients. One hundred and twenty-two English-fluent HNC patients between 20 and 88 years of age completed the MBA on a Likert scale ranging from "1 = disagree" to "4 = agree." We assessed the latent structure and internal consistency reliability of the MBA using principal components analysis (PCA) and Cronbach's coefficient alpha (α), respectively. We determined the convergent and divergent validities of the MBA using correlations with the Hospital Anxiety and Depression Scale (HADS), observer disfigurement rating, and patients' clinical and demographic variables. The PCA revealed a coherent set of items that explained 38% of the variance. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.73 and Bartlett's test of sphericity was statistically significant (χ²(28) = 253.64; p < 0.001), confirming the suitability of the data for dimension reduction analysis. The MBA had good internal consistency reliability (α = 0.77) and demonstrated adequate convergent and divergent validities, based on statistically significant moderate correlations with the HADS (p < 0.01) and observer rating of disfigurement (p < 0.026), and nonsignificant correlations with patients' clinical and demographic variables: tumor location, age at diagnosis, and birth place (all p > 0.05). The MBA is a valid and reliable screening measure of body apperception for HNC patients.

  20. Network-based statistical comparison of citation topology of bibliographic databases

    PubMed Central

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions of their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of the DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of the DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231

  1. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
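
    Both image statistics discussed above are easy to prototype: a "wetness enhancing" transformation (saturation boost plus a darkening tone curve) and the hue entropy of an image. The sketch below uses scikit-image colour conversions on a random image; the gains and bin counts are arbitrary assumptions, not the authors' fitted values.

    ```python
    # Wetness-enhancing transformation and hue entropy on an RGB image.
    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb

    rng = np.random.default_rng(11)
    rgb = rng.uniform(size=(64, 64, 3))

    def wetness_enhance(rgb, sat_gain=1.5, tone_power=1.8):
        """Increase chromatic saturation and darken the luminance tone."""
        hsv = rgb2hsv(rgb)
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 1)   # saturation boost
        hsv[..., 2] = hsv[..., 2] ** tone_power               # darkening tone curve
        return hsv2rgb(hsv)

    def hue_entropy(rgb, bins=36):
        """Shannon entropy of the hue histogram (large = many colors)."""
        h = rgb2hsv(rgb)[..., 0].ravel()
        p, _ = np.histogram(h, bins=bins, range=(0, 1))
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    wet = wetness_enhance(rgb)
    print(f"hue entropy: {hue_entropy(rgb):.2f} -> {hue_entropy(wet):.2f} bits "
          f"(max {np.log2(36):.2f}); hue is preserved, so entropy barely moves")
    ```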

  2. Identification of crop cultivars with consistently high lignocellulosic sugar release requires the use of appropriate statistical design and modelling

    PubMed Central

    2013-01-01

    Background In this study, a multi-parent population of barley cultivars was grown in the field for two consecutive years, and straw saccharification (sugar release by enzymes) was subsequently analysed in the laboratory to identify the cultivars with the highest consistent sugar yield. This experiment was used to assess the benefit of accounting for both the multi-phase and multi-environment aspects of large-scale phenotyping experiments with field-grown germplasm through sound statistical design and analysis. Results Complementary designs at both the field and laboratory phases of the experiment ensured that non-genetic sources of variation could be separated from the genetic variation of cultivars, which was the main target of the study. The field phase included biological replication and plot randomisation. The laboratory phase employed re-randomisation and technical replication of samples within a batch, with a subset of cultivars chosen as duplicates that were randomly allocated across batches. The resulting data were analysed using a linear mixed model that incorporated field and laboratory variation and a cultivar-by-trial interaction, ensuring that the cultivar means were estimated more accurately than if the non-genetic variation were ignored. The heritability detected was more than doubled in each year of the trial by accounting for the non-genetic variation in the analysis, clearly showing the benefit of this design and approach. Conclusions The importance of accounting for both field and laboratory variation, as well as the cultivar-by-trial interaction, by fitting a single statistical model (multi-environment trial, MET, model) was evidenced by the changes in the list of the top 40 cultivars showing the highest sugar yields. Failure to account for this interaction resulted in only eight cultivars being consistently in the top 40 across years, whereas under the MET model the correspondence between rankings was much higher, at 25 of the top 40. This approach is suited to any multi-phase and multi-environment population-based genetic experiment. PMID:24359577
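
    The single-model idea can be sketched with a linear mixed model: cultivar as the fixed genetic effect of interest and a random term absorbing laboratory batch variation. The column names and simulated design below are assumptions, not the study's actual data or its full MET model:

    ```python
    # Mixed model separating genetic from non-genetic variation: cultivar as
    # a fixed effect, laboratory batch as a random intercept.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(12)
    n_cult, reps = 20, 6
    cultivars = [f"c{i}" for i in range(n_cult)]
    batches = [f"b{j}" for j in range(10)]
    df = pd.DataFrame({
        "cultivar": np.repeat(cultivars, reps),
        "batch": rng.choice(batches, n_cult * reps),
    })
    cult_eff = dict(zip(cultivars, rng.normal(0, 1.0, n_cult)))   # genetic signal
    batch_eff = dict(zip(batches, rng.normal(0, 0.5, 10)))        # lab nuisance
    df["sugar"] = (df.cultivar.map(cult_eff) + df.batch.map(batch_eff)
                   + rng.normal(0, 0.3, len(df)))

    fit = smf.mixedlm("sugar ~ C(cultivar)", df, groups=df["batch"]).fit()
    print(fit.params.head())            # cultivar effects plus batch variance
    ```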

  3. [Assessment of psychometric properties of the academic involvement questionnaire, expectations version].

    PubMed

    Pérez V, Cristhian; Ortiz M, Liliana; Fasce H, Eduardo; Parra P, Paula; Matus B, Olga; McColl C, Peter; Torres A, Graciela; Meyer K, Andrea; Márquez U, Carolina; Ortega B, Javiera

    2015-11-01

    The Academic Involvement Questionnaire, Expectations version (CIA-A), assesses expectations of involvement in studies, a relevant predictor of student success. However, evidence of its validity and reliability in Chile is scarce, and for medical students there is no evidence at all. To evaluate the factorial structure and internal consistency of the CIA-A in Chilean medical school freshmen, the survey was applied to 340 medicine freshmen chosen by non-probability quota sampling. They answered a version of the CIA-A back-translated from Portuguese to Spanish, plus a sociodemographic questionnaire. For the psychometric analysis of the CIA-A, an exploratory factor analysis was carried out, the reliability of the factors was calculated, a descriptive analysis was conducted and their correlations were assessed. Five factors were identified: vocational involvement, institutional involvement, social involvement, use of resources and student participation. Their reliabilities (Cronbach's alpha) ranged from 0.71 to 0.87. The factors also showed statistically significant correlations with each other. The identified factor structure is theoretically consistent with that of the original version, differing in only one factor. In addition, the factors' internal consistency was adequate for their use in research. This supports the construct validity and reliability of the CIA-A for assessing involvement expectations in medical school freshmen.

  4. Implementation of Discovery Projects in Statistics

    ERIC Educational Resources Information Center

    Bailey, Brad; Spence, Dianna J.; Sinn, Robb

    2013-01-01

    Researchers and statistics educators consistently suggest that students will learn statistics more effectively by conducting projects through which they actively engage in a broad spectrum of tasks integral to statistical inquiry, in the authentic context of a real-world application. In keeping with these findings, we share an implementation of…

  5. [The reliability of a questionnaire regarding Colombian children's physical activity].

    PubMed

    Herazo-Beltrán, Aliz Y; Domínguez-Anaya, Regina

    2012-10-01

    Reporting the test-retest reliability and internal consistency of the Physical Activity Questionnaire for school children (PAQ-C). This was a descriptive study of 100 school children aged 9 to 11 years attending a school in Cartagena, Colombia. The sample was randomly selected. The PAQ-C was administered twice, one week apart, after informed consent forms had been signed by the children's parents and school officials. Cronbach's alpha coefficient was used for assessing internal consistency and the intra-class correlation coefficient for test-retest reliability; SPSS (version 17.0) was used for statistical analysis. The questionnaire's internal consistency was 0.73 at the first measurement and 0.78 at the second; the intra-class correlation coefficient was 0.60. There were differences between boys and girls on both measurements. The PAQ-C had acceptable internal consistency and test-retest reliability, making it useful for measuring children's self-reported physical activity and a valuable tool for population studies in Colombia.

  6. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest-redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power than extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) are sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift are particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that the combination of the largest and second-largest observations is most likely to be realized with similar values, with a broadly peaked joint distribution. When combining the largest observation with higher orders, a larger gap between the observations becomes more likely, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order-statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the Meta-Catalogue of X-ray detected Clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
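
    The backbone of such an analysis is the exact distribution of the nth largest of N independent draws, which reduces to a binomial tail. A short sketch with scipy, checked by Monte Carlo (standard normal draws stand in for the cluster observable):

    ```python
    # For N iid draws with CDF F, the nth largest value is below x exactly
    # when at most n-1 draws exceed x: P = BinomCDF(n-1; N, 1-F(x)).
    import numpy as np
    from scipy.stats import binom, norm

    def cdf_nth_largest(x, n, N, cdf=norm.cdf):
        """P(nth largest of N iid draws <= x)."""
        return binom.cdf(n - 1, N, 1.0 - cdf(x))

    N, x = 1000, 3.0
    for n in (1, 2, 5, 10):
        print(f"P({n}th largest <= {x}) = {cdf_nth_largest(x, n, N):.3f}")

    # Quick Monte Carlo check of the n = 2 case.
    rng = np.random.default_rng(13)
    second = np.sort(rng.normal(size=(20000, N)), axis=1)[:, -2]
    print("MC estimate:", (second <= x).mean())
    ```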

  7. Variance of foot biomechanical parameters across age groups for the elderly people in Romania

    NASA Astrophysics Data System (ADS)

    Deselnicu, D. C.; Vasilescu, A. M.; Militaru, G.

    2017-10-01

    The paper presents the results of a fieldwork study conducted in order to analyze major causal factors that influence the foot deformities and pathologies of elderly women in Romania. The study has an exploratory and descriptive nature and uses quantitative methodology. The sample consisted of 100 elderly women from Romania, ranging from 55 to over 75 years of age. The collected data were analyzed on multiple dimensions using statistical analysis software. The analysis of variance demonstrated significant differences across age groups in several biomechanical parameters, such as travel speed, toe-off phase and support phase, in the case of elderly women.
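
    For readers unfamiliar with the test used, a one-way analysis of variance across age groups can be sketched in a few lines of Python; the group means, spreads and sample sizes below are placeholders, not the study's data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Hypothetical travel-speed samples (m/s) for three age bands.
        speed_55_64 = rng.normal(1.20, 0.15, 35)
        speed_65_74 = rng.normal(1.05, 0.15, 35)
        speed_75_up = rng.normal(0.90, 0.15, 30)

        f_stat, p_value = stats.f_oneway(speed_55_64, speed_65_74, speed_75_up)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")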

  8. QGene 4.0, an extensible Java QTL-analysis platform.

    PubMed

    Joehanes, Roby; Nelson, James C

    2008-12-01

    Of many statistical methods developed to date for quantitative trait locus (QTL) analysis, only a limited subset is available in public software allowing their exploration, comparison and practical application by researchers. We have developed QGene 4.0, a plug-in platform that allows execution and comparison of a variety of modern QTL-mapping methods and supports third-party addition of new ones. The software accommodates line-cross mating designs consisting of any arbitrary sequence of selfing, backcrossing, intercrossing and haploid-doubling steps; includes map, population, and trait simulators; and is scriptable. Software and documentation are available at http://coding.plantpath.ksu.edu/qgene. Source code is available on request.

  9. Dysphagia management: an analysis of patient outcomes using VitalStim therapy compared to traditional swallow therapy.

    PubMed

    Kiger, Mary; Brown, Catherine S; Watkins, Lynn

    2006-10-01

    This study compares the outcomes of VitalStim therapy to outcomes of traditional swallowing therapy for deglutition disorders. Twenty-two patients had an initial and a follow-up videofluoroscopic swallowing study or fiberoptic endoscopic evaluation of swallowing and were divided into an experimental group that received VitalStim treatments and a control group that received traditional swallowing therapy. Outcomes were analyzed for changes in oral and pharyngeal phase dysphagia severity, dietary consistency restrictions, and progression from nonoral to oral intake. Results of chi-square analysis showed no statistically significant difference in outcomes between the experimental and control groups.
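
    A chi-square comparison of categorical outcomes between two therapy groups is easy to reproduce; the 2x2 counts below are invented for illustration, since the paper's raw tables are not reproduced here.

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical 2x2 table: rows = VitalStim / traditional therapy,
        # columns = improved / not improved.
        table = np.array([[8, 3],
                          [7, 4]])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")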

  10. Over ten thousand cases and counting: acidbase.org is serving the critical care community.

    PubMed

    Elbers, Paul W G; Van Regenmortel, Niels; Gatz, Rainer

    2015-01-01

    Acidbase.org has been serving the critical care community for over a decade. The backbone of this online resource is Peter Stewart's original text "How to Understand Acid-Base", which is freely available to everyone. In addition, Stewart's Textbook of Acid Base, which puts the theory in today's clinical context, is available for purchase from the website. However, many intensivists use acidbase.org on a daily basis for its educational content and in particular for its analysis module. This review provides an overview of the history of the website, a tutorial, and descriptive statistics of over 10,000 queries submitted to the analysis module.

  11. Contribution of artificial intelligence to the knowledge of prognostic factors in Hodgkin's lymphoma.

    PubMed

    Buciński, Adam; Marszałł, Michał Piotr; Krysiński, Jerzy; Lemieszek, Andrzej; Załuski, Jerzy

    2010-07-01

    Hodgkin's lymphoma is one of the most curable malignancies, and most patients achieve a lasting complete remission. In this study, artificial neural network (ANN) analysis was shown to identify significant prognostic factors for 5-year recurrence after lymphoma treatment. Data from 114 patients treated for Hodgkin's disease were available for evaluation and comparison. A total of 31 variables were subjected to ANN analysis. The ANN approach, as an advanced multivariate data processing method, was shown to provide objective prognostic data. Some of these prognostic factors are consistent or even identical with the factors evaluated earlier by other statistical methods.
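
    The abstract does not specify the network architecture, so the following scikit-learn sketch is only a schematic of the approach: a small feed-forward classifier trained on a 114 x 31 patient-by-variable matrix. The data here are random placeholders.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler

        # Placeholder data: 114 patients x 31 variables, binary 5-year recurrence.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(114, 31))
        y = rng.integers(0, 2, size=114)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
        scaler = StandardScaler().fit(X_tr)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
        clf.fit(scaler.transform(X_tr), y_tr)
        print("held-out accuracy:", clf.score(scaler.transform(X_te), y_te))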

  12. Factors affecting job satisfaction in nurse faculty: a meta-analysis.

    PubMed

    Gormley, Denise K

    2003-04-01

    Evidence in the literature suggests job satisfaction can make a difference in keeping qualified workers on the job, but little research has been conducted focusing specifically on nursing faculty. Several studies have examined nurse faculty satisfaction in relationship to one or two influencing factors. These factors include professional autonomy, leader role expectations, organizational climate, perceived role conflict and role ambiguity, leadership behaviors, and organizational characteristics. This meta-analysis attempts to synthesize the various studies conducted on job satisfaction in nursing faculty and analyze which influencing factors have the greatest effect. The procedure used for this meta-analysis consisted of reviewing studies to identify factors influencing job satisfaction, research questions, sample size reported, instruments used for measurement of job satisfaction and influencing factors, and results of statistical analysis.

  13. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistical data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with a parametric statistical data analysis module. The output of the system is a parametric statistical data analysis that can be used by students, lecturers, and users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language. The server-side programming language is PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology is the Waterfall methodology, with stages of analysis, design, coding, testing, implementation and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make it easier for students to understand statistical analysis on mobile devices.

  14. Evaluating NASA S-NPP continuity cloud products for climate research using CALIPSO, CATS and Level-3 analysis

    NASA Astrophysics Data System (ADS)

    Holz, R.; Platnick, S. E.; Meyer, K.; Frey, R.; Wind, G.; Ackerman, S. A.; Heidinger, A. K.; Botambekov, D.; Yorks, J. E.; McGill, M. J.

    2016-12-01

    The launch of VIIRS and CrIS on Suomi NPP in the fall of 2011 introduced the next generation of U.S. operational polar orbiting environmental observations. Similar to MODIS, VIIRS provides visible and IR observations at moderate spatial resolution and has a 1:30 pm equatorial crossing time consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks the water vapor and CO2 absorbing channels that are used by the MODIS cloud algorithms both for cloud detection and to retrieve cloud top height and cloud emissivity for ice clouds. Given the different spectral and spatial characteristics of VIIRS, we seek to understand the extent to which the 15-year MODIS climate record can be continued with VIIRS/CrIS observations while maintaining consistent sensitivities across the observational systems. This presentation will focus on the evaluation of the latest version of the NASA-funded cloud retrieval algorithms being developed for climate research. We will present collocated inter-comparisons between the imagers (VIIRS and MODIS Aqua) and the CALIPSO and Cloud Aerosol Transport System (CATS) lidar observations, as well as long-term statistics based on a new Level-3 (L3) product being developed as part of the project. The CALIPSO inter-comparisons will focus on cloud detection (cloud mask), with an emphasis on the impact of recent modifications to the cloud mask and how these changes affect the global statistics. For the first time, we will provide inter-comparisons between two different cloud lidar systems (CALIOP and CATS) and investigate how the different sensitivities of the lidars impact the cloud mask and cloud comparisons. Using CALIPSO and CATS as the reference, and applying the same algorithms to VIIRS and MODIS, we will discuss the consistency between products from both imagers. The L3 analysis will focus on the regional and seasonal consistency between the suite of MODIS and VIIRS continuity cloud products. Do systematic biases remain when consistent algorithms are applied to different observations (MODIS or VIIRS)?

  15. Comparative analysis of rationale used by dentists and patient for final esthetic outcome of dental treatment.

    PubMed

    Reddy, S Varalakshmi; Madineni, Praveen Kumar; Sudheer, A; Gujjarlapudi, Manmohan Choudary; Sreedevi, B; Reddy, Patelu Sunil Kumar

    2013-05-01

    To compare and evaluate the perceptions of esthetics among dentists and patients regarding the final esthetic outcome of a dental treatment. Esthetics is a matter of perception and is associated with the way different people look at an object. What constitutes esthetics for one person may not be acceptable to another; it is therefore subjective in nature. This becomes most obvious during the post-treatment evaluation of esthetics by the dentist and the patient concerned, whose opinions seldom match. Hence, this study is a necessary part of the process of understanding the minds of dentist and patient regarding what constitutes esthetics. A survey was conducted by means of a questionnaire consisting of 10 questions, on two groups of people. The first group consisted of 100 dentists picked at random in Kanyakumari district of Tamil Nadu, India. The second group consisted of 100 patients who required complete denture prostheses, divided into two subgroups: subgroup A consisting of 50 men and subgroup B consisting of 50 women. In each subgroup, 25 patients were in the 40 to 50 year age group and 25 in the 50 to 60 year age group. The questionnaire was given to both groups to fill in, and the responses were then statistically analyzed to look for patterns of thought among them. Results were subjected to statistical analysis by Student's t-test. Perceptions of esthetics differ between the dentist, who is educated in the esthetic principles of treatment, and the patient, who has not received such education. Since the questions were formulated such that patients could understand the underlying problem, the final outcome of the survey is evidence that dentists need to take into account what the patient regards as esthetic in order to provide satisfactory treatment. CLINICAL AND ACADEMIC SIGNIFICANCE: The current study helps the dentist better educate the patient regarding esthetics so that the patient appreciates the final, scientifically based esthetic outcome of treatment. It also helps dental students understand patients' underlying thought processes regarding esthetics.

  16. An instrument to assess the statistical intensity of medical research papers.

    PubMed

    Nieminen, Pentti; Virtanen, Jorma I; Vähänikkilä, Hannu

    2017-01-01

    There is widespread evidence that statistical methods play an important role in original research articles, especially in medical research. The evaluation of statistical methods and reporting in journals suffers from a lack of standardized methods for assessing the use of statistics. The objective of this study was to develop and evaluate an instrument to assess the statistical intensity of research articles in a standardized way. A checklist-type measurement scale was developed by selecting and refining items from previous reports about the statistical contents of medical journal articles and from published guidelines for statistical reporting. A total of 840 original medical research articles published between 2007 and 2015 in 16 journals were evaluated to test the scoring instrument. The total sum of all items was used to assess the intensity between sub-fields and journals. Inter-rater agreement was examined using a random sample of 40 articles. Four raters read and evaluated the selected articles using the developed instrument. The scale consisted of 66 items. The total summary score adequately discriminated between research articles according to their study design characteristics. The new instrument could also discriminate between journals according to their statistical intensity. The inter-observer agreement measured by the ICC was 0.88 between all four raters. Individual item analysis showed very high agreement between the rater pairs; the percentage agreement ranged from 91.7% to 95.2%. A reliable and applicable instrument for evaluating the statistical intensity of research papers was developed. It is a helpful tool for comparing the statistical intensity between sub-fields and journals. The novel instrument may be applied in manuscript peer review to identify papers in need of additional statistical review.

  17. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains the results of 26 studies that directly compared three treatment groups A, B and C for the prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analyses, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison; it thereby reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.

  18. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates reuse in other software packages.

  19. Additive hazards regression and partial likelihood estimation for ecological monitoring data across space.

    PubMed

    Lin, Feng-Chang; Zhu, Jun

    2012-01-01

    We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
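
    For concreteness, the additive hazards family referred to above is conventionally written as

        \lambda(t \mid Z) = \lambda_0(t) + \beta^{\mathrm{T}} Z(t)

    with Z(t) the possibly time-varying covariate vector and an unspecified, flexible baseline hazard λ₀(t); the notation here is assumed, not the paper's. The simpler model mentioned in the abstract constrains the baseline to be homogeneous, and the proposed hypothesis test targets exactly that restriction.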

  20. Conditions, interventions, and outcomes in nursing research: a comparative analysis of North American and European/International journals. (1981-1990).

    PubMed

    Abraham, I L; Chalifoux, Z L; Evers, G C; De Geest, S

    1995-04-01

    This study compared the conceptual foci and methodological characteristics of research projects which tested the effects of nursing interventions, published in four general nursing research journals with predominantly North American, and two with predominantly European/International authorship and readership. Dimensions and variables of comparison included: nature of subjects, design issues, statistical methodology, statistical power, and types of interventions and outcomes. Although some differences emerged, the most striking and consistent finding was that there were no statistically significant differences (and thus similarities) in the content foci and methodological parameters of the intervention studies published in both groups of journals. We conclude that European/International and North American nursing intervention studies, as reported in major general nursing research journals, are highly similar in the parameters studied, yet in need of overall improvement. Certainly, there is no empirical support for the common (explicit or implicit) ethnocentric American bias that leadership in nursing intervention research resides with and in the United States of America.

  1. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
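
    In its simplest univariate form (sketched here for orientation; the paper calibrates wind vectors, so a bivariate analogue with a covariance model is used in practice), EMOS posits a Gaussian predictive distribution whose parameters are affine in the ensemble summary statistics:

        y \mid x_1, \dots, x_K \sim \mathcal{N}\!\left(a + b\,\bar{x},\; c + d\,s^2\right)

    where x̄ and s² are the ensemble mean and variance and the coefficients (a, b, c, d) are estimated at the stations, typically by minimum-CRPS fitting, before being spread across the grid.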

  2. New Developments in the Embedded Statistical Coupling Method: Atomistic/Continuum Crack Propagation

    NASA Technical Reports Server (NTRS)

    Saether, E.; Yamakov, V.; Glaessgen, E.

    2008-01-01

    A concurrent multiscale modeling methodology that embeds a molecular dynamics (MD) region within a finite element (FEM) domain has been enhanced. The concurrent MD-FEM coupling methodology uses statistical averaging of the deformation of the atomistic MD domain to provide interface displacement boundary conditions to the surrounding continuum FEM region, which, in turn, generates interface reaction forces that are applied as piecewise constant traction boundary conditions to the MD domain. The enhancement is based on the addition of molecular dynamics-based cohesive zone model (CZM) elements near the MD-FEM interface. The CZM elements are a continuum interpretation of the traction-displacement relationships taken from MD simulations using Cohesive Zone Volume Elements (CZVE). The addition of CZM elements to the concurrent MD-FEM analysis provides a consistent set of atomistically based cohesive properties within the finite element region near the growing crack. Another set of CZVEs is then used to extract revised CZM relationships from the enhanced embedded statistical coupling method (ESCM) simulation of an edge crack under uniaxial loading.

  3. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mărăscu, V.; Dinescu, G.

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Next, an adaptive thresholding method is applied to obtain binary images. The next step consists of the automatic identification of polystyrene bead dimensions using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the two-dimensional Fast Fourier Transform (2D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with features regularly distributed on the surface upon SEM examination.
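
    The three-step pipeline described above maps directly onto standard library calls. Below is a compact OpenCV/NumPy sketch under assumed parameter values (block size, Hough thresholds, radius bounds); the file name is hypothetical and the tuning would need to match the actual SEM magnification.

        import cv2
        import numpy as np

        # Hypothetical file name; any greyscale SEM image of beads would do.
        img = cv2.imread("sem_beads.png", cv2.IMREAD_GRAYSCALE)

        # Step 1: adaptive thresholding of the greyscale image -> binary image.
        binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, 51, 0)

        # Step 2: circular Hough transform to locate beads and estimate radii.
        circles = cv2.HoughCircles(cv2.medianBlur(img, 5), cv2.HOUGH_GRADIENT,
                                   dp=1, minDist=20, param1=100, param2=20,
                                   minRadius=5, maxRadius=40)

        # Step 3: squared modulus of the 2D FFT to quantify ordering/uniformity.
        power = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float)))) ** 2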

  4. Implication of correlations among some common stability statistics - a Monte Carlo simulation.

    PubMed

    Piepho, H P

    1995-03-01

    Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics.
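
    For reference, with x_ij the mean response of genotype i in environment j (e environments) and dots denoting marginal means, the two measures are conventionally defined as

        S_i^2 = \frac{1}{e-1} \sum_{j=1}^{e} \left( x_{ij} - \bar{x}_{i\cdot} \right)^2,
        \qquad
        W_i = \sum_{j=1}^{e} \left( x_{ij} - \bar{x}_{i\cdot} - \bar{x}_{\cdot j} + \bar{x}_{\cdot\cdot} \right)^2

    These are the textbook forms (Wricke's ecovalence); the paper's exact notation may differ.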

  5. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  6. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  7. Identifying natural flow regimes using fish communities

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Wu, Tzu-Ching; Chen, Hung-kwai; Herricks, Edwin E.

    2011-10-01

    Modern water resources management has adopted natural flow regimes as reasonable targets for river restoration and conservation. The characterization of a natural flow regime begins with the development of hydrologic statistics from flow records. However, little guidance exists for defining the period of record needed for regime determination. In Taiwan, the Taiwan Eco-hydrological Indicator System (TEIS), a group of hydrologic statistics selected for fisheries relevance, is being used to evaluate ecological flows. The TEIS consists of a group of hydrologic statistics selected to characterize the relationships between flow and the life history of indigenous species. Using the TEIS and biosurvey data for Taiwan, this paper identifies the length of hydrologic record sufficient for natural flow regime characterization. To define the ecological hydrology of fish communities, this study connected hydrologic statistics to fish communities by using methods to define antecedent conditions that influence existing community composition. A moving average method was applied to TEIS statistics to reflect the effects of antecedent flow condition and a point-biserial correlation method was used to relate fisheries collections with TEIS statistics. The resulting fish species-TEIS (FISH-TEIS) hydrologic statistics matrix takes full advantage of historical flows and fisheries data. The analysis indicates that, in the watersheds analyzed, averaging TEIS statistics for the present year and 3 years prior to the sampling date, termed MA(4), is sufficient to develop a natural flow regime. This result suggests that flow regimes based on hydrologic statistics for the period of record can be replaced by regimes developed for sampled fish communities.
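
    A minimal sketch of the antecedent-flow idea, assuming one value of a TEIS statistic per year and presence/absence fish records: the MA(4) value for each survey averages the sampling year with the three prior years, and a point-biserial correlation relates it to the community data. All numbers and names below are illustrative.

        import numpy as np
        from scipy.stats import pointbiserialr

        # Hypothetical inputs: one TEIS statistic per year, and fish presence (0/1)
        # for surveys taken in the corresponding years.
        teis_annual = np.array([3.1, 2.8, 3.5, 4.0, 3.7, 3.2, 2.9, 3.6])
        survey_years = np.array([3, 4, 5, 6, 7])      # indices into teis_annual
        species_present = np.array([1, 0, 1, 1, 0])

        # MA(4): average of the sampling year and the three prior years.
        ma4 = np.array([teis_annual[y - 3 : y + 1].mean() for y in survey_years])

        r_pb, p = pointbiserialr(species_present, ma4)
        print(f"r_pb = {r_pb:.2f}, p = {p:.3f}")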

  8. The application of the statistical theory of extreme values to gust-load problems

    NASA Technical Reports Server (NTRS)

    Press, Harry

    1950-01-01

    An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
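
    The analytic form referred to above is the Type I (Gumbel) asymptotic distribution of maxima; for the largest gust load or velocity x encountered per unit of operation it reads (standard notation, assumed here)

        F(x) = \exp\left\{ -\exp\left[ -\frac{x - \mu}{\beta} \right] \right\}

    where μ locates the mode and β sets the scale; fitting (μ, β) to per-flight maxima then yields the predicted frequency of encountering larger loads.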

  9. Integrated data management for clinical studies: automatic transformation of data models with semantic annotations for principal investigators, data managers and statisticians.

    PubMed

    Dugas, Martin; Dugas-Breit, Susanne

    2014-01-01

    Design, execution and analysis of clinical studies involve several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a conversion tool for different representations of study data models was designed. It can convert between office formats, EDC and statistics formats. In addition, it supports semantic annotations, which enable precise definitions for data items. A reference implementation is available as the open source package ODMconverter at http://cran.r-project.org.

  10. X-ray studies of quasars with the Einstein Observatory. IV - X-ray dependence on radio emission

    NASA Technical Reports Server (NTRS)

    Worrall, D. M.; Tananbaum, H.; Giommi, P.; Zamorani, G.

    1987-01-01

    The X-ray properties of a sample of 114 radio-loud quasars observed with the Einstein Observatory are examined, and the results are compared with those obtained from a large sample of radio-quiet quasars. The results of statistical analysis of the dependence of X-ray luminosity on combined functions of optical and radio luminosity show that the dependence on both luminosities is important. However, statistically significant differences are found between subsamples of flat radio spectra quasars and steep radio spectra quasars with regard to dependence of X-ray luminosity on only radio luminosity. The data are consistent with radio-loud quasars having a physical component, not directly related to the optical luminosity, which produces the core radio luminosity plus 'extra' X-ray emission.

  11. Differential 3D Mueller-matrix mapping of optically anisotropic depolarizing biological layers

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Grytsyuk, M.; Ushenko, V. O.; Bodnar, G. B.; Vanchulyak, O.; Meglinskiy, I.

    2018-01-01

    The paper consists of two parts. The first part is devoted to the short theoretical basics of the method of differential Mueller-matrix description of the properties of partially depolarizing layers. Experimentally measured maps of the second-order differential matrix of the polycrystalline structure of a histological section of rectum wall tissue are provided, and the values of the statistical moments of the 1st to 4th orders, which characterize the distribution of matrix elements, are defined. The second part of the paper provides a statistical analysis of the birefringence and dichroism of histological sections of the connective tissue component of vaginal wall tissue (normal and with prolapse), and defines objective criteria for the differential diagnostics of vaginal wall pathologies.

  12. Fracking and labquakes

    NASA Astrophysics Data System (ADS)

    Baró, Jordi; Planes, Antoni; Salje, Ekhard K. H.; Vives, Eduard

    2016-12-01

    Local fracture events (or labquakes) during compression of shale rocks have been studied by acoustic emission. They are assumed to simulate quakes induced by hydraulic fracturing (fracking) or other water injection activities. Results are compared with those obtained during compression of porous Vycor glass, which are known to display statistical features very similar to those characterising natural earthquakes. Our acoustic emission results show that labquake energies are power law distributed, which is consistent with recent statistical analysis of fracking-/water injection-induced quakes. The data confirm a Gutenberg-Richter behaviour with exponents larger than the exponents characterising the energy distribution of natural earthquakes. In contrast to natural earthquakes, labquakes in shales do not show time correlations, which indicates that the probability of aftershocks is smaller than in the natural scenario (e.g. during Californian earthquakes).
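
    The two statements above, the power-law energy distribution and Gutenberg-Richter behaviour, are the same statement in different variables (standard seismological notation, assumed here):

        p(E) \propto E^{-\varepsilon}, \qquad \log_{10} N(>M) = a - bM

    Under the usual magnitude-energy scaling log10 E ∝ 1.5 M these are related by ε = 1 + 2b/3, so a larger energy exponent ε for labquakes corresponds to a larger effective b-value than for natural earthquakes.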

  13. A Two-Dimensional Variational Analysis Method for NSCAT Ambiguity Removal: Methodology, Sensitivity, and Tuning

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)

    2001-01-01

    In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values; results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
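
    Schematically (the paper's exact penalty terms and weights are not reproduced here), the cost function has the usual variational form

        J(\mathbf{x}) = (\mathbf{x} - \mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_b) + (\mathbf{y} - H\mathbf{x})^{\mathrm{T}} \mathbf{R}^{-1} (\mathbf{y} - H\mathbf{x}) + J_c

    with x the gridded wind analysis, x_b the background, y the observations with operator H, B and R the error covariances, and J_c the filtering/dynamical constraints; the ambiguity closest in direction to the minimizing x is then selected.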

  14. Measuring trust in nurses - Psychometric properties of the Trust in Nurses Scale in four countries.

    PubMed

    Stolt, Minna; Charalambous, Andreas; Radwin, Laurel; Adam, Christina; Katajisto, Jouko; Lemonidou, Chryssoula; Patiraki, Elisabeth; Sjövall, Katarina; Suhonen, Riitta

    2016-12-01

    The purpose of this study was to examine the psychometric properties of three translated versions of the Trust in Nurses Scale (TNS) and cancer patients' perceptions of trust in nurses in a sample of cancer patients from four European countries. A cross-sectional, cross-cultural, multi-site survey design was used. The data were collected with the Trust in Nurses Scale from patients with different types of malignancies in 17 units within five clinical sites (n = 599) between 09/2012 and 06/2014. Data were analyzed using descriptive and inferential statistics, multivariate methods, and psychometric analyses including exploratory factor analysis, Cronbach's alpha coefficients, item analysis and Rasch analysis. The psychometric properties of the data were consistent in all countries. Within the exploratory factor analysis, the principal component analysis supported the one-component structure (unidimensionality) of the TNS. The internal consistency reliability was acceptable. The Rasch analysis supported the unidimensionality of the TNS cross-culturally. All items of the TNS demonstrated acceptable goodness-of-fit to the Rasch model. Cancer patients trusted nurses to a great extent, although between-country differences were found. The Trust in Nurses Scale proved to be a valid and reliable tool for measuring patients' trust in nurses in oncological settings in international contexts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Trajectory Design for a Single-String Impactor Concept

    NASA Technical Reports Server (NTRS)

    Dono Perez, Andres; Burton, Roland; Stupl, Jan; Mauro, David

    2017-01-01

    This paper introduces a trajectory design for a secondary spacecraft concept to augment science return in interplanetary missions. The concept consists of a single-string probe with a kinetic impactor on board that generates an artificial plume to perform in-situ sampling. The trajectory design was applied to a particular case study that samples ejecta particles from the Jovian moon Europa. Results were validated using statistical analysis. Details regarding the navigation, targeting and disposal challenges related to this concept are presented herein.

  16. Complex degree of mutual anisotropy in diagnostics of biological tissues physiological changes

    NASA Astrophysics Data System (ADS)

    Ushenko, Yu. A.; Dubolazov, O. V.; Karachevtcev, A. O.; Zabolotna, N. I.

    2011-05-01

    To characterize the degree of consistency of the parameters of the optically uniaxial birefringent protein nets of blood plasma, a new parameter, the complex degree of mutual anisotropy, is suggested. A technique for polarization measurement of the coordinate distributions of the complex degree of mutual anisotropy of blood plasma is developed. It is shown that a statistical approach to the analysis of the complex degree of mutual anisotropy distributions of blood plasma is effective in the diagnosis and differentiation of acute inflammation (acute and gangrenous appendicitis).

  17. Complex degree of mutual anisotropy in diagnostics of biological tissues physiological changes

    NASA Astrophysics Data System (ADS)

    Ushenko, Yu. A.; Dubolazov, A. V.; Karachevtcev, A. O.; Zabolotna, N. I.

    2011-09-01

    To characterize the degree of consistency of the parameters of the optically uniaxial birefringent protein nets of blood plasma, a new parameter, the complex degree of mutual anisotropy, is suggested. A technique for polarization measurement of the coordinate distributions of the complex degree of mutual anisotropy of blood plasma is developed. It is shown that a statistical approach to the analysis of the complex degree of mutual anisotropy distributions of blood plasma is effective in the diagnosis and differentiation of acute inflammation (acute and gangrenous appendicitis).

  18. Performance simulation in high altitude platforms (HAPs) communications systems

    NASA Astrophysics Data System (ADS)

    Ulloa-Vásquez, Fernando; Delgado-Penin, J. A.

    2002-07-01

    This paper considers the analysis by simulation of a digital narrowband communication system for a scenario consisting of a High-Altitude aeronautical Platform (HAP) and fixed/mobile terrestrial transceivers. The aeronautical channel is modelled using geometrical (angle of elevation vs. horizontal distance of the terrestrial reflectors) and statistical arguments; under these circumstances, a serially concatenated coded digital transmission is analysed for several hypotheses related to radio-electric coverage areas. The results indicate good feasibility for the proposed communication system.

  19. Transistor-like behavior of single metalloprotein junctions.

    PubMed

    Artés, Juan M; Díez-Pérez, Ismael; Gorostiza, Pau

    2012-06-13

    Single-protein junctions consisting of azurin bridged between a gold substrate and the probe of an electrochemical scanning tunneling microscope (ECSTM) have been obtained by two independent methods that allowed statistical analysis over a large number of measured junctions. Conductance measurements yield (7.3 ± 1.5) × 10^-6 G_0, in agreement with reported estimates using other techniques. Redox gating of the protein with an on/off ratio of 20 was demonstrated, constituting a proof-of-principle of a single redox protein field-effect transistor.

  20. Lifetime assessment analysis of Galileo Li/SO2 cells: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, S.C.; Jaeger, C.D.; Bouchard, D.A.

    Galileo Li/SO2 cells from five lots and five storage temperatures were studied to establish a database from which the performance of flight modules may be predicted. Nondestructive tests consisting of complex impedance analysis and a 15-s pulse were performed on all cells. Chemical analysis was performed on one cell from each lot/storage group, and the remaining cells were discharged at Galileo mission loads. An additional number of cells were placed on high-temperature accelerated aging storage for 6 months and then discharged. All data were statistically analyzed. Results indicate that the present Galileo design Li/SO2 cell will satisfy electrical requirements for a 10-year mission. 10 figs., 4 tabs.

  1. Testing averaged cosmology with type Ia supernovae and BAO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  2. Measuring ambivalence to science

    NASA Astrophysics Data System (ADS)

    Gardner, P. L.

    Ambivalence is a psychological state in which a person holds mixed feelings (positive and negative) towards some psychological object. Standard methods of attitude measurement, such as Likert and semantic differential scales, ignore the possibility of ambivalence; ambivalent responses cannot be distinguished from neutral ones. This neglect arises out of an assumption that positive and negative affects towards a particular psychological object are bipolar, i.e., unidimensional in opposite directions. This assumption is frequently untenable. Conventional item statistics and measures of test internal consistency are ineffective as checks on this assumption; it is possible for a scale to be multidimensional and still display apparent internal consistency. Factor analysis is a more effective procedure. Methods of measuring ambivalence are suggested, and implications for research are discussed.

  3. ParallABEL: an R library for generalized parallelization of genome-wide association studies

    PubMed Central

    2010-01-01

    Background Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. Acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files is arduous. Results Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP) or trait, such as SNP characterization statistics or association test statistics; the input data of this group are the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example the summary statistics of genotype quality for each sample; the input data of this group are the individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group are pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation; the input data of this group are pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Conclusions Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL. PMID:20429914

  4. Network meta-analysis: application and practice using Stata

    PubMed Central

    2017-01-01

    This review aims to organize the concepts of network meta-analysis (NMA) and to demonstrate the analytical process of NMA using Stata software under the frequentist framework. An NMA synthesizes evidence for decision making by evaluating the comparative effectiveness of more than two alternative interventions for the same condition. Before conducting an NMA, three major assumptions (similarity, transitivity, and consistency) should be checked. The statistical analysis consists of five steps. The first step is to draw a network geometry to provide an overview of the network relationships. The second step checks the assumption of consistency. The third step is to make the network forest plot or interval plot in order to illustrate the summary size of the comparative effectiveness among the various interventions. The fourth step calculates cumulative rankings for identifying superiority among the interventions. The last step evaluates publication bias or effect modifiers for a valid inference from the results. The evidence synthesized through these five steps is very useful for evidence-based decision making in healthcare. Thus, NMA should be promoted in order to help guarantee the quality of the healthcare system. PMID:29092392

  5. Regional flux analysis for discovering and quantifying anatomical changes: An application to the brain morphometry in Alzheimer's disease.

    PubMed

    Lorenzi, M; Ayache, N; Pennec, X

    2015-07-15

    In this study we introduce the regional flux analysis, a novel approach to deformation based morphometry based on the Helmholtz decomposition of deformations parameterized by stationary velocity fields. We use the scalar pressure map associated to the irrotational component of the deformation to discover the critical regions of volume change. These regions are used to consistently quantify the associated measure of volume change by the probabilistic integration of the flux of the longitudinal deformations across the boundaries. The presented framework unifies voxel-based and regional approaches, and robustly describes the volume changes at both group-wise and subject-specific level as a spatial process governed by consistently defined regions. Our experiments on the large cohorts of the ADNI dataset show that the regional flux analysis is a powerful and flexible instrument for the study of Alzheimer's disease in a wide range of scenarios: cross-sectional deformation based morphometry, longitudinal discovery and quantification of group-wise volume changes, and statistically powered and robust quantification of hippocampal and ventricular atrophy. Copyright © 2015 Elsevier Inc. All rights reserved.
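
    The decomposition at the heart of the method can be stated compactly (notation assumed here): the stationary velocity field v splits into an irrotational part, the gradient of a scalar potential p, and a divergence-free part,

        \mathbf{v} = \nabla p + \nabla \times \mathbf{A}, \qquad \oint_{\partial R} \mathbf{v} \cdot \mathbf{n}\, dS = \int_{R} \nabla^2 p \, dV

    so only the pressure-like potential p contributes to the net flux across a region boundary ∂R, which is why critical regions of volume change can be read off from the pressure map and quantified by integrating the flux of the longitudinal deformations across their boundaries.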

  6. Evaluation of Informed Choice for contraceptive methods among women attending a family planning program: conceptual development; a case study in Chile.

    PubMed

    Valdés, Patricio R; Alarcon, Ana M; Munoz, Sergio R

    2013-03-01

    To generate and validate a scale to measure the Informed Choice of contraceptive methods among women attending a family health care service in Chile. The study follows a multimethod design that combined expert opinions from 13 physicians, 3 focus groups of 21 women each, and a sample survey of 1,446 women. Data analysis consisted of a qualitative text analysis of group interviews, a factor analysis for construct validity, and kappa statistic and Cronbach alpha to assess scale reliability. The instrument comprises 25 items grouped into six categories: information and orientation, quality of treatment, communication, participation in decision making, expression of reproductive rights, and method access and availability. Internal consistency measured with Cronbach alpha ranged from 0.75 to 0.89 for all subscales (kappa, 0.62; standard deviation, 0.06), and construct validity was demonstrated from the testing of several hypotheses. The use of mixed methods contributed to developing a scale of Informed Choice that was culturally appropriate for assessing the women who participated in the family planning program. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. In situ and in-transit analysis of cosmological simulations

    DOE PAGES

    Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...

    2016-08-24

    Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent ‘offline’ analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own (‘in situ’). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other ‘sidecar’ group, which post-processes it while the simulation continues (‘in-transit’). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
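
    Nyx itself is C++ on BoxLib, but the in-transit group split is easy to illustrate with mpi4py; the two-rank sidecar size and the names below are arbitrary choices for the sketch, not the code's actual configuration.

        from mpi4py import MPI

        world = MPI.COMM_WORLD
        rank, size = world.Get_rank(), world.Get_size()

        # Partition ranks into two disjoint groups: most ranks run the
        # simulation, the last two post-process data as the 'sidecar'
        # (in-transit) group.
        is_sidecar = rank >= size - 2
        group_comm = world.Split(color=1 if is_sidecar else 0, key=rank)

        if is_sidecar:
            # ... receive grids from the simulation group and run analysis
            # (halo finding, PDFs, power spectra) asynchronously
            pass
        else:
            # ... advance the simulation; periodically ship data to the sidecar
            pass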

  8. A critical analysis of high-redshift, massive, galaxy clusters. Part I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, Ben; Jimenez, Raul; Verde, Licia

    2012-02-01

    We critically investigate current statistical tests applied to high-redshift clusters of galaxies in order to test the standard cosmological model and describe their range of validity. We carefully compare a sample of high-redshift, massive, galaxy clusters with realistic Poisson sample simulations of the theoretical mass function, which include the effect of Eddington bias. We compare the observations and simulations using the following statistical tests: the distributions of ensemble and individual existence probabilities (in the > M, > z sense), the redshift distributions, and the 2d Kolmogorov-Smirnov test. Using seemingly rare clusters from Hoyle et al. (2011) and Jee et al. (2011), and assuming the same survey geometry as in Jee et al. (2011, which is less conservative than Hoyle et al. 2011), we find that the (> M, > z) existence probabilities of all clusters are fully consistent with ΛCDM. However, assuming the same survey geometry, we use the 2d K-S test probability to show that the observed clusters are not consistent with being the least probable clusters from simulations at > 95% confidence, and are also not consistent with being a random selection of clusters, which may be caused by the non-trivial selection function and survey geometry. The tension can be removed if we examine only an X-ray selected subsample, with simulations performed assuming a modified survey geometry.

  9. Gaussian statistics for palaeomagnetic vectors

    USGS Publications Warehouse

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  10. Gaussian statistics for palaeomagnetic vectors

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  11. Analysis of data collected from right and left limbs: Accounting for dependence and improving statistical efficiency in musculoskeletal research.

    PubMed

    Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C

    2018-01-01

    Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
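
    A minimal sketch of the "Method 4" idea, using statsmodels on toy long-format data. The column names, group sizes, and random-intercept-only structure are illustrative assumptions, not the study's exact model; a fuller specification could add crossed random effects for foot and region via re_formula/vc_formula:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      rows = []
      for subj in range(30):                          # 30 toy participants
          base = rng.normal(200, 20)                  # subject-level baseline pressure
          for region in range(7):
              for foot in (0, 1):
                  for trial in range(3):              # keep all trials, do not average
                      rows.append((subj, region, foot, trial,
                                   base + 10 * region + rng.normal(0, 5)))
      df = pd.DataFrame(rows, columns=["id", "region", "foot", "trial", "pressure"])
      df["group"] = np.where(df["id"] < 15, "control", "case")

      # A random intercept per participant accounts for the paired-limb dependence.
      model = smf.mixedlm("pressure ~ group + C(region) + C(foot)", df, groups=df["id"])
      print(model.fit().summary())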

  12. Antithrombotic drug therapy for IgA nephropathy: a meta analysis of randomized controlled trials.

    PubMed

    Liu, Xiu-Juan; Geng, Yan-Qiu; Xin, Shao-Nan; Huang, Guo-Ming; Tu, Xiao-Wen; Ding, Zhong-Ru; Chen, Xiang-Mei

    2011-01-01

    Antithrombotic agents, including antiplatelet agents, anticoagulants and thrombolysis agents, have been widely used in the management of immunoglobulin A (IgA) nephropathy in Chinese and Japanese populations. To systematically evaluate the effects of antithrombotic agents for IgA nephropathy. Data sources consisted of MEDLINE, EMBASE, the Cochrane Library, Chinese Biomedical Literature Database (CBM), Chinese Science and Technology Periodicals Databases (CNKI) and Japana Centra Revuo Medicina (http://www.jamas.gr.jp) up to April 5, 2011. The quality of the studies was evaluated from the intention-to-treat analysis and allocation concealment, as well as by the Jadad method. Meta-analyses were performed on the outcomes of proteinuria and renal function. Six articles met the predetermined inclusion criteria. Antithrombotic agents showed statistically significant effects on proteinuria (p < 0.0001) but not on the protection of renal function (p = 0.07). The pooled risk ratio for proteinuria was 0.53 [95% confidence interval (CI): 0.41-0.68; I² = 0%] and for renal function it was 0.42 (95% CI: 0.17-1.06; I² = 72%). Subgroup analysis showed that dipyridamole was beneficial for proteinuria (p = 0.0003) but had no significant effect on protecting renal function. Urokinase had statistically significant effects both on the reduction of proteinuria (p = 0.0005) and on protecting renal function (p < 0.00001) when compared with the control group. Antithrombotic agents had statistically significant effects on the reduction of proteinuria but not on the protection of renal function in patients with IgA nephropathy (IgAN). Urokinase had statistically significant effects both on the reduction of proteinuria and on protecting renal function. Urokinase was shown to be a promising medication and should be investigated further.
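
    The abstract does not state which pooling model was used; below is a minimal sketch of a standard DerSimonian-Laird random-effects pooling of study-level log risk ratios, with made-up effect sizes and standard errors standing in for the six included trials:

      import numpy as np

      def dersimonian_laird(log_rr, se):
          """Pool log risk ratios with DerSimonian-Laird random effects;
          returns the pooled estimate, its standard error, and I^2 (%)."""
          w = 1.0 / se**2                          # fixed-effect weights
          fixed = np.sum(w * log_rr) / np.sum(w)
          Q = np.sum(w * (log_rr - fixed)**2)      # Cochran's Q
          k = len(log_rr)
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (Q - (k - 1)) / c)       # between-study variance
          w_star = 1.0 / (se**2 + tau2)            # random-effects weights
          pooled = np.sum(w_star * log_rr) / np.sum(w_star)
          se_pooled = np.sqrt(1.0 / np.sum(w_star))
          i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
          return pooled, se_pooled, i2

      # Illustrative (made-up) per-study risk ratios and standard errors:
      log_rr = np.log(np.array([0.55, 0.48, 0.60, 0.50, 0.58, 0.47]))
      se = np.array([0.20, 0.25, 0.18, 0.30, 0.22, 0.26])
      pooled, se_p, i2 = dersimonian_laird(log_rr, se)
      print(np.exp(pooled), np.exp(pooled - 1.96 * se_p), np.exp(pooled + 1.96 * se_p), i2)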

  13. Explaining nitrate pollution pressure on the groundwater resource in Kinshasa using a multivariate statistical modelling approach

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik

    2013-04-01

    Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered a good tracer for other pollution threats. The analysis is made in terms of readily available physical attributes, using a statistical modelling approach. For the nitrate data, use was made of a historical groundwater quality assessment study, for which the data were re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability for nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region is the most vulnerable area. This area consists of urban land use with poor sanitation. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and the intrinsic and specific vulnerability maps. For the statistical modelling, use was made of multiple regression and regression tree analysis. The results demonstrated the significant impact of land-use variables on Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: groundwater, isotopic, Kinshasa, modelling, pollution, physico-chemical.
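
    A minimal sketch of the regression tree component on toy data: groundwater nitrate is predicted from numerically encoded physical attributes. The attribute names, the toy generating rule, and the tree depth are illustrative assumptions, not the study's variables:

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeRegressor, export_text

      rng = np.random.default_rng(1)
      n = 300
      df = pd.DataFrame({
          "urban_fraction": rng.uniform(0, 1, n),        # land-use attribute
          "slope_deg": rng.uniform(0, 20, n),            # topographic attribute
          "depth_to_water_m": rng.uniform(2, 60, n),     # hydrogeological attribute
      })
      # Toy rule: shallow wells in urbanized areas carry more nitrate.
      df["nitrate_mg_l"] = (40 * df["urban_fraction"]
                            - 0.4 * df["depth_to_water_m"]
                            + rng.normal(0, 5, n)).clip(lower=0)

      X = df[["urban_fraction", "slope_deg", "depth_to_water_m"]]
      tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, df["nitrate_mg_l"])
      print(export_text(tree, feature_names=list(X.columns)))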

  14. Anticoagulant vs. antiplatelet therapy in patients with cryptogenic stroke and patent foramen ovale: an individual participant data meta-analysis.

    PubMed

    Kent, David M; Dahabreh, Issa J; Ruthazer, Robin; Furlan, Anthony J; Weimar, Christian; Serena, Joaquín; Meier, Bernhard; Mattle, Heinrich P; Di Angelantonio, Emanuele; Paciaroni, Maurizio; Schuchlenz, Herwig; Homma, Shunichi; Lutz, Jennifer S; Thaler, David E

    2015-09-14

    The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
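
    A minimal sketch of the inverse-probability-of-treatment-weighting step on toy data. The covariates, sample size, and logistic propensity model are illustrative assumptions; in the study the weights entered database-specific Cox models for the time-to-event outcome:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      X = rng.normal(size=(500, 4))             # baseline covariates (toy)
      treated = rng.integers(0, 2, size=500)    # 1 = OAC, 0 = APT (toy assignment)

      ps_model = LogisticRegression().fit(X, treated)
      ps = ps_model.predict_proba(X)[:, 1]      # P(treatment = OAC | covariates)

      # IPTW weight: 1/ps for treated patients, 1/(1 - ps) for controls.
      weights = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
      # These weights would then be supplied to a weighted Cox regression.
      print(weights.mean(), weights.min(), weights.max())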

  15. Spatio-temporal analysis of sub-hourly rainfall over Mumbai, India: Is statistical forecasting futile?

    NASA Astrophysics Data System (ADS)

    Singh, Jitendra; Sekharan, Sheeba; Karmakar, Subhankar; Ghosh, Subimal; Zope, P. E.; Eldho, T. I.

    2017-04-01

    Mumbai, the commercial and financial capital of India, experiences incessant annual rain episodes, mainly attributable to erratic rainfall patterns during monsoons and an urban heat-island effect due to escalating urbanization, leading to increasing vulnerability to frequent flooding. After the infamous 2005 Mumbai torrential rains, when only two rain-gauging stations existed, the governing civic body, the Municipal Corporation of Greater Mumbai (MCGM), installed 26 automatic weather stations (AWS) in June 2006 (MCGM 2007), later increased to 60 AWS. A comprehensive statistical analysis to understand the spatio-temporal pattern of rainfall over Mumbai or any other coastal city in India had never been attempted before. In the current study, a thorough analysis of available rainfall data for 2006-2014 from these stations was performed; the 2013-2014 sub-hourly data from 26 AWS were found useful for further analyses due to their consistency and continuity. The correlogram cloud indicated no pattern of significant correlation from the closest to the farthest gauging station relative to the base station; this impression was also supported by the semivariogram plots. Gini index values, a statistical measure of temporal non-uniformity, were above 0.8 for a visible majority of stations and showed an increasing trend at most gauging stations; this led us to conclude that the inconsistency in daily rainfall gradually increased as the monsoon progressed. Interestingly, night rainfall was lower than daytime rainfall. The pattern-less high spatio-temporal variation observed in the Mumbai rainfall data signifies the futility of independently applying advanced statistical techniques, and thus calls for the simultaneous inclusion of physics-centred models such as different meso-scale numerical weather prediction systems, particularly the Weather Research and Forecasting (WRF) model.
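
    A minimal sketch of the Gini index as used here: a measure of how unevenly a season's rainfall is spread across days (0 for perfectly uniform rain, values near 1 when rain is concentrated in a few days). The daily totals below are made up:

      import numpy as np

      def gini(x):
          """Gini coefficient of a non-negative array (e.g., daily rainfall totals)."""
          x = np.sort(np.asarray(x, dtype=float))
          n = x.size
          cum = np.cumsum(x)
          return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

      daily_rain = np.array([0, 0, 120, 5, 0, 80, 0, 0, 200, 10])  # toy mm/day
      print(gini(daily_rain))  # values above ~0.8 indicate highly uneven rainfall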

  16. Detecting most influencing courses on students grades using block PCA

    NASA Astrophysics Data System (ADS)

    Othman, Osama H.; Gebril, Rami Salah

    2014-12-01

    One of the modern solutions adopted in dealing with the problem of a large number of variables in statistical analyses is Block Principal Component Analysis (Block PCA). This modified technique can be used to reduce the vertical dimension (variables) of the data matrix Xn×p by selecting a smaller number of variables (say m) containing most of the statistical information. These selected variables can then be employed in further investigations and analyses. Block PCA is an adapted multistage technique of the original PCA. It involves the application of Cluster Analysis (CA) and variable selection through sub-principal component scores (PCs). The application of Block PCA in this paper is a modified version of the original work of Liu et al. (2002). The main objective was to apply PCA to each group of variables (established using cluster analysis), instead of involving the whole large set of variables, which was shown to be unreliable. In this work, Block PCA is used to reduce the size of a large data matrix ((n = 41) × (p = 251)) consisting of the Grade Point Averages (GPAs) of students in 251 courses (variables) in the Faculty of Science at Benghazi University. In other words, we construct a smaller analytical data matrix of the students' GPAs with fewer variables containing most of the variation (statistical information) in the original database. By applying Block PCA, 12 courses were found to `absorb' most of the variation or influence from the original data matrix, and hence are worth keeping for future statistical exploration and analytical studies. In addition, the course Independent Study (Math.) was found to be the most influential course on students' GPAs among the 12 selected courses.
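
    A minimal sketch of one reading of this workflow: cluster the courses into blocks, run PCA within each block, and keep the course with the largest loading on each block's first component. The correlation-based clustering criterion and the top-loading selection rule are illustrative assumptions, and the random matrix stands in for the 41 × 251 GPA data:

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      X = rng.normal(size=(41, 251))        # toy stand-in: 41 students x 251 courses

      # Cluster variables (courses) by correlation distance.
      corr = np.corrcoef(X.T)
      dist = 1 - np.abs(corr)
      Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
      blocks = fcluster(Z, t=12, criterion="maxclust")   # 12 blocks, as in the study

      selected = []
      for b in np.unique(blocks):
          idx = np.where(blocks == b)[0]
          pca = PCA(n_components=1).fit(X[:, idx])
          # Keep the course with the largest absolute loading on the block's PC1.
          selected.append(idx[np.argmax(np.abs(pca.components_[0]))])
      print(sorted(selected))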

  17. Consistency of extreme flood estimation approaches

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodological procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study provides useful information about the statistical and hydrological processes involved in different methods. In this study, the following three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.

  18. In vivo Comet assay--statistical analysis and power calculations of mice testicular cells.

    PubMed

    Hansen, Merete Kjær; Sharma, Anoop Kumar; Dybdahl, Marianne; Boberg, Julie; Kulahci, Murat

    2014-11-01

    The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells. Copyright © 2014 Elsevier B.V. All rights reserved.
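
    A minimal sketch of how such power curves can be generated under a normal approximation, once a mixed model has supplied variance components for animals and gels. The variance values below are illustrative, not the study's estimates:

      import numpy as np
      from scipy.stats import norm

      sigma2_animal, sigma2_gel = 0.15, 0.05      # toy variance components (log scale)
      fold_change, alpha = 2.0, 0.05
      delta = np.log(fold_change)                 # effect size on the log scale

      for n_animals in range(3, 13):
          for n_gels in (1, 2, 3):
              # Variance of the difference between two group means of
              # median log(% tail DNA), with n_gels gels per animal.
              se = np.sqrt(2 * (sigma2_animal + sigma2_gel / n_gels) / n_animals)
              power = norm.sf(norm.ppf(1 - alpha / 2) - delta / se)
              print(n_animals, n_gels, round(power, 2))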

  19. Gestational surrogacy: Viewpoint of Iranian infertile women

    PubMed Central

    Rahmani, Azad; Sattarzadeh, Nilofar; Gholizadeh, Leila; Sheikhalipour, Zahra; Allahbakhshian, Atefeh; Hassankhani, Hadi

    2011-01-01

    BACKGROUND: Surrogacy is a popular form of assisted reproductive technology, of which only the gestational form is approved by most of the religious scholars in Iran. Little evidence exists about Iranian infertile women's viewpoint regarding gestational surrogacy. AIM: To assess the viewpoint of Iranian infertile women toward gestational surrogacy. SETTING AND DESIGN: This descriptive study was conducted at the infertility clinic of Tabriz University of Medical Sciences, Iran. MATERIALS AND METHODS: The study sample consisted of 238 infertile women who were selected using an eligibility-based sampling method. Data were collected using a researcher-developed questionnaire that included 25 items based on a five-point Likert scale. STATISTICAL ANALYSIS: Data analysis was conducted by SPSS statistical software using descriptive statistics. RESULTS: The viewpoint of 214 women (89.9%) was positive. Thirty-six women (15.1%) considered gestational surrogacy to be against their religious beliefs; 170 women (71.4%) did not regard the commissioning couple as the owners of the baby; 160 women (67.2%) said that children born through surrogacy would be better off not knowing about it; and 174 women (73.1%) believed that children born through surrogacy will face mental problems. CONCLUSION: Iranian infertile women have a positive viewpoint regarding surrogacy. However, to increase the acceptability of surrogacy among infertile women, further efforts are needed. PMID:22346081

  20. Thermal heterogeneity within aqueous materials quantified by 1H NMR spectroscopy: Multiparametric validation in silico and in vitro

    NASA Astrophysics Data System (ADS)

    Lutz, Norbert W.; Bernard, Monique

    2018-02-01

    We recently suggested a new paradigm for statistical analysis of thermal heterogeneity in (semi-)aqueous materials by 1H NMR spectroscopy, using water as a temperature probe. Here, we present a comprehensive in silico and in vitro validation that demonstrates the ability of this new technique to provide accurate quantitative parameters characterizing the statistical distribution of temperature values in a volume of (semi-)aqueous matter. First, line shape parameters of numerically simulated water 1H NMR spectra are systematically varied to study a range of mathematically well-defined temperature distributions. Then, corresponding models based on measured 1H NMR spectra of agarose gel are analyzed. In addition, dedicated samples based on hydrogels or biological tissue are designed to produce temperature gradients changing over time, and dynamic NMR spectroscopy is employed to analyze the resulting temperature profiles at sub-second temporal resolution. Accuracy and consistency of the previously introduced statistical descriptors of temperature heterogeneity are determined: weighted median and mean temperature, standard deviation, temperature range, temperature mode(s), kurtosis, skewness, entropy, and relative areas under temperature curves. Potential and limitations of this method for quantitative analysis of thermal heterogeneity in (semi-)aqueous materials are discussed in view of prospective applications in materials science as well as biology and medicine.
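
    A minimal sketch of several of the listed descriptors, computed for a toy temperature distribution standing in for one derived from a water 1H NMR line shape (the Gaussian intensity profile and the temperature axis are assumptions for illustration):

      import numpy as np

      T = np.linspace(35.0, 45.0, 201)               # temperature axis (deg C)
      w = np.exp(-0.5 * ((T - 39.0) / 1.2)**2)       # toy spectral intensity profile
      w = w / w.sum()                                # normalize to a probability

      mean = np.sum(w * T)                           # weighted mean temperature
      sd = np.sqrt(np.sum(w * (T - mean)**2))        # standard deviation
      skew = np.sum(w * (T - mean)**3) / sd**3       # skewness
      kurt = np.sum(w * (T - mean)**4) / sd**4       # kurtosis
      median = T[np.searchsorted(np.cumsum(w), 0.5)] # weighted median
      entropy = -np.sum(w[w > 0] * np.log(w[w > 0])) # distribution entropy
      print(mean, sd, skew, kurt, median, entropy)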

  1. On the distribution of career longevity and the evolution of home-run prowess in professional baseball

    NASA Astrophysics Data System (ADS)

    Petersen, Alexander M.; Jung, Woo-Sung; Stanley, H. Eugene

    2008-09-01

    Statistical analysis is a major aspect of baseball, from player averages to historical benchmarks and records. Much of baseball fanfare is based around players exceeding the norm, some in a single game and others over a long career. Career statistics serve as a metric for classifying players and establishing their historical legacy. However, the concept of records and benchmarks assumes that the level of competition in baseball is stationary in time. Here we show that power law probability density functions, a hallmark of many complex systems that are driven by competition, govern career longevity in baseball. We also find similar power laws in the density functions of all major performance metrics for pitchers and batters. The use of performance-enhancing drugs has a dark history, emerging as a problem for both amateur and professional sports. We find statistical evidence consistent with the use of performance-enhancing drugs in the analysis of home runs hit by players in the last 25 years. This is corroborated by the findings of the Mitchell Report (2007), a two-year investigation into the use of illegal steroids in Major League Baseball, which recently revealed that over 5 percent of Major League Baseball players tested positive for performance-enhancing drugs in an anonymous 2003 survey.
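
    A minimal sketch of a maximum-likelihood (Hill) fit of a power-law exponent to career-length data above a threshold x_min; the data are synthetic Pareto draws, not the baseball statistics:

      import numpy as np

      rng = np.random.default_rng(5)
      careers = rng.pareto(1.5, size=5000) + 1.0           # toy career lengths, x_min = 1
      x_min = 1.0
      tail = careers[careers >= x_min]
      # MLE for the exponent alpha of p(x) ~ x^(-alpha) for x >= x_min.
      alpha = 1.0 + tail.size / np.sum(np.log(tail / x_min))
      print(alpha)   # should recover roughly 2.5 (= 1 + 1.5) for this toy sample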

  2. The neuronal correlates of intranasal trigeminal function – An ALE meta-analysis of human functional brain imaging data

    PubMed Central

    Albrecht, Jessica; Kopietz, Rainer; Frasnelli, Johannes; Wiesmann, Martin; Hummel, Thomas; Lundström, Johan N.

    2009-01-01

    Almost every odor we encounter in daily life has the capacity to produce a trigeminal sensation. Surprisingly, few functional imaging studies exploring human neuronal correlates of intranasal trigeminal function exist, and results are to some degree inconsistent. We utilized activation likelihood estimation (ALE), a quantitative voxel-based meta-analysis tool, to analyze functional imaging data (fMRI/PET) following intranasal trigeminal stimulation with carbon dioxide (CO2), a stimulus known to exclusively activate the trigeminal system. Meta-analysis tools are able to identify activations common across studies, thereby enabling activation mapping with higher certainty. Activation foci of nine studies utilizing trigeminal stimulation were included in the meta-analysis. We found significant ALE scores, thus indicating consistent activation across studies, in the brainstem, ventrolateral posterior thalamic nucleus, anterior cingulate cortex, insula, precentral gyrus, as well as in primary and secondary somatosensory cortices – a network known for the processing of intranasal nociceptive stimuli. Significant ALE values were also observed in the piriform cortex, insula, and the orbitofrontal cortex, areas known to process chemosensory stimuli, and in association cortices. Additionally, the trigeminal ALE statistics were directly compared with ALE statistics originating from olfactory stimulation, demonstrating considerable overlap in activation. In conclusion, the results of this meta-analysis map the human neuronal correlates of intranasal trigeminal stimulation with high statistical certainty and demonstrate that the cortical areas recruited during the processing of intranasal CO2 stimuli include those outside traditional trigeminal areas. Moreover, through illustrations of the considerable overlap between brain areas that process trigeminal and olfactory information, these results demonstrate the interconnectivity of flavor processing. PMID:19913573

  3. Statistical analysis of the horizontal divergent flow in emerging solar active regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toriumi, Shin; Hayashi, Keiji; Yokoyama, Takaaki, E-mail: shin.toriumi@nao.ac.jp

    Solar active regions (ARs) are thought to be formed by magnetic fields from the convection zone. Our flux emergence simulations revealed that a strong horizontal divergent flow (HDF) of unmagnetized plasma appears at the photosphere before the flux begins to emerge. In our earlier study, we analyzed HMI data for a single AR and confirmed the presence of this precursor plasma flow in the actual Sun. In this paper, as an extension of our earlier study, we conducted a statistical analysis of the HDFs to further investigate their characteristics and better determine their properties. From SDO/HMI data, we picked up 23 flux emergence events over a period of 14 months, the total flux of which ranges from 10²⁰ to 10²² Mx. Of the 23 selected events, 6 clear HDFs were detected by the method we developed in our earlier study, and 7 HDFs detected by visual inspection were added to this statistical analysis. We found that the duration of the HDF is on average 61 minutes and the maximum HDF speed is on average 3.1 km s⁻¹. We also estimated the rising speed of the subsurface magnetic flux to be 0.6-1.4 km s⁻¹. These values are highly consistent with our previous one-event analysis as well as our simulation results. The observational results lead us to the conclusion that the HDF is a rather common feature in the earliest phase of AR emergence. Moreover, our HDF analysis has the capability of determining the subsurface properties of emerging fields that cannot be directly measured.

  4. Incorporating Budget Impact Analysis in the Implementation of Complex Interventions: A Case of an Integrated Intervention for Multimorbid Patients within the CareWell Study.

    PubMed

    Soto-Gordoa, Myriam; Arrospide, Arantzazu; Merino Hernández, Marisa; Mora Amengual, Joana; Fullaondo Zabala, Ane; Larrañaga, Igor; de Manuel, Esteban; Mar, Javier

    2017-01-01

    To develop a framework for the management of complex health care interventions within the Deming continuous improvement cycle and to test the framework in the case of an integrated intervention for multimorbid patients in the Basque Country within the CareWell project. Statistical analysis alone, although necessary, may not always represent the practical significance of the intervention. Thus, to ascertain the true economic impact of the intervention, the statistical results can be integrated into the budget impact analysis. The intervention of the case study consisted of a comprehensive approach that integrated new provider roles and new technological infrastructure for multimorbid patients, with the aim of reducing patient decompensations by 10% over 5 years. The study period was 2012 to 2020. Given the aging of the general population, the conventional scenario predicts an increase of 21% in the health care budget for care of multimorbid patients during the study period. With a successful intervention, this figure should drop to 18%. The statistical analysis, however, showed no significant differences in costs either in primary care or in hospital care between 2012 and 2014. The real costs in 2014 were by far closer to those in the conventional scenario than to the reductions expected in the objective scenario. The present implementation should be reappraised, because the present expenditure did not move closer to the objective budget. This work demonstrates the capacity of budget impact analysis to enhance the implementation of complex interventions. Its integration in the context of the continuous improvement cycle is transferable to other contexts in which implementation depth and time are important. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. The effect of ion-exchange purification on the determination of plutonium at the New Brunswick Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, W.G.; Spaletto, M.I.; Lewis, K.

    The method of plutonium (Pu) determination at the New Brunswick Laboratory (NBL) consists of a combination of ion-exchange purification followed by controlled-potential coulometric analysis (IE/CPC). The purpose of the present report is to quantify any detectable Pu loss occurring in the ion-exchange (IE) purification step, which would cause a negative bias in the NBL method for Pu analysis. The magnitude of any such loss would be contained within the reproducibility (0.05%) of the IE/CPC method, which utilizes a state-of-the-art autocoulometer developed at NBL. When the NBL IE/CPC method is used for Pu analysis, any loss in ion-exchange purification (<0.05%) is confounded with the repeatability of the ion exchange and the precision of the CPC analysis technique (<0.05%). Consequently, detecting a bias in the IE/CPC method due to the IE alone, using the IE/CPC method itself, requires that many randomized analyses of a single material be performed over time and that statistical analysis of the data be performed. The initial approach described in this report to quantify any IE loss was an independent method, isotope dilution mass spectrometry; however, the number of analyses performed was insufficient to assign a statistically significant value to the IE loss (<0.02% of 10 mg samples of Pu). The second method used for quantifying any IE loss of Pu was multiple ion exchanges of the same Pu aliquant; the small number of analyses possible per individual IE, together with the column-to-column variability over multiple ion exchanges, prevented statistical detection of any loss of <0.05%. 12 refs.

  6. A simulator for evaluating methods for the detection of lesion-deficit associations

    NASA Technical Reports Server (NTRS)

    Megalooikonomou, V.; Davatzikos, C.; Herskovits, E. H.

    2000-01-01

    Although much has been learned about the functional organization of the human brain through lesion-deficit analysis, the variety of statistical and image-processing methods developed for this purpose precludes a closed-form analysis of the statistical power of these systems. Therefore, we developed a lesion-deficit simulator (LDS), which generates artificial subjects, each of which consists of a set of functional deficits, and a brain image with lesions; the deficits and lesions conform to predefined distributions. We used probability distributions to model the number, sizes, and spatial distribution of lesions, to model the structure-function associations, and to model registration error. We used the LDS to evaluate, as examples, the effects of the complexities and strengths of lesion-deficit associations, and of registration error, on the power of lesion-deficit analysis. We measured the numbers of recovered associations from these simulated data, as a function of the number of subjects analyzed, the strengths and number of associations in the statistical model, the number of structures associated with a particular function, and the prior probabilities of structures being abnormal. The number of subjects required to recover the simulated lesion-deficit associations was found to have an inverse relationship to the strength of associations, and to the smallest probability in the structure-function model. The number of structures associated with a particular function (i.e., the complexity of associations) had a much greater effect on the performance of the analysis method than did the total number of associations. We also found that registration error of 5 mm or less reduces the number of associations discovered by approximately 13% compared to perfect registration. The LDS provides a flexible framework for evaluating many aspects of lesion-deficit analysis.
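
    A minimal sketch of the generative idea behind such a simulator, under an assumed noisy-OR structure-function model. The probabilities, the number of structures, and the noisy-OR link are illustrative assumptions, not the LDS's actual distributions:

      import numpy as np

      rng = np.random.default_rng(7)
      n_subjects, n_structures = 200, 10
      p_lesion = rng.uniform(0.05, 0.3, n_structures)   # prior abnormality probabilities
      assoc = np.zeros(n_structures)
      assoc[[2, 5]] = 0.8                               # structures 2 and 5 drive the deficit

      lesions = rng.random((n_subjects, n_structures)) < p_lesion
      p_deficit = 1 - np.prod(1 - assoc * lesions, axis=1)   # noisy-OR association
      deficit = rng.random(n_subjects) < p_deficit
      # An analysis method would now try to recover structures 2 and 5 from
      # (lesions, deficit); registration error could be emulated by randomly
      # relabelling a fraction of lesions to neighbouring structures.
      print(lesions.shape, deficit.mean())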

  7. Descriptive data analysis.

    PubMed

    Thompson, Cheryl Bagley

    2009-01-01

    This 13th article of the Basics of Research series is the first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.

  8. Data free inference with processed data products

    DOE PAGES

    Chowdhary, K.; Najm, H. N.

    2014-07-12

    Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
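
    A minimal sketch of a simplified reading of this idea: when only a summary (mean and 95% CI) is reported, generate synthetic data sets consistent with it, keep those whose summary matches (the approximate Bayesian computation step), run the Bayesian update on each, and pool the resulting posteriors. A normal likelihood and flat prior are assumed for brevity; the paper's maximum-entropy construction is more general:

      import numpy as np

      rng = np.random.default_rng(4)
      reported_mean, reported_half_ci, n_obs = 3.2, 0.4, 20
      sigma = reported_half_ci * np.sqrt(n_obs) / 1.96   # data sd implied by the CI

      pooled = []
      for _ in range(200):                               # candidate synthetic data sets
          data = rng.normal(reported_mean, sigma, size=n_obs)
          if abs(data.mean() - reported_mean) < 0.05:    # ABC acceptance step
              # Conjugate normal posterior for the mean under a flat prior.
              pooled.append(rng.normal(data.mean(), sigma / np.sqrt(n_obs), size=100))
      post = np.concatenate(pooled)                      # pooled, averaged posterior
      print(post.mean(), np.percentile(post, [2.5, 97.5]))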

  9. A Proof of Concept Study of Function-Based Statistical Analysis of fNIRS Data: Syntax Comprehension in Children with Specific Language Impairment Compared to Typically-Developing Controls.

    PubMed

    Fu, Guifang; Wan, Nicholas J A; Baker, Joseph M; Montgomery, James W; Evans, Julia L; Gillam, Ronald B

    2016-01-01

    Functional near infrared spectroscopy (fNIRS) is a neuroimaging technology that enables investigators to indirectly monitor brain activity in vivo through relative changes in the concentration of oxygenated and deoxygenated hemoglobin. One of the key features of fNIRS is its superior temporal resolution, with dense measurements over very short periods of time (100 ms increments). Unfortunately, most statistical analysis approaches in the existing literature have not fully utilized the high temporal resolution of fNIRS. For example, many analysis procedures are based on linearity assumptions that only extract partial information, thereby neglecting the overall dynamic trends in fNIRS trajectories. The main goal of this article is to assess the ability of a functional data analysis (FDA) approach for detecting significant differences in hemodynamic responses recorded by fNIRS. Children with and without specific language impairment (SLI) wore two 3 × 5 fNIRS caps situated over the bilateral parasylvian areas as they completed a language comprehension task. FDA was used to decompose the high-dimensional hemodynamic curves into the mean function and a few eigenfunctions to represent the overall trend and variation structures over time. Compared with the popular general linear model (GLM) approach, we did not assume any parametric structure and let the data speak for themselves. This analysis identified significant differences between the case and control groups in the oxygenated hemodynamic mean trends in the bilateral inferior frontal and left inferior posterior parietal brain regions. We also detected significant group differences in the deoxygenated hemodynamic mean trends in the right inferior posterior parietal cortex and left temporal parietal junction. These findings, using dramatically different approaches, experimental designs, data sets, and foci, were consistent with several other reports, confirming group differences in the importance of these two areas for syntax comprehension. The proposed FDA was consistent with the temporal characteristics of fNIRS, thus providing an alternative methodology for fNIRS analyses.
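
    A minimal sketch of the functional-PCA decomposition at the heart of such an analysis: curves are reduced to a mean function plus a few eigenfunctions of the sample covariance, and each subject receives a small set of scores. The toy curves below stand in for fNIRS hemodynamic time series:

      import numpy as np

      rng = np.random.default_rng(6)
      t = np.linspace(0, 20, 200)                          # time (s)
      curves = (np.sin(t / 3)[None, :]                     # shared mean trend
                + 0.3 * rng.normal(size=(30, 1)) * np.cos(t / 5)[None, :]
                + 0.05 * rng.normal(size=(30, 200)))       # 30 toy subjects

      mean_fn = curves.mean(axis=0)
      centered = curves - mean_fn
      # Eigenfunctions = eigenvectors of the sample covariance over time points.
      cov = centered.T @ centered / (curves.shape[0] - 1)
      eigvals, eigvecs = np.linalg.eigh(cov)
      eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # descending order
      explained = eigvals[:3] / eigvals.sum()              # variance explained by 3 FPCs
      scores = centered @ eigvecs[:, :3]                   # per-subject FPC scores
      print(explained, scores.shape)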

  10. The chemiluminescence based Ziplex automated workstation focus array reproduces ovarian cancer Affymetrix GeneChip expression profiles.

    PubMed

    Quinn, Michael C J; Wilson, Daniel J; Young, Fiona; Dempsey, Adam A; Arcand, Suzanna L; Birch, Ashley H; Wojnarowicz, Paulina M; Provencher, Diane; Mes-Masson, Anne-Marie; Englert, David; Tonin, Patricia N

    2009-07-06

    As gene expression signatures may serve as biomarkers, there is a need to develop technologies based on mRNA expression patterns that are adaptable for translational research. Xceed Molecular has recently developed a Ziplex technology, that can assay for gene expression of a discrete number of genes as a focused array. The present study has evaluated the reproducibility of the Ziplex system as applied to ovarian cancer research of genes shown to exhibit distinct expression profiles initially assessed by Affymetrix GeneChip analyses. The new chemiluminescence-based Ziplex gene expression array technology was evaluated for the expression of 93 genes selected based on their Affymetrix GeneChip profiles as applied to ovarian cancer research. Probe design was based on the Affymetrix target sequence that favors the 3' UTR of transcripts in order to maximize reproducibility across platforms. Gene expression analysis was performed using the Ziplex Automated Workstation. Statistical analyses were performed to evaluate reproducibility of both the magnitude of expression and differences between normal and tumor samples by correlation analyses, fold change differences and statistical significance testing. Expressions of 82 of 93 (88.2%) genes were highly correlated (p < 0.01) in a comparison of the two platforms. Overall, 75 of 93 (80.6%) genes exhibited consistent results in normal versus tumor tissue comparisons for both platforms (p < 0.001). The fold change differences were concordant for 87 of 93 (94%) genes, where there was agreement between the platforms regarding statistical significance for 71 (76%) of 87 genes. There was a strong agreement between the two platforms as shown by comparisons of log2 fold differences of gene expression between tumor versus normal samples (R = 0.93) and by Bland-Altman analysis, where greater than 90% of expression values fell within the 95% limits of agreement. Overall concordance of gene expression patterns based on correlations, statistical significance between tumor and normal ovary data, and fold changes was consistent between the Ziplex and Affymetrix platforms. The reproducibility and ease-of-use of the technology suggests that the Ziplex array is a suitable platform for translational research.
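
    A minimal sketch of the Bland-Altman check reported above: compute the per-gene difference of log2 expression between the two platforms, the 95% limits of agreement, and the fraction of genes falling inside them. Toy data stand in for the Ziplex and Affymetrix values:

      import numpy as np

      rng = np.random.default_rng(8)
      affy = rng.normal(8.0, 2.0, size=93)             # log2 expression, platform A
      ziplex = affy + rng.normal(0.0, 0.4, size=93)    # platform B with small noise

      diff = ziplex - affy
      mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
      lo, hi = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
      inside = np.mean((diff >= lo) & (diff <= hi))    # fraction within the limits
      print(lo, hi, inside)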

  11. Bilirubin and Stroke Risk Using a Mendelian Randomization Design.

    PubMed

    Lee, Sun Ju; Jee, Yon Ho; Jung, Keum Ji; Hong, Seri; Shin, Eun Soon; Jee, Sun Ha

    2017-05-01

    Circulating bilirubin, a natural antioxidant, is associated with decreased risk of stroke. However, the nature of the relationship between the two remains unknown. We used a Mendelian randomization analysis to assess the causal effect of serum bilirubin on stroke risk in Koreans. Fourteen single-nucleotide polymorphisms (SNPs) (P < 10⁻⁷), including rs6742078 of uridine diphosphoglucuronyl-transferase, were selected from a genome-wide association study of bilirubin level in the KCPS-II (Korean Cancer Prevention Study-II) Biobank subcohort consisting of 4793 healthy Koreans and 806 stroke cases. A weighted genetic risk score was calculated using the 14 top SNPs. Both rs6742078 (F statistic = 138) and the weighted genetic risk score based on the 14 SNPs (F statistic = 187) were strongly associated with bilirubin levels. At the same time, serum bilirubin level was associated with decreased risk of stroke in an ordinary least-squares analysis. However, in the two-stage least-squares Mendelian randomization analysis, no causal relationship between serum bilirubin and stroke risk was found. There is no evidence that bilirubin level is causally associated with risk of stroke in Koreans. Therefore, bilirubin level is not a risk determinant of stroke. © 2017 American Heart Association, Inc.
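
    A minimal sketch of the two-stage least-squares step on toy data. The study's outcome is binary stroke status, so logistic models would be used in practice; a continuous outcome and plain OLS are used here for brevity, and the stage-2 standard error shown ignores first-stage uncertainty:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(9)
      grs = rng.normal(size=1000)                          # weighted genetic risk score
      bilirubin = 0.5 * grs + rng.normal(size=1000)        # exposure
      outcome = 0.0 * bilirubin + rng.normal(size=1000)    # toy null outcome

      # Stage 1: exposure on instrument; the F statistic gauges instrument strength.
      stage1 = sm.OLS(bilirubin, sm.add_constant(grs)).fit()
      # Stage 2: outcome on the fitted exposure values.
      stage2 = sm.OLS(outcome, sm.add_constant(stage1.fittedvalues)).fit()
      print(stage1.fvalue, stage2.params[1], stage2.bse[1])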

  12. The Effectiveness of Computer-Assisted Instruction to Teach Physical Examination to Students and Trainees in the Health Sciences Professions: A Systematic Review and Meta-Analysis

    PubMed Central

    Tomesko, Jennifer; Touger-Decker, Riva; Dreker, Margaret; Zelig, Rena; Parrott, James Scott

    2017-01-01

    Purpose: To explore knowledge and skill acquisition outcomes related to learning physical examination (PE) through computer-assisted instruction (CAI) compared with a face-to-face (F2F) approach. Method: A systematic literature review and meta-analysis published between January 2001 and December 2016 was conducted. Databases searched included Medline, Cochrane, CINAHL, ERIC, Ebsco, Scopus, and Web of Science. Studies were synthesized by study design, intervention, and outcomes. Statistical analyses included DerSimonian-Laird random-effects model. Results: In total, 7 studies were included in the review, and 5 in the meta-analysis. There were no statistically significant differences for knowledge (mean difference [MD] = 5.39, 95% confidence interval [CI]: −2.05 to 12.84) or skill acquisition (MD = 0.35, 95% CI: −5.30 to 6.01). Conclusions: The evidence does not suggest a strong consistent preference for either CAI or F2F instruction to teach students/trainees PE. Further research is needed to identify conditions which examine knowledge and skill acquisition outcomes that favor one mode of instruction over the other. PMID:29349338

  13. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
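
    As a worked illustration of the entropy-maximization step (a standard Lagrange-multiplier derivation, not reproduced from the paper), constraining only the mean of a non-negative quantity yields the exponential form reported for particle velocities and travel times:

      \max_{f}\; H[f] = -\int_0^\infty f(u)\,\ln f(u)\,du
      \quad\text{subject to}\quad
      \int_0^\infty f(u)\,du = 1,
      \qquad
      \int_0^\infty u\,f(u)\,du = \mu .

    Setting the variation of the Lagrangian to zero gives \ln f(u) = -1 - \lambda_0 - \lambda_1 u, so f(u) \propto e^{-\lambda_1 u}; the two constraints then fix f(u) = \mu^{-1} e^{-u/\mu}, the exponential distribution. Constraining the second moment on the real line instead produces the Gaussian, and constraining the mean of |u| yields the zero-mean Laplace form found for accelerations.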

  14. The analysis of factors of management of safety of critical information infrastructure with use of dynamic models

    NASA Astrophysics Data System (ADS)

    Trostyansky, S. N.; Kalach, A. V.; Lavlinsky, V. V.; Lankin, O. V.

    2018-03-01

    Based on the analysis of a dynamic panel-data model by region, which includes fire statistics for surveillance sites, a set of regional socio-economic indicators, and the rapid-response times of the state fire service, we estimate the probability of fires at the surveillance sites and the risk of human death resulting from such fires from the values of the corresponding indicators for the previous year, the regional socio-economic factors, and the regional rapid-response times of the state fire service. The results are consistent with those obtained by applying the rational-offender model to fire risks. The estimate of the economic equivalent of human life derived from the Russian surveillance-site data, calculated on the basis of the presented dynamic model of fire risks, agrees with published values. The results obtained on the basis of this econometric approach to fire risks allow us to forecast fire risks at the surveillance sites in the regions of Russia and to develop management decisions to minimize such risks.

  15. Using the gene ontology for microarray data mining: a comparison of methods and application to age effects in human prefrontal cortex.

    PubMed

    Pavlidis, Paul; Qin, Jie; Arango, Victoria; Mann, John J; Sibille, Etienne

    2004-06-01

    One of the challenges in the analysis of gene expression data is placing the results in the context of other data available about genes and their relationships to each other. Here, we approach this problem in the study of gene expression changes associated with age in two areas of the human prefrontal cortex, comparing two computational methods. The first method, "overrepresentation analysis" (ORA), is based on statistically evaluating the fraction of genes in a particular gene ontology class found among the set of genes showing age-related changes in expression. The second method, "functional class scoring" (FCS), examines the statistical distribution of individual gene scores among all genes in the gene ontology class and does not involve an initial gene selection step. We find that FCS yields more consistent results than ORA, and the results of ORA depended strongly on the gene selection threshold. Our findings highlight the utility of functional class scoring for the analysis of complex expression data sets and emphasize the advantage of considering all available genomic information rather than sets of genes that pass a predetermined "threshold of significance."
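
    A minimal sketch of the ORA computation for a single gene ontology class, as a hypergeometric tail probability; the counts are invented. Given N genes on the array, K of them in the class, and n selected age-related genes of which k fall in the class, the enrichment p-value is P(X >= k). FCS, by contrast, scores the full distribution of per-gene statistics within the class and involves no selection threshold:

      from scipy.stats import hypergeom

      N, K, n, k = 12000, 150, 400, 12            # toy counts
      p_enrich = hypergeom.sf(k - 1, N, K, n)     # P(X >= k) under random selection
      print(p_enrich)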

  16. Recipient area folliculitis after follicular-unit transplantation: characterization of clinical features and analysis of associated factors.

    PubMed

    Bunagan, M J Kristine S; Pathomvanich, Damkerng; Laorwong, Kongkiat

    2010-07-01

    Postoperative recipient-area folliculitis may be a cause of reduced or delayed growth of transplanted hair and an obvious cause of distress to the patient. No study has been done to elaborate on its clinical features and assess possible factors that may correlate with its occurrence. To study the clinical features and possible factors that may be associated with the development of recipient-area folliculitis after follicular-unit transplantation (FUT). Retrospective analysis of 27 patients who developed folliculitis after FUT and 28 patients without such complication. Lesion onset ranged from 2 days to 6 months after FUT (mean 1.44 months). Lesions were mostly pustules that resolved without sequelae. Statistical analysis showed that, in terms of patient characteristics (e.g., hair features, scalp condition) and the number of grafts transplanted, there was no statistically significant difference between those with and without folliculitis (p > .05). The main clinical features of postoperative folliculitis consist mostly of few to moderate numbers of self-limited pustules. In this study, regardless of management, lesions healed without scarring and without affecting graft growth. Neither patient characteristics nor the number of grafts transplanted was associated with this complication.

  17. A systematic review and meta-analysis of tract-based spatial statistics studies regarding attention-deficit/hyperactivity disorder.

    PubMed

    Chen, Lizhou; Hu, Xinyu; Ouyang, Luo; He, Ning; Liao, Yi; Liu, Qi; Zhou, Ming; Wu, Min; Huang, Xiaoqi; Gong, Qiyong

    2016-09-01

    Diffusion tensor imaging (DTI) studies that use tract-based spatial statistics (TBSS) have demonstrated the microstructural abnormalities of white matter (WM) in patients with attention-deficit/hyperactivity disorder (ADHD); however, robust conclusions have not yet been drawn. The present study integrated the findings of previous TBSS studies to determine the most consistent WM alterations in ADHD via a narrative review and meta-analysis. The literature search was conducted through October 2015 to identify TBSS studies that compared fractional anisotropy (FA) between ADHD patients and healthy controls. FA reductions were identified in the splenium of the corpus callosum (CC) that extended to the right cingulum, right sagittal stratum, and left tapetum. The first two clusters retained significance in the sensitivity analysis and in all subgroup analyses. The FA reduction in the CC splenium was negatively associated with the mean age of the ADHD group. We hypothesize that, in addition to the fronto-striatal-cerebellar circuit, the disturbed WM matter tracts that integrate the bilateral hemispheres and posterior-brain circuitries play a crucial role in the pathophysiology of ADHD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Some insight on censored cost estimators.

    PubMed

    Zhao, H; Cheng, Y; Bang, H

    2011-08-30

    Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censorship embedded in censored mark variables, use of the Kaplan-Meier estimator (Journal of the American Statistical Association 1958; 53:457-481), as an example, will produce biased estimates. Innovative estimators have been developed in the past decade to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data, so that our findings can be applied to other marked variables. Copyright © 2011 John Wiley & Sons, Ltd.

  19. Statistical Policy Working Paper 25. Data Editing Workshop and Exposition

    DOT National Transportation Integrated Search

    1996-12-01

    Statistical Policy Working Paper 25 is the written record of the Data Editing Workshop and Exposition held March 22, 1996, at the Bureau of Labor Statistics (BLS) Conference and Training Center. The program consisted of 44 oral presentations and 19 s...

  20. A global estimate of the Earth's magnetic crustal thickness

    NASA Astrophysics Data System (ADS)

    Vervelidou, Foteini; Thébault, Erwan

    2014-05-01

    The Earth's lithosphere is considered to be magnetic only down to the Curie isotherm. Therefore the Curie isotherm can, in principle, be estimated by analysis of magnetic data. Here, we propose such an analysis in the spectral domain by means of a newly introduced regional spatial power spectrum. This spectrum is based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism (Thébault et al., 2006). We briefly discuss its properties and its relationship with the Spherical Harmonic spatial power spectrum. This relationship allows us to adapt any theoretical expression of the lithospheric field power spectrum expressed in Spherical Harmonic degrees to the regional formulation. We compared previously published statistical expressions (Jackson, 1994; Voorhies et al., 2002) to the recent lithospheric field models derived from the CHAMP and airborne measurements, and we finally developed a new statistical form for the power spectrum of the Earth's magnetic lithosphere that we think provides more consistent results. This expression depends on the mean magnetization, the mean crustal thickness and a power law value that describes the amount of spatial correlation of the sources. In this study, we make combined use of the R-SCHA surface power spectrum and this statistical form. We conduct a series of regional spectral analyses for the entire Earth. For each region, we estimate the R-SCHA surface power spectrum of the NGDC-720 Spherical Harmonic model (Maus, 2010). We then fit each of these observational spectra to the statistical expression of the power spectrum of the Earth's lithosphere. By doing so, we estimate the large wavelengths of the magnetic crustal thickness on a global scale that are not accessible directly from the magnetic measurements due to the masking core field. We then discuss these results and compare them to the results we obtained by conducting a similar spectral analysis, but this time in Cartesian coordinates, by means of a published statistical expression (Maus et al., 1997). We also compare our results to crustal thickness global maps derived by means of additional geophysical data (Purucker et al., 2002).
