Sample records for system statistical analyses

  1. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram S.; Murthy, Pappu

    2013-01-01

    This report describes the statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain-weave graphite fibers in a highly toughened resin system. The report summarizes the distribution types and statistical details of the tests and the conditions under which the experimental data were generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, they will be used in future probabilistic analyses of spacecraft structures. The specific properties characterized, using a commercially available statistical package, are the ultimate strength, modulus, and Poisson's ratio. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.
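    A minimal sketch of the kind of distribution characterization described above, in Python with invented strength values (the report's data and its commercial statistical package are not reproduced here): candidate normal and two-parameter Weibull distributions are fitted to an ultimate-strength sample and screened with a Kolmogorov-Smirnov test.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      strength = rng.normal(620.0, 25.0, size=30)  # hypothetical ultimate strength, MPa

      norm_params = stats.norm.fit(strength)
      weib_params = stats.weibull_min.fit(strength, floc=0.0)  # 2-parameter Weibull

      # Goodness-of-fit screen for each candidate distribution type
      print(stats.kstest(strength, "norm", args=norm_params))
      print(stats.kstest(strength, "weibull_min", args=weib_params))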

  2. An Exploratory Data Analysis System for Support in Medical Decision-Making

    PubMed Central

    Copeland, J. A.; Hamel, B.; Bourne, J. R.

    1979-01-01

    An experimental system was developed to allow retrieval and analysis of data collected during a study of neurobehavioral correlates of renal disease. After retrieving data organized in a relational data base, simple parametric and nonparametric bivariate statistics could be computed. An “exploratory” mode, in which the system provided guidance in selecting appropriate statistical analyses, was also available to the user. The system traversed a decision tree using the inherent qualities of the data (e.g., the identity and number of patients, tests, and time epochs) to search for the appropriate analyses to employ.

  3. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    PubMed

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a major worldwide environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset, a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations.
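    A minimal sketch of the rank-correlation trend test on which the evaluation system relies (a significant Spearman correlation between survey time and litter abundance marks a significant temporal trend); the counts below are invented, not OSPAR data.

      import numpy as np
      from scipy.stats import spearmanr

      years = np.arange(2001, 2014)
      litter = np.array([120, 135, 110, 150, 160, 170, 155, 180, 200, 190, 210, 220, 215])

      rho, p = spearmanr(years, litter)
      print(f"rho={rho:.2f}, p={p:.3f} ->", "significant trend" if p < 0.05 else "no significant trend")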

  4. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system

    PubMed Central

    Glass-Kaastra, Shiona K.; Pearl, David L.; Reid-Smith, Richard J.; McEwen, Beverly; Slavic, Durda; McEwen, Scott A.; Fairles, Jim

    2014-01-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection. PMID:24688133

  5. Antimicrobial susceptibility of Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from a diagnostic veterinary laboratory and recommendations for a surveillance system.

    PubMed

    Glass-Kaastra, Shiona K; Pearl, David L; Reid-Smith, Richard J; McEwen, Beverly; Slavic, Durda; McEwen, Scott A; Fairles, Jim

    2014-04-01

    Antimicrobial susceptibility data on Escherichia coli F4, Pasteurella multocida, and Streptococcus suis isolates from Ontario swine (January 1998 to October 2010) were acquired from a comprehensive diagnostic veterinary laboratory in Ontario, Canada. In relation to the possible development of a surveillance system for antimicrobial resistance, data were assessed for ease of management, completeness, consistency, and applicability for temporal and spatial statistical analyses. Limited farm location data precluded spatial analyses and missing demographic data limited their use as predictors within multivariable statistical models. Changes in the standard panel of antimicrobials used for susceptibility testing reduced the number of antimicrobials available for temporal analyses. Data consistency and quality could improve over time in this and similar diagnostic laboratory settings by encouraging complete reporting with sample submission and by modifying database systems to limit free-text data entry. These changes could make more statistical methods available for disease surveillance and cluster detection.

  6. ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)

    EPA Science Inventory

    The availability of geographically indexed health and population data, together with advances in computing, geographical information systems, and statistical methodology, has opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...

  7. A data storage, retrieval and analysis system for endocrine research [for Skylab]

    NASA Technical Reports Server (NTRS)

    Newton, L. E.; Johnston, D. A.

    1975-01-01

    This retrieval system builds, updates, retrieves, and performs basic statistical analyses on blood, urine, and diet parameters for the M071 and M073 Skylab and Apollo experiments. This system permits data entry from cards to build an indexed sequential file. Programs are easily modified for specialized analyses.

  8. Computer program for prediction of fuel consumption statistical data for an upper stage three-axes stabilized on-off control system

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A FORTRAN-coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage are described. A Monte Carlo approach is used, made more efficient by closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties, and control system characteristics are included. The routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.
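    A minimal sketch of the Monte Carlo idea described above: sample the disturbance parameters, evaluate a closed-form impulse (and hence fuel) estimate per trial, and summarize the result as a histogram. The impulse model and all sigmas below are placeholders, not the report's FORTRAN formulation.

      import numpy as np

      rng = np.random.default_rng(42)
      N = 10_000
      misalign = rng.normal(0.0, 0.25, N)    # thrust misalignment, deg (assumed 1-sigma)
      unbalance = rng.normal(0.0, 5.0, N)    # static unbalance torque, N*m (assumed)
      aero = rng.normal(0.0, 2.0, N)         # aerodynamic disturbance torque, N*m (assumed)

      # Placeholder closed-form estimate: corrective impulse, then fuel via effective Isp*g
      impulse = 40.0 * np.abs(misalign) + 1.5 * np.abs(unbalance) + 3.0 * np.abs(aero)
      fuel_kg = impulse / 220.0

      hist, edges = np.histogram(fuel_kg, bins=30)
      print(f"mean={fuel_kg.mean():.2f} kg, mean+3sigma={fuel_kg.mean() + 3 * fuel_kg.std():.2f} kg")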

  9. SEER Cancer Query Systems (CanQues)

    Cancer.gov

    These applications provide access to cancer statistics including incidence, mortality, survival, prevalence, and probability of developing or dying from cancer. Users can display reports of the statistics or extract them for additional analyses.

  10. Dynamic systems approaches and levels of analysis in the nervous system

    PubMed Central

    Parker, David; Srivastava, Vipin

    2013-01-01

    Various analyses are applied to physiological signals. While epistemological diversity is necessary to address effects at different levels, there is often a sense of competition between analyses rather than integration. This is evidenced by the differences in the criteria needed to claim understanding in different approaches. In the nervous system, neuronal analyses that attempt to explain network outputs in cellular and synaptic terms are rightly criticized as being insufficient to explain global effects, emergent or otherwise, while higher-level statistical and mathematical analyses can provide quantitative descriptions of outputs but can only hypothesize on their underlying mechanisms. The major gap in neuroscience is arguably our inability to translate what should be seen as complementary effects between levels. We thus ultimately need approaches that allow us to bridge between different spatial and temporal levels. Analytical approaches derived from critical phenomena in the physical sciences are increasingly being applied to physiological systems, including the nervous system, and claim to provide novel insight into physiological mechanisms and opportunities for their control. Analyses of criticality have suggested several important insights that should be considered in cellular analyses. However, there is a mismatch between lower-level neurophysiological approaches and statistical phenomenological analyses that assume that lower-level effects can be abstracted away, which means that these effects are unknown or inaccessible to experimentalists. As a result experimental designs often generate data that is insufficient for analyses of criticality. This review considers the relevance of insights from analyses of criticality to neuronal network analyses, and highlights that to move the analyses forward and close the gap between the theoretical and neurobiological levels, it is necessary to consider that effects at each level are complementary rather than in competition. PMID:23386835

  11. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between the two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  12. Using DEWIS and R for Multi-Staged Statistics e-Assessments

    ERIC Educational Resources Information Center

    Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.

    2016-01-01

    We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…

  13. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and the nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
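    A minimal sketch of the conjugate Bayesian leak-rate estimate the course covers: with a Gamma(a, b) prior on a Poisson leak rate and k leaks observed over t pipeline-years, the posterior is Gamma(a + k, b + t). Prior and data values below are illustrative only.

      from scipy.stats import gamma

      a, b = 2.0, 10.0   # prior: mean a/b = 0.2 leaks per year (assumed)
      k, t = 3, 25.0     # observed: 3 leaks in 25 pipeline-years (invented)

      posterior = gamma(a + k, scale=1.0 / (b + t))
      lo, hi = posterior.interval(0.95)
      print(f"posterior mean={posterior.mean():.3f}/yr, 95% interval=({lo:.3f}, {hi:.3f})")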

  14. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    PubMed

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology.

  15. Method and data evaluation at NASA endocrine laboratory. [Skylab 3 experiments

    NASA Technical Reports Server (NTRS)

    Johnston, D. A.

    1974-01-01

    The biomedical data of the astronauts on Skylab 3 were analyzed to evaluate the univariate statistical methods for comparing endocrine series experiments in relation to other medical experiments. It was found that an information storage and retrieval system was needed to facilitate statistical analyses.

  16. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain.

  17. Performance simulation in high altitude platforms (HAPs) communications systems

    NASA Astrophysics Data System (ADS)

    Ulloa-Vásquez, Fernando; Delgado-Penin, J. A.

    2002-07-01

    This paper considers the analysis by simulation of a digital narrowband communication system for a scenario consisting of a High-Altitude aeronautical Platform (HAP) and fixed/mobile terrestrial transceivers. The aeronautical channel is modelled using geometrical (angle of elevation vs. horizontal distance of the terrestrial reflectors) and statistical arguments, and under these circumstances a serially concatenated coded digital transmission is analysed under several hypotheses related to radio-electric coverage areas. The results indicate good feasibility for the proposed communication system.

  18. Responding to the Medical Malpractice Insurance Crisis: A National Risk Management Information System

    PubMed Central

    Wess, Bernard P.; Jacobson, Gary

    1987-01-01

    In the process of forming a new medical malpractice reinsurance company, the authors analyzed thousands of medical malpractice cases, settlements, and verdicts. The evidence from those analyses indicated that the medical malpractice crisis is (1) emerging nation- and world-wide, (2) exacerbated by, but not primarily a result of, “predatory” legal action, (3) statistically determined by a small percentage of physicians and procedures, (4) overburdened with data but poor on information, and (5) subject to classic forms of quality control and automation. The management information system developed to address this problem features a tiered data base architecture to accommodate the medical, administrative, procedural, statistical, and actuarial analyses necessary to predict claims from untoward events, not merely to report them.

  19. System for analysing sickness absenteeism in Poland.

    PubMed

    Indulski, J A; Szubert, Z

    1997-01-01

    The National System of Sickness Absenteeism Statistics has been functioning in Poland since 1977 as part of the national health statistics. The system is based on a 15-percent random sample of copies of certificates of temporary incapacity for work issued by all health care units and authorised private medical practitioners. A certificate of temporary incapacity for work is received by every insured employee who is compelled to stop working due to sickness, accident, or the necessity to care for a sick member of his/her family. The certificate is required on the first day of sickness. Analyses of disease- and accident-related sickness absenteeism carried out each year in Poland within the statistical system lead to two main conclusions: 1. Diseases of the musculoskeletal and peripheral nervous systems, accounting when combined for one third of total sickness absenteeism, are a major health problem of the working population in Poland. During the past five years, incapacity for work caused by these diseases in males increased 2.5 times. 2. Circulatory diseases, arterial hypertension and ischaemic heart disease in particular (41% and 27% of sickness days, respectively), create an essential health problem among males of productive age, especially in the 40-and-older age group. Absenteeism due to these diseases has more than doubled in males.

  20. Metal and physico-chemical variations at a hydroelectric reservoir analyzed by Multivariate Analyses and Artificial Neural Networks: environmental management and policy/decision-making tools.

    PubMed

    Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M

    2013-01-01

    This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of the elements Ca, K, Mg and Na, and the second of Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which indicates the possibility of their input into the reservoir system by domestic sewage and agricultural run-off. These statistical applications are thus also relevant to environmental management and policy decision-making, identifying which factors should be further studied and/or modified to recover degraded or contaminated water bodies.
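    A minimal sketch of the two-step approach above: reduce correlated water-quality variables with a principal-component/factor step, then classify season with a small neural network. The data and labels below are synthetic placeholders, not the reservoir measurements.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 6))           # columns: Ca, K, Mg, Na, COD, DO (hypothetical)
      season = rng.integers(0, 2, size=60)   # 0 = dry, 1 = rainy (hypothetical)

      model = make_pipeline(StandardScaler(), PCA(n_components=2),
                            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
      model.fit(X, season)
      print("training accuracy:", model.score(X, season))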

  1. School Finance Court Cases and Disparate Racial Impact: The Contribution of Statistical Analysis in New York

    ERIC Educational Resources Information Center

    Stiefel, Leanna; Schwartz, Amy Ellen; Berne, Robert; Chellman, Colin C.

    2005-01-01

    Although analyses of state school finance systems rarely focus on the distribution of funds to students of different races, the advent of racial discrimination as an issue in school finance court cases may change that situation. In this article, we describe the background, analyses, and results of plaintiffs' testimony regarding racial…

  2. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety-factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
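    A minimal worked sketch of the safety-factor / safety-index relationship under the normality assumption adopted above: for independent normal resistance R and stress S, the first-order reliability index is beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2). Numbers are illustrative, not from the report.

      from math import sqrt
      from scipy.stats import norm

      mu_R, sd_R = 100.0, 8.0   # resistance (assumed units and spread)
      mu_S, sd_S = 60.0, 10.0   # applied stress

      beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)   # safety index
      p_fail = norm.cdf(-beta)                         # first-order failure probability
      print(f"central safety factor={mu_R / mu_S:.2f}, beta={beta:.2f}, Pf={p_fail:.2e}")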

  3. Use of Spatial Epidemiology and Hot Spot Analysis to Target Women Eligible for Prenatal Women, Infants, and Children Services

    PubMed Central

    Krawczyk, Christopher; Gradziel, Pat; Geraghty, Estella M.

    2014-01-01

    Objectives. We used a geographic information system and cluster analyses to determine locations in need of enhanced Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) Program services. Methods. We linked documented births in the 2010 California Birth Statistical Master File with the 2010 data from the WIC Integrated Statewide Information System. Analyses focused on the density of pregnant women who were eligible for but not receiving WIC services in California’s 7049 census tracts. We used incremental spatial autocorrelation and hot spot analyses to identify clusters of WIC-eligible nonparticipants. Results. We detected clusters of census tracts with higher-than-expected densities, compared with the state mean density of WIC-eligible nonparticipants, in 21 of 58 (36.2%) California counties (P < .05). In subsequent county-level analyses, we located neighborhood-level clusters of higher-than-expected densities of eligible nonparticipants in Sacramento, San Francisco, Fresno, and Los Angeles Counties (P < .05). Conclusions. Hot spot analyses provided a rigorous and objective approach to determine the locations of statistically significant clusters of WIC-eligible nonparticipants. Results helped inform WIC program and funding decisions, including the opening of new WIC centers, and offered a novel approach for targeting public health services. PMID:24354821
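    A minimal sketch of the Getis-Ord Gi* hot spot statistic named above, computed for one tract over all tracts with a binary spatial-weights row (self included, per the "star" variant); densities and neighbor structure are invented.

      import numpy as np

      def gi_star(x, w):
          """x: values for all n tracts; w: weights from the focal tract to every tract."""
          n = x.size
          xbar = x.mean()
          s = np.sqrt((x**2).mean() - xbar**2)
          num = w @ x - xbar * w.sum()
          den = s * np.sqrt((n * (w**2).sum() - w.sum()**2) / (n - 1))
          return num / den   # approximately a standard-normal z-score

      density = np.array([3.0, 5.0, 2.0, 30.0, 28.0, 4.0, 1.0])   # eligible nonparticipants per tract
      w = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])           # tract 3's neighbors plus itself
      print("Gi* z-score for tract 3:", gi_star(density, w))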

  4. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between the two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  5. Selected chemical analyses of water from formations of Mesozoic and Paleozoic age in parts of Oklahoma, northern Texas, and Union County, New Mexico

    USGS Publications Warehouse

    Parkhurst, R.S.; Christenson, S.C.

    1987-01-01

    Hydrochemical data were compiled into a data base as part of the Central Midwest Regional Aquifer System Analysis project. The data consist of chemical analyses of water samples collected from wells completed in formations of Mesozoic and Paleozoic age. The data base includes data from the National Water Data Storage and Retrieval System, the Petroleum Data System, the National Uranium Resource Evaluation, and selected publications. Chemical analyses were selected for inclusion in the hydrochemical data base if the total concentration of the cations differed from the total concentration of the anions by 10 percent or less of the total concentration of all ions. Analyses that lacked the necessary data for an ionic balance were included if the ratio of dissolved-solids concentration to specific conductance was between 0.55 and 0.75. The tabulated chemical analyses, grouped by county, and a statistical summary of the analyses, listed by geologic unit, are presented.
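    A minimal sketch of the two screening criteria described above: accept an analysis if cations and anions (in milliequivalents per litre) balance to within 10 percent of the total ion concentration, otherwise fall back on the dissolved-solids to specific-conductance ratio. Input values are invented.

      def charge_balance_ok(cations_meq, anions_meq, tol=0.10):
          total = cations_meq + anions_meq
          return abs(cations_meq - anions_meq) <= tol * total

      def ratio_ok(dissolved_solids_mg_l, spec_cond_us_cm):
          return 0.55 <= dissolved_solids_mg_l / spec_cond_us_cm <= 0.75

      print(charge_balance_ok(8.2, 7.9))   # True: imbalance is within 10% of all ions
      print(ratio_ok(640.0, 1000.0))       # True: ratio 0.64 falls in [0.55, 0.75]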

  6. Parsimonious Use of Indicators for Evaluating Sustainability Systems with Multivariate Statistical Analyses

    EPA Science Inventory

    Indicators are commonly used for evaluating relative sustainability for competing products and processes. When a set of indicators is chosen for a particular system of study, it is important to ensure that they vary independently of each other. Often the number of indicat...

  7. Meta-analysis of randomized clinical trials in the era of individual patient data sharing.

    PubMed

    Kawahara, Takuya; Fukuda, Musashi; Oba, Koji; Sakamoto, Junichi; Buyse, Marc

    2018-06-01

    Individual patient data (IPD) meta-analysis is considered to be a gold standard when the results of several randomized trials are combined. Recent initiatives on sharing IPD from clinical trials offer unprecedented opportunities for using such data in IPD meta-analyses. First, we discuss the evidence generated and the benefits obtained by a long-established prospective IPD meta-analysis in early breast cancer. Next, we discuss a data-sharing system that has been adopted by several pharmaceutical sponsors. We review a number of retrospective IPD meta-analyses that have already been proposed using this data-sharing system. Finally, we discuss the role of data sharing in IPD meta-analysis in the future. Treatment effects can be more reliably estimated in both types of IPD meta-analyses than with summary statistics extracted from published papers. Specifically, with rich covariate information available on each patient, prognostic and predictive factors can be identified or confirmed. Also, when several endpoints are available, surrogate endpoints can be assessed statistically. Although there are difficulties in conducting, analyzing, and interpreting retrospective IPD meta-analysis utilizing the currently available data-sharing systems, data sharing will play an important role in IPD meta-analysis in the future.

  8. Integrating ecosystem sampling, gradient modeling, remote sensing, and ecosystem simulation to create spatially explicit landscape inventories

    Treesearch

    Robert E. Keane; Matthew G. Rollins; Cecilia H. McNicoll; Russell A. Parsons

    2002-01-01

    Presented is a prototype of the Landscape Ecosystem Inventory System (LEIS), a system for creating maps of important landscape characteristics for natural resource planning. This system uses gradient-based field inventories coupled with gradient modeling, remote sensing, ecosystem simulation, and statistical analyses to derive the spatial data layers required for ecosystem...

  9. Statistical physics of human beings in games: Controlled experiments

    NASA Astrophysics Data System (ADS)

    Liang, Yuan; Huang, Ji-Ping

    2014-07-01

    It is important to know whether the laws or phenomena in statistical physics for natural systems with non-adaptive agents still hold for social human systems with adaptive agents, because this implies whether it is possible to study or understand social human systems by using statistical physics originating from natural systems. For this purpose, we review the role of human adaptability in four kinds of specific human behaviors, namely, normal behavior, herd behavior, contrarian behavior, and hedge behavior. The approach is based on controlled experiments in the framework of market-directed resource-allocation games. The role of the controlled experiments could be at least two-fold: adopting the real human decision-making process so that the system under consideration could reflect the performance of genuine human beings; making it possible to obtain macroscopic physical properties of a human system by tuning a particular factor of the system, thus directly revealing cause and effect. As a result, both computer simulations and theoretical analyses help to show a few counterparts of some laws or phenomena in statistical physics for social human systems: two-phase phenomena or phase transitions, entropy-related phenomena, and a non-equilibrium steady state. This review highlights the role of human adaptability in these counterparts, and makes it possible to study or understand some particular social human systems by means of statistical physics coming from natural systems.

  10. Statistical analysis plan for the Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART). A randomized controlled trial

    PubMed Central

    Damiani, Lucas Petri; Berwanger, Otavio; Paisani, Denise; Laranjeira, Ligia Nasi; Suzumura, Erica Aranha; Amato, Marcelo Britto Passos; Carvalho, Carlos Roberto Ribeiro; Cavalcanti, Alexandre Biasi

    2017-01-01

    Background: The Alveolar Recruitment for Acute Respiratory Distress Syndrome Trial (ART) is an international multicenter randomized pragmatic controlled trial with allocation concealment involving 120 intensive care units in Brazil, Argentina, Colombia, Italy, Poland, Portugal, Malaysia, Spain, and Uruguay. The primary objective of ART is to determine whether maximum stepwise alveolar recruitment associated with PEEP titration, adjusted according to the static compliance of the respiratory system (ART strategy), is able to increase 28-day survival in patients with acute respiratory distress syndrome compared to conventional treatment (ARDSNet strategy). Objective: To describe the data management process and statistical analysis plan. Methods: The statistical analysis plan was designed by the trial executive committee and reviewed and approved by the trial steering committee. We provide an overview of the trial design with a special focus on describing the primary (28-day survival) and secondary outcomes. We describe our data management process, data monitoring committee, interim analyses, and sample size calculation. We describe our planned statistical analyses for primary and secondary outcomes as well as pre-specified subgroup analyses. We also provide details for presenting results, including mock tables for baseline characteristics, adherence to the protocol and effect on clinical outcomes. Conclusion: According to best trial practice, we report our statistical analysis plan and data management plan prior to locking the database and beginning analyses. We anticipate that this document will prevent analysis bias and enhance the utility of the reported results. Trial registration: ClinicalTrials.gov number, NCT01374022. PMID:28977255

  11. Testing a Coupled Global-limited-area Data Assimilation System using Observations from the 2004 Pacific Typhoon Season

    NASA Astrophysics Data System (ADS)

    Holt, C. R.; Szunyogh, I.; Gyarmati, G.; Hoffman, R. N.; Leidner, M.

    2011-12-01

    Tropical cyclone (TC) track and intensity forecasts have improved in recent years due to increased model resolution, improved data assimilation, and the rapid increase in the number of routinely assimilated observations over oceans. The data assimilation approach that has received the most attention in recent years is Ensemble Kalman Filtering (EnKF). The most attractive feature of the EnKF is that it uses a fully flow-dependent estimate of the error statistics, which can have important benefits for the analysis of rapidly developing TCs. We implement the Local Ensemble Transform Kalman Filter algorithm, a variation of the EnKF, on a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model and the NCEP Regional Spectral Model (RSM) to build a coupled global-limited area analysis/forecast system. This is the first time, to our knowledge, that such a system is used for the analysis and forecast of tropical cyclones. We use data from summer 2004 to study eight tropical cyclones in the Northwest Pacific. The benchmark data sets that we use to assess the performance of our system are the NCEP Reanalysis and the NCEP Operational GFS analyses from 2004. These benchmark analyses were both obtained by the Spectral Statistical Interpolation, which was the operational data assimilation system of NCEP in 2004. The GFS Operational analysis assimilated a large number of satellite radiance observations in addition to the observations assimilated in our system. All analyses are verified against the Joint Typhoon Warning Center Best Track data set. The errors are calculated for the position and intensity of the TCs. The global component of the ensemble-based system shows improvement in position analysis over the NCEP Reanalysis, but shows no significant difference from the NCEP operational analysis for most of the storm tracks. The regional component of our system improves position analysis over all the global analyses. The intensity analyses, measured by the minimum sea level pressure, are of similar quality in all of the analyses. Regional deterministic forecasts started from our analyses are generally not significantly different from those started from the GFS operational analysis. On average, the regional experiments performed better for sea level pressure forecasts beyond 48 h, while the global forecasts performed better in predicting position beyond 48 h.
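    A minimal stochastic ensemble Kalman update, to make the flow-dependent idea above concrete: the gain is built from the forecast ensemble's own covariance rather than a static error matrix. Sizes are toy values; the LETKF used in the study is a deterministic square-root variant of this update applied in local regions.

      import numpy as np

      rng = np.random.default_rng(7)
      n, m, k = 40, 10, 20                      # state dim, obs dim, ensemble size
      X = rng.normal(size=(n, k))               # forecast ensemble (columns = members)
      H = np.zeros((m, n))
      H[np.arange(m), np.arange(0, n, n // m)] = 1.0   # simple observation operator
      R = 0.5 * np.eye(m)                       # observation-error covariance
      y = rng.normal(size=m)                    # observations

      A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
      Pf = A @ A.T / (k - 1)                    # flow-dependent forecast covariance
      K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

      Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=k).T  # perturbed obs
      Xa = X + K @ (Y - H @ X)                  # analysis ensemble
      print("analysis ensemble spread:", Xa.std())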

  12. Detecting differential DNA methylation from sequencing of bisulfite converted DNA of diverse species.

    PubMed

    Huh, Iksoo; Wu, Xin; Park, Taesung; Yi, Soojin V

    2017-07-21

    DNA methylation is one of the most extensively studied epigenetic modifications of genomic DNA. In recent years, sequencing of bisulfite-converted DNA, particularly via next-generation sequencing technologies, has become a widely popular method to study DNA methylation. This method can be readily applied to a variety of species, dramatically expanding the scope of DNA methylation studies beyond the traditionally studied human and mouse systems. In parallel to the increasing wealth of genomic methylation profiles, many statistical tools have been developed to detect differentially methylated loci (DMLs) or differentially methylated regions (DMRs) between biological conditions. We discuss and summarize several key properties of currently available tools to detect DMLs and DMRs from sequencing of bisulfite-converted DNA. However, the majority of the statistical tools developed for DML/DMR analyses have been validated using only mammalian data sets, and less priority has been placed on the analyses of invertebrate or plant DNA methylation data. We demonstrate that genomic methylation profiles of non-mammalian species are often highly distinct from those of mammalian species using examples of honey bees and humans. We then discuss how such differences in data properties may affect statistical analyses. Based on these differences, we provide three specific recommendations to improve the power and accuracy of DML and DMR analyses of invertebrate data when using currently available statistical tools. These considerations should facilitate systematic and robust analyses of DNA methylation from diverse species, thus advancing our understanding of DNA methylation.
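    A minimal per-locus DML test of the kind the reviewed tools perform, in Python with invented counts: methylated/unmethylated read counts at one CpG are compared between two conditions with Fisher's exact test (real tools add dispersion modelling and multiple-testing correction across loci).

      from scipy.stats import fisher_exact

      # rows: condition A, condition B; columns: methylated, unmethylated reads
      table = [[18, 42],
               [35, 25]]
      odds_ratio, p = fisher_exact(table)
      print(f"OR={odds_ratio:.2f}, p={p:.4f}")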

  13. Systems and methods for detection of blowout precursors in combustors

    DOEpatents

    Lieuwen, Tim C.; Nair, Suraj

    2006-08-15

    The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device, performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on received pressure data, and determining the existence of a blowout precursor based on such analyses. The spectral analysis, statistical analysis, and wavelet analysis further comprise their respective sub-methods to determine the existence of blowout precursors.
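    A minimal sketch of the kind of pressure-signal screening the patent describes: one spectral statistic (band power from a Welch PSD) and one time-domain statistic (kurtosis), each checked against a threshold. The synthetic signal and thresholds are placeholders, not the patented method's values.

      import numpy as np
      from scipy.signal import welch
      from scipy.stats import kurtosis

      fs = 8192.0
      t = np.arange(0, 1.0, 1.0 / fs)
      rng = np.random.default_rng(3)
      pressure = np.sin(2 * np.pi * 210 * t) + 0.3 * rng.normal(size=t.size)  # stand-in signal

      f, psd = welch(pressure, fs=fs, nperseg=1024)
      low_band = psd[(f > 10) & (f < 100)].sum()   # low-frequency power
      k = kurtosis(pressure)

      precursor = (low_band > 1e-3) or (abs(k) > 1.0)   # placeholder thresholds
      print(low_band, k, precursor)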

  14. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  15. Distributed management of scientific projects - An analysis of two computer-conferencing experiments at NASA

    NASA Technical Reports Server (NTRS)

    Vallee, J.; Gibbs, B.

    1976-01-01

    Between August 1975 and March 1976, two NASA projects with geographically separated participants used a computer-conferencing system developed by the Institute for the Future for portions of their work. Monthly usage statistics for the system were collected in order to examine the group and individual participation figures for all conferences. The conference transcripts were analysed to derive observations about the use of the medium. In addition to the results of these analyses, the attitudes of users and the major components of the costs of computer conferencing are discussed.

  16. permGPU: Using graphics processing units in RNA microarray association studies.

    PubMed

    Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros

    2010-06-16

    Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
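    A minimal CPU sketch of the permutation resampling that permGPU accelerates: permute group labels many times and compare the observed two-sample statistic against its permutation distribution. Data are synthetic; permGPU itself runs many markers and statistics in parallel on the GPU.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      expr = rng.normal(size=50)        # one gene's expression values (invented)
      labels = np.repeat([0, 1], 25)    # binary trait

      def t_stat(x, g):
          return stats.ttest_ind(x[g == 0], x[g == 1]).statistic

      obs = t_stat(expr, labels)
      perm = np.array([t_stat(expr, rng.permutation(labels)) for _ in range(5000)])
      p = (np.sum(np.abs(perm) >= abs(obs)) + 1) / (perm.size + 1)
      print(f"t={obs:.2f}, permutation p={p:.4f}")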

  17. Detection of semi-volatile organic compounds in permeable ...

    EPA Pesticide Factsheets

    The Edison Environmental Center (EEC) has a research and demonstration permeable parking lot comprised of three different permeable systems: permeable asphalt, porous concrete and interlocking concrete permeable pavers. Water quality and quantity analysis has been ongoing since January 2010. This paper describes a subset of the water quality analysis, the analysis of semivolatile organic compounds (SVOCs), to determine whether hydrocarbons were present in water infiltrated through the permeable surfaces. SVOCs were analyzed in samples collected on 11 dates over a 3-year period, from 2/8/2010 to 4/1/2013. Results are broadly divided into three categories: 42 chemicals were never detected; 12 chemicals (11 chemical tests) were detected at a rate of 10% or less; and 22 chemicals were detected at a frequency of 10% or greater (ranging from 10% to 66.5% detection). Fundamental and exploratory statistical analyses were performed on these latter results by grouping them by surface type. The statistical analyses were limited by the low frequency of detections and by sample dilutions, which affected detection limits. The infiltrate data from the three permeable surfaces were analyzed as non-parametric data by the Kaplan-Meier estimation method for fundamental statistics; there were some statistically observable differences in concentration between pavement types when using the Tarone-Ware comparison hypothesis test. Additionally, Spearman rank-order non-parametric…

  18. Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki

    2014-04-01

    We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.

  19. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
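    A minimal sketch of the lowest orders of the counting statistics described above: the variance-to-mean ratio of gated detector counts (Feynman-Y) is zero for a pure Poisson source and grows with fission-chain correlations, and the third central moment is the next term a higher-order analysis would use. Counts here are a Poisson stand-in, not measurement data.

      import numpy as np

      counts = np.random.default_rng(5).poisson(12.0, size=4000)  # stand-in gated counts
      mean = counts.mean()
      feynman_y = counts.var(ddof=1) / mean - 1.0   # 0 for Poisson; > 0 with correlations
      m3 = ((counts - mean) ** 3).mean()            # third central moment
      print(f"Feynman-Y={feynman_y:.3f}, third central moment={m3:.2f}")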

  20. Identifying and characterizing hepatitis C virus hotspots in Massachusetts: a spatial epidemiological approach.

    PubMed

    Stopka, Thomas J; Goulart, Michael A; Meyers, David J; Hutcheson, Marga; Barton, Kerri; Onofrey, Shauna; Church, Daniel; Donahue, Ashley; Chui, Kenneth K H

    2017-04-20

    Hepatitis C virus (HCV) infections have increased during the past decade but little is known about geographic clustering patterns. We used a unique analytical approach, combining geographic information systems (GIS), spatial epidemiology, and statistical modeling to identify and characterize HCV hotspots, statistically significant clusters of census tracts with elevated HCV counts and rates. We compiled sociodemographic and HCV surveillance data (n = 99,780 cases) for Massachusetts census tracts (n = 1464) from 2002 to 2013. We used a five-step spatial epidemiological approach, calculating incremental spatial autocorrelations and Getis-Ord Gi* statistics to identify clusters. We conducted logistic regression analyses to determine factors associated with the HCV hotspots. We identified nine HCV clusters, with the largest in Boston, New Bedford/Fall River, Worcester, and Springfield (p < 0.05). In multivariable analyses, we found that HCV hotspots were independently and positively associated with the percent of the population that was Hispanic (adjusted odds ratio [AOR]: 1.07; 95% confidence interval [CI]: 1.04, 1.09) and the percent of households receiving food stamps (AOR: 1.83; 95% CI: 1.22, 2.74). HCV hotspots were independently and negatively associated with the percent of the population that were high school graduates or higher (AOR: 0.91; 95% CI: 0.89, 0.93) and the percent of the population in the "other" race/ethnicity category (AOR: 0.88; 95% CI: 0.85, 0.91). We identified locations where HCV clusters were a concern, and where enhanced HCV prevention, treatment, and care can help combat the HCV epidemic in Massachusetts. GIS, spatial epidemiological and statistical analyses provided a rigorous approach to identify hotspot clusters of disease, which can inform public health policy and intervention targeting. Further studies that incorporate spatiotemporal cluster analyses, Bayesian spatial and geostatistical models, spatially weighted regression analyses, and assessment of associations between HCV clustering and the built environment are needed to expand upon our combined spatial epidemiological and statistical methods.
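    A minimal sketch of the multivariable step above: logistic regression of hotspot membership on tract covariates, with adjusted odds ratios and 95% CIs obtained by exponentiating the coefficients and their confidence bounds. The data are synthetic stand-ins for the census-tract covariates.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n = 500
      pct_hispanic = rng.uniform(0, 60, n)
      pct_foodstamps = rng.uniform(0, 40, n)
      logit = -3 + 0.06 * pct_hispanic + 0.05 * pct_foodstamps
      hotspot = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

      X = sm.add_constant(np.column_stack([pct_hispanic, pct_foodstamps]))
      fit = sm.Logit(hotspot, X).fit(disp=0)
      print("AORs:", np.exp(fit.params[1:]))
      print("95% CIs:", np.exp(fit.conf_int()[1:]))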

  1. Statistical Exposé of a Multiple-Compartment Anaerobic Reactor Treating Domestic Wastewater.

    PubMed

    Pfluger, Andrew R; Hahn, Martha J; Hering, Amanda S; Munakata-Marr, Junko; Figueroa, Linda

    2018-06-01

    Mainstream anaerobic treatment of domestic wastewater is a promising energy-generating treatment strategy; however, such reactors operated in colder regions are not well characterized. Performance data from a pilot-scale, multiple-compartment anaerobic reactor taken over 786 days were subjected to comprehensive statistical analyses. Results suggest that chemical oxygen demand (COD) was a poor proxy for organics in anaerobic systems as oxygen demand from dissolved inorganic material, dissolved methane, and colloidal material influence dissolved and particulate COD measurements. Additionally, univariate and functional boxplots were useful in visualizing variability in contaminant concentrations and identifying statistical outliers. Further, significantly different dissolved organic removal and methane production was observed between operational years, suggesting that anaerobic reactor systems may not achieve steady-state performance within one year. Last, modeling multiple-compartment reactor systems will require data collected over at least two years to capture seasonal variations of the major anaerobic microbial functions occurring within each reactor compartment.

  2. Earthquake triggering in southeast Africa following the 2012 Indian Ocean earthquake

    NASA Astrophysics Data System (ADS)

    Neves, Miguel; Custódio, Susana; Peng, Zhigang; Ayorinde, Adebayo

    2018-02-01

    In this paper we present evidence of earthquake dynamic triggering in southeast Africa. We analysed seismic waveforms recorded at 53 broad-band and short-period stations in order to identify possible increases in the rate of microearthquakes and tremor due to the passage of teleseismic waves generated by the Mw8.6 2012 Indian Ocean earthquake. We found evidence of triggered local earthquakes and no evidence of triggered tremor in the region. We assessed the statistical significance of the increase in the number of local earthquakes using β-statistics. Statistically significant dynamic triggering of local earthquakes was observed at 7 out of the 53 analysed stations. Two of these stations are located in the northeast coast of Madagascar and the other five stations are located in the Kaapvaal Craton, southern Africa. We found no evidence of dynamically triggered seismic activity in stations located near the structures of the East African Rift System. Hydrothermal activity exists close to the stations that recorded dynamic triggering, however, it also exists near the East African Rift System structures where no triggering was observed. Our results suggest that factors other than solely tectonic regime and geothermalism are needed to explain the mechanisms that underlie earthquake triggering.
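    A minimal sketch of the β-statistic used above to test for a rate increase, in its common form (conventions vary slightly between papers): compare the observed count in the window after the teleseismic arrival against the expectation under the catalog-wide rate. The counts below are invented.

      from math import sqrt

      N, T = 120, 30.0   # events and length (days) of the full analysis window
      Na, ta = 14, 2.0   # events and length of the window after the Mw 8.6 arrival

      expected = N * ta / T
      beta = (Na - expected) / sqrt(N * (ta / T) * (1 - ta / T))
      print(f"expected={expected:.1f}, beta={beta:.2f}")   # beta > 2 is a common cutoff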

  3. SU-F-T-551: Beam Hardening and Attenuation of Photon Beams Using Integral Quality Monitor in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casar, B; Carot, I Mendez; Peterlin, P

    2016-06-15

    Purpose: The aim of this multi-centre study was to analyse the beam hardening effect of the Integral Quality Monitor (IQM) for high-energy photon beams used in radiotherapy with linear accelerators. Generic values for the attenuation coefficient k(IQM) of the IQM system were additionally investigated. Methods: The beam hardening effect of the IQM system was studied for a set of standard nominal photon energies (6 MV–18 MV) and two flattening filter free (FFF) energies (6 MV FFF and 10 MV FFF). PDD curves were measured and analysed for various square radiation fields, with and without the IQM in place. Differences between PDD curves were statistically analysed through comparison of the respective PDD20,10 values. Attenuation coefficients k(IQM) were determined for the same range of photon energies. Results: Statistically significant differences in beam quality were found for all evaluated high-energy photon beams when comparing PDD20,10 values derived from PDD curves with and without the IQM in place. The beam hardening effect was statistically significant with high confidence (p < 0.01) for all analysed photon beams except 15 MV (p = 0.078), although the relative differences in beam quality were minimal, ranging from 0.1% to 0.5%. Attenuation of the IQM system showed negligible dependence on radiation field size. However, a clinically important dependence of k(IQM) on TPR20,10 was found, from 0.941 for 6 MV photon beams to 0.959 for 18 MV photon beams, with the highest uncertainty below 0.006. k(IQM) versus TPR20,10 values were tabulated and a polynomial equation for the determination of k(IQM) is suggested for clinical use. Conclusion: There was no clinically relevant beam hardening when the IQM system was mounted on linear accelerators. Consequently, no additional commissioning is needed for the IQM system regarding the determination of beam qualities. Generic values for k(IQM) are proposed and can be used as tray factors for the complete range of examined photon beam energies.

  4. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    NASA Astrophysics Data System (ADS)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially-explicit database of existing geochemical major-element analyses available from (post-) colonial scientific reports, PhD Theses and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province with a localisation for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasted analytical methods or dates, demonstrates that the database is consistent. We applied a principal component analysis and cluster analysis on whole-rock major element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared to other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products which originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at distance from the central Virunga volcanoes. Our results illustrate the relevance of a spatial analysis on integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach indeed helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
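    A minimal sketch of the cluster step described above: standardize major-element compositions and cut a Ward dendrogram into groups that can then be mapped back to sample locations. The compositions below are random placeholders, not the Virunga database.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.stats import zscore

      rng = np.random.default_rng(2)
      oxides = rng.normal(size=(40, 5))   # columns: SiO2, TiO2, Al2O3, MgO, K2O (hypothetical)

      Z = linkage(zscore(oxides), method="ward")
      groups = fcluster(Z, t=3, criterion="maxclust")
      print("samples per cluster:", np.bincount(groups)[1:])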

  5. An evaluation system for electronic retrospective analyses in radiation oncology: implemented exemplarily for pancreatic cancer

    NASA Astrophysics Data System (ADS)

    Kessel, Kerstin A.; Jäger, Andreas; Bohn, Christian; Habermehl, Daniel; Zhang, Lanlan; Engelmann, Uwe; Bougatf, Nina; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E.

    2013-03-01

    To date, conducting retrospective clinical analyses is rather difficult and time consuming. Especially in radiation oncology, efficiently handling voluminous datasets from various information systems and different documentation styles is crucial for patient care and research. Using the example of patients with pancreatic cancer treated with radio-chemotherapy, we performed a therapy evaluation with analysis tools connected to a documentation system. A total of 783 patients were documented in a professional, web-based documentation system. Information about radiation therapy, diagnostic images and dose distributions was imported. For patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the DVH (dose-volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence. All results are stored in the database and included in statistical calculations. The main goal of using an automatic evaluation system is to reduce the time and effort of conducting clinical analyses, especially with large patient groups. We have shown a first approach using some existing tools; however, manual interaction is still necessary. Further steps need to be taken to enhance automation. Already, it has become apparent that the benefits of digital data management and analysis lie in the central storage of data and the reusability of results. We therefore intend to adapt the evaluation system to other types of tumors in radiation oncology.
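
    As an illustration of the DVH step in this workflow, the following NumPy sketch computes a cumulative dose-volume histogram over a segmented recurrence volume. The function names and bin width are hypothetical, not part of the published system.

    ```python
    import numpy as np

    def cumulative_dvh(dose, mask, bin_width=0.1):
        """Cumulative DVH of the voxels inside a recurrence volume.

        dose : 3-D array of planned dose (Gy), resampled to the follow-up image
        mask : boolean array of the same shape marking the segmented recurrence
        """
        d = dose[mask]
        edges = np.arange(0.0, d.max() + bin_width, bin_width)
        # fraction of the recurrence volume receiving at least each dose level
        volume_fraction = np.array([(d >= e).mean() for e in edges])
        return edges, volume_fraction

    def d95(dose, mask):
        """Point metric: minimum dose covering the hottest 95% of the volume."""
        return np.percentile(dose[mask], 5.0)
    ```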

  6. Gait patterns for crime fighting: statistical evaluation

    NASA Astrophysics Data System (ADS)

    Sulovská, Kateřina; Bělašková, Silvie; Adámek, Milan

    2013-10-01

    Criminality has been omnipresent throughout human history. Modern technology brings novel opportunities for the identification of a perpetrator. One of these opportunities is the analysis of video recordings, which may be taken during the crime itself or before/after it. Such video analysis can be classified as an identification analysis, i.e. identification of a person via external characteristics. Bipedal locomotion analysis focuses on human movement on the basis of anatomical-physiological features. Many laboratories are currently testing human gait to learn whether identification via bipedal locomotion is possible. The aim of our study is to use 2D components extracted from 3D data from the VICON Mocap system for in-depth statistical analyses. This paper introduces recent results of a fundamental study focused on various gait patterns under different conditions. The study contains data from 12 participants. Curves obtained from these measurements were sorted, averaged and statistically tested to estimate the stability and distinctiveness of this biometric. Results show satisfactory distinctness of some chosen points, while others do not embody significant differences. However, the results presented in this paper represent the initial phase of deeper and more exacting analyses of gait patterns under different conditions.

  7. Inter-model variability in hydrological extremes projections for Amazonian sub-basins

    NASA Astrophysics Data System (ADS)

    Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier

    2014-05-01

    Irreducible uncertainties due to the limitations of knowledge, the chaotic nature of the climate system and the human decision-making process drive uncertainties in climate change projections. Such uncertainties affect impact studies, especially those associated with extreme events, and complicate the decision-making processes aimed at mitigation and adaptation. However, these uncertainties open the possibility of developing exploratory analyses of a system's vulnerability to different scenarios. Using projections from several climate models makes it possible to address these uncertainty issues, with multiple runs exploring a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches for the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered for extreme value analyses and can have great implications in dynamic complex systems, especially under climate change. It is therefore necessary to allow for nonstationarity in the statistical distribution parameters. We carried out a study of the dispersion in hydrological extremes projections using climate change projections from several climate models to feed the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. This large-scale hydrological model uses a TopModel approach to solve runoff generation processes at the grid-cell scale. The MHD-INPE model was calibrated for 1970-1990 using observed meteorological data, comparing observed and simulated discharges with several performance coefficients. Hydrological model integrations were performed for the historical period (1970-1990) and for the future period (2010-2100). Because climate models simulate the variability of the climate system in statistical terms rather than reproducing the historical behaviour of climate variables, the performance of the model runs during the historical period, when fed with climate model data, was tested using descriptors of the flow duration curves. The analyses of projected extreme values were carried out considering the nonstationarity of the GEV distribution parameters and compared with extreme events in the present climate. Results show inter-model variability as a broad dispersion in projected extreme values. Such dispersion implies different degrees of socio-economic impact associated with extreme hydrological events. Although no single optimum result exists, this variability allows the analysis of adaptation strategies and their potential vulnerabilities.
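
    A SciPy sketch of the kind of nonstationary GEV fit described above, with a location parameter that drifts linearly in time. The parameterisation, covariate scaling and starting values are illustrative assumptions, not the study's actual configuration.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    def fit_nonstationary_gev(annual_maxima, years):
        """MLE fit of a GEV whose location parameter is linear in time.

        annual_maxima, years : 1-D NumPy arrays of equal length
        """
        t = (years - years.mean()) / years.std()  # standardized time covariate

        def nll(params):
            mu0, mu1, log_sigma, xi = params
            loc = mu0 + mu1 * t                   # nonstationary location
            scale = np.exp(log_sigma)             # keep scale positive
            # SciPy's shape convention: c = -xi
            return -genextreme.logpdf(annual_maxima, -xi, loc=loc, scale=scale).sum()

        start = [annual_maxima.mean(), 0.0, np.log(annual_maxima.std()), 0.1]
        return minimize(nll, start, method="Nelder-Mead")
    ```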

  8. On-Orbit System Identification

    NASA Technical Reports Server (NTRS)

    Mettler, E.; Milman, M. H.; Bayard, D.; Eldred, D. B.

    1987-01-01

    Information derived from accelerometer readings benefits important engineering and control functions. Report discusses methodology for detection, identification, and analysis of motions within space station. Techniques of vibration and rotation analyses, control theory, statistics, filter theory, and transform methods integrated to form system for generating models and model parameters that characterize total motion of complicated space station, with respect to both control-induced and random mechanical disturbances.

  9. Classroom Assessments of 6000 Teachers: What Do the Results Show about the Effectiveness of Teaching and Learning?

    ERIC Educational Resources Information Center

    Hill, Flo H.; And Others

    This paper presents the results of a series of summary analyses of descriptive statistics concerning 5,720 Louisiana teachers who were assessed with the System for Teaching and Learning Assessment and Review (STAR)--a comprehensive on-the-job statewide teacher assessment system--during the second pilot year (1989-90). Data were collected by about…

  10. Harmonizing health information systems with information systems in other social and economic sectors.

    PubMed Central

    Macfarlane, Sarah B.

    2005-01-01

    Efforts to strengthen health information systems in low- and middle-income countries should include forging links with systems in other social and economic sectors. Governments are seeking comprehensive socioeconomic data on the basis of which to implement strategies for poverty reduction and to monitor achievement of the Millennium Development Goals. The health sector is looking to take action on the social factors that determine health outcomes. But there are duplications and inconsistencies between sectors in the collection, reporting, storage and analysis of socioeconomic data. National offices of statistics give higher priority to collection and analysis of economic than to social statistics. The Report of the Commission for Africa has estimated that an additional US$ 60 million a year is needed to improve systems to collect and analyse statistics in Africa. Some donors recognize that such systems have been weakened by numerous international demands for indicators, and have pledged support for national initiatives to strengthen statistical systems, as well as sectoral information systems such as those in health and education. Many governments are working to coordinate information systems to monitor and evaluate poverty reduction strategies. There is therefore an opportunity for the health sector to collaborate with other sectors to lever international resources to rationalize definition and measurement of indicators common to several sectors; streamline the content, frequency and timing of household surveys; and harmonize national and subnational databases that store socioeconomic data. Without long-term commitment to improve training and build career structures for statisticians and information technicians working in the health and other sectors, improvements in information and statistical systems cannot be sustained. PMID:16184278

  11. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The survey concentrates on the class of linear systems, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  12. The Technologies of EXPER SIM.

    ERIC Educational Resources Information Center

    Hedberg, John G.

    EXPER SIM has been translated into two basic software systems: the Michigan Experimental Simulation Supervisor (MESS) and Louisville Experiment Simulation Supervisor (LESS). MESS and LESS have been programmed to facilitate student interaction with the computer for research purposes. The programs contain models for several statistical analyses, and…

  13. Formative Assessment in Mathematics for Engineering Students

    ERIC Educational Resources Information Center

    Ní Fhloinn, Eabhnat; Carr, Michael

    2017-01-01

    In this paper, we present a range of formative assessment types for engineering mathematics, including in-class exercises, homework, mock examination questions, table quizzes, presentations, critical analyses of statistical papers, peer-to-peer teaching, online assessments and electronic voting systems. We provide practical tips for the…

  14. Fatality Reduction by Air Bags: Analyses of Accident Data through Early 1996

    DOT National Transportation Integrated Search

    1996-08-01

    The fatality risk of front-seat occupants of passenger cars and light trucks equipped with air bags is compared to the corresponding risk in similar vehicles without air bags, based on statistical analysis of Fatal Accident Reporting System (FARS) dat...

  15. Cognitive and Behavioral Treatments of Agoraphobia: Clinical, Behavioral, and Psychophysiological Outcomes.

    ERIC Educational Resources Information Center

    Michelson, Larry; And Others

    1985-01-01

    Agoraphobics (N=37) were randomly assigned to one of three cognitive-behavioral treatments: paradoxical intention, graduated exposure, or progressive deep muscle relaxation training. Results of follow-up analyses revealed statistically significant differences across treatments, tripartite response systems, and assessment phases. (Author/BL)

  16. Real-time quality assurance testing using photonic techniques: Application to iodine water system

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Hatcher, Richard; Garlington, Yadilett; Harwell, Jack; Everett, Tracey

    1990-01-01

    A feasibility study has been conducted on the use of inspection systems incorporating photonic sensors and multivariate analyses to provide an instrumentation system that assures quality in real time and keeps the process in control. A system is in control when the near future of the product quality is predictable. Off-line chemical analyses can be used for a chemical process when slow kinetics allows time to take a sample to the laboratory and the system provides a recovery mechanism that returns it to statistical control without operator intervention. The objective of this study has been the implementation of do-it-right-the-first-time and just-in-time philosophies. The Environmental Control and Life Support System (ECLSS) water reclamation system, which adds iodine for biocidal control, is an ideal candidate for the study and implementation of do-it-right-the-first-time technologies.

  17. Statistical Performances of Resistive Active Power Splitter

    NASA Astrophysics Data System (ADS)

    Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul

    2016-03-01

    In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) is proposed. It is based on an active cell composed of a field-effect transistor in cascade with a shunted resistor at the input and the output (resistive amplifier topology). The PWS uncertainty versus resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, the device gain can be controlled easily by varying a resistance. This provides a useful tool to analyse the statistical sensitivity of the system in an uncertain environment.

  18. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  19. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar

    2010-10-26

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by applying both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn on the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall patterns of Pb accumulation in various organs as influenced by Zn were the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship is assumed between two groups of variables instead of a Gaussian relationship.

  20. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    NASA Astrophysics Data System (ADS)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis

    2010-10-01

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by applying both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn on the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall patterns of Pb accumulation in various organs as influenced by Zn were the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship is assumed between two groups of variables instead of a Gaussian relationship.
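
    A brief scikit-learn sketch of the canonical correlation step applied to matrices like those described above. The data are random placeholders for the Pb and Zn measurements; only the organ layout is taken from the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    # Hypothetical data: Pb measured in 7 systems, Zn in 4, for 200 cattle.
    rng = np.random.default_rng(5)
    pb = rng.normal(size=(200, 7))  # plasma, liver, bone, muscle, kidney, blood, milk
    zn = rng.normal(size=(200, 4))  # plasma, liver, muscle, kidney

    cca = CCA(n_components=2).fit(pb, zn)
    u, v = cca.transform(pb, zn)
    # Canonical correlations: correlation between paired canonical variates.
    r = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(2)]
    print(r)
    ```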

  1. Determination of quality parameters from statistical analysis of routine TLD dosimetry data.

    PubMed

    German, U; Weinstein, M; Pelled, O

    2006-01-01

    Following the as low as reasonably achievable (ALARA) practice, there is a need to measure very low doses, of the same order of magnitude as the natural background, and the limits of detection of the dosimetry systems. The different contributions of the background signals to the total zero dose reading of thermoluminescence dosemeter (TLD) cards were analysed by using the common basic definitions of statistical indicators: the critical level (L(C)), the detection limit (L(D)) and the determination limit (L(Q)). These key statistical parameters for the system operated at NRC-Negev were quantified, based on the history of readings of the calibration cards in use. The electronic noise seems to play a minor role, but the reading of the Teflon coating (without the presence of a TLD crystal) gave a significant contribution.
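
    The three characteristic limits can be computed from repeated zero-dose readings along the lines of Currie's classic formulas. This is a sketch assuming an approximately normal background with alpha = beta = 0.05, not the exact procedure used at NRC-Negev.

    ```python
    import numpy as np

    def currie_limits(zero_dose_readings):
        """Characteristic limits from repeated zero-dose TLD readings.

        Assumes a normal background signal with known standard deviation
        and alpha = beta = 0.05 (Currie's conventions).
        """
        sigma0 = np.std(zero_dose_readings, ddof=1)
        L_C = 1.645 * sigma0   # critical level: decision threshold
        L_D = 3.29 * sigma0    # detection limit: ~2 * L_C for constant sigma
        L_Q = 10.0 * sigma0    # determination limit: ~10% relative precision
        return L_C, L_D, L_Q
    ```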

  2. Comparison of corneal endothelial image analysis by Konan SP8000 noncontact and Bio-Optics Bambi systems.

    PubMed

    Benetz, B A; Diaconu, E; Bowlin, S J; Oak, S S; Laing, R A; Lass, J H

    1999-01-01

    To compare corneal endothelial image analysis by the Konan SP8000 and Bio-Optics Bambi image-analysis systems. Corneal endothelial images from 98 individuals (191 eyes), ranging in age from 4 to 87 years, with a normal slit-lamp examination and no history of ocular trauma, intraocular surgery, or intraocular inflammation, were obtained with the Konan SP8000 noncontact specular microscope. One observer analyzed these images using the Konan system and a second observer using the Bio-Optics Bambi system. Three methods of analysis were used: a fixed-frame method to obtain cell density (for both Konan and Bio-Optics Bambi) and a "dot" (Konan) or "corners" (Bio-Optics Bambi) method to determine morphometric parameters. The cell density determined by the Konan fixed-frame method was significantly higher (157 cells/mm2) than the Bio-Optics Bambi fixed-frame determination (p<0.0001). However, the difference in cell density, although still statistically significant, was smaller and reversed when comparing the Konan fixed-frame method with both the Konan dot and Bio-Optics Bambi corners methods (-74 cells/mm2, p<0.0001; -55 cells/mm2, p<0.0001, respectively). Small but statistically significant morphometric differences between Konan and Bio-Optics Bambi were seen: cell density, +19 cells/mm2 (p = 0.03); cell area, -3.0 microm2 (p = 0.008); and coefficient of variation, +1.0 (p = 0.003). There was no statistically significant difference between the two methods in the percentage of six-sided cells detected (p = 0.55). Cell densities measured by the Konan fixed-frame method were comparable with the Konan and Bio-Optics Bambi morphometric analyses, but not with the Bio-Optics Bambi fixed-frame method. The two morphometric analyses were comparable, with minimal or no differences for the parameters studied. The Konan SP8000 endothelial image-analysis system may be useful for large-scale clinical trials determining cell loss; its noncontact design has many clinical benefits (including patient comfort, safety, ease of use, and short procedure time) and provides reliable cell-density calculations.

  3. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for the detection of abrupt changes (such as failures) in stochastic dynamical systems were surveyed. The class of linear systems was emphasized, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  4. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed

    West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  5. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    PubMed Central

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817

  6. [Database supported electronic retrospective analyses in radiation oncology: establishing a workflow using the example of pancreatic cancer].

    PubMed

    Kessel, K A; Habermehl, D; Bohn, C; Jäger, A; Floca, R O; Zhang, L; Bougatf, N; Bendl, R; Debus, J; Combs, S E

    2012-12-01

    Especially in the field of radiation oncology, efficiently handling a large variety of voluminous datasets from various information systems in different documentation styles is crucial for patient care and research. To date, conducting retrospective clinical analyses is rather difficult and time consuming. Using the example of patients with pancreatic cancer treated with radio-chemotherapy, we performed a therapy evaluation with an analysis system connected to a documentation system. A total of 783 patients were documented in a professional, database-based documentation system. Information about radiation therapy, diagnostic images and dose distributions was imported into the web-based system. For 36 patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After an automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the DVH (dose volume histogram) statistic is calculated, followed by the determination of the dose applied to the region of recurrence. All results are saved in the database and included in statistical calculations. The main goal of using an automatic analysis tool is to reduce the time and effort of conducting clinical analyses, especially with large patient groups. We have shown a first approach using some existing tools; however, manual interaction is still necessary. Further steps need to be taken to enhance automation. Already, it has become apparent that the benefits of digital data management and analysis lie in the central storage of data and the reusability of results. We therefore intend to adapt the analysis system to other types of tumors in radiation oncology.

  7. Statistical issues in quality control of proteomic analyses: good experimental design and planning.

    PubMed

    Cairns, David A

    2011-03-01

    Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
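
    As a worked example of the sample-size determination mentioned in the review, a statsmodels one-liner for a two-group comparison; the effect size, alpha, and power values are illustrative assumptions, not numbers from the 2-D DIGE pilot.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Sample size per group for a two-sample t-test, assuming the effect of
    # interest is one standard deviation (Cohen's d = 1.0); placeholder values.
    n_per_group = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.8)
    print(round(n_per_group))  # ~17 samples per group
    ```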

  8. A Space–Time Permutation Scan Statistic for Disease Outbreak Detection

    PubMed Central

    Kulldorff, Martin; Heffernan, Richard; Hartman, Jessica; Assunção, Renato; Mostashari, Farzad

    2005-01-01

    Background The ability to detect disease outbreaks early is important in order to minimize morbidity and mortality through timely implementation of disease prevention and control measures. Many national, state, and local health departments are launching disease surveillance systems with daily analyses of hospital emergency department visits, ambulance dispatch calls, or pharmacy sales for which population-at-risk information is unavailable or irrelevant. Methods and Findings We propose a prospective space–time permutation scan statistic for the early detection of disease outbreaks that uses only case numbers, with no need for population-at-risk data. It makes minimal assumptions about the time, geographical location, or size of the outbreak, and it adjusts for natural purely spatial and purely temporal variation. The new method was evaluated using daily analyses of hospital emergency department visits in New York City. Four of the five strongest signals were likely local precursors to citywide outbreaks due to rotavirus, norovirus, and influenza. The number of false signals was at most modest. Conclusion If such results hold up over longer study times and in other locations, the space–time permutation scan statistic will be an important tool for local and national health departments that are setting up early disease detection surveillance systems. PMID:15719066
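
    A compact sketch of the cylinder evaluation at the heart of the method: the expected count and the log generalized likelihood ratio, following the structure described in the abstract. Variable names are hypothetical.

    ```python
    import numpy as np

    def log_glr(c, n_zone, n_window, n_total):
        """Log generalized likelihood ratio for one space-time cylinder.

        c        : observed cases inside the cylinder
        n_zone   : cases in the cylinder's zone over the whole study period
        n_window : cases in the time window over the whole region
        n_total  : total number of cases
        """
        mu = n_zone * n_window / n_total  # expected count under the null
        if c <= mu:
            return 0.0
        return c * np.log(c / mu) + (n_total - c) * np.log((n_total - c) / (n_total - mu))

    # Significance by Monte Carlo: permute the case dates, recompute the
    # maximum log-GLR over all cylinders, and rank the observed maximum
    # within that permutation distribution.
    ```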

  9. Operational seasonal and interannual predictions of ocean conditions

    NASA Technical Reports Server (NTRS)

    Leetmaa, Ants

    1992-01-01

    Dr. Leetmaa described current work at the U.S. National Meteorological Center (NMC) on coupled systems leading to a seasonal prediction system. He described the way in which ocean thermal data are quality controlled and used in a four-dimensional data assimilation system. This consists of a statistical interpolation scheme, a primitive equation ocean general circulation model, and the atmospheric fluxes required to force it. This whole process generates dynamically consistent thermohaline and velocity fields for the ocean. Currently, routine weekly analyses are performed for the Atlantic and Pacific oceans. These analyses are used for ocean climate diagnostics and as initial conditions for coupled forecast models. Specific examples of output products were shown for both the Pacific and the Atlantic Ocean.

  10. Modeling Human-Computer Decision Making with Covariance Structure Analysis.

    ERIC Educational Resources Information Center

    Coovert, Michael D.; And Others

    Arguing that sufficient theory exists about the interplay between human information processing, computer systems, and the demands of various tasks to construct useful theories of human-computer interaction, this study presents a structural model of human-computer interaction and reports the results of various statistical analyses of this model.…

  11. Juvenile Offenders and Victims: 2006 National Report

    ERIC Educational Resources Information Center

    Snyder, Howard N.; Sickmund, Melissa

    2006-01-01

    This report presents comprehensive information on juvenile crime, violence, and victimization and on the juvenile justice system. This report brings together the latest available statistics from a variety of sources and includes numerous tables, graphs, and maps, accompanied by analyses in clear, nontechnical language. The report offers Congress,…

  12. A Prototype Regional GSI-based EnKF-Variational Hybrid Data Assimilation System for the Rapid Refresh Forecasting System: Dual-Resolution Implementation and Testing Results

    NASA Astrophysics Data System (ADS)

    Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun

    2018-05-01

    A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.

  13. Comparative statistical component analysis of transgenic, cyanophycin-producing potatoes in greenhouse and field trials.

    PubMed

    Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge

    2017-08-01

    Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences from control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives results similar to those of the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.
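
    A statsmodels sketch of a general linear model of the kind described, relating a key component to line and environmental factors. The column names and synthetic data are placeholders, not the trial data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical tidy trial table, one row per plot (placeholder data only).
    rng = np.random.default_rng(7)
    n = 120
    df = pd.DataFrame({
        "line": rng.choice(["event_A", "event_B", "control"], n),
        "year": rng.choice([2013, 2014, 2015], n),
        "site": rng.choice(["greenhouse", "field"], n),
        "nitrogen": rng.choice([60, 120], n),
        "starch": rng.normal(15, 2, n),
    })

    # General linear model of a component versus line and environment:
    fit = smf.ols("starch ~ C(line) + C(year) + C(site) + nitrogen", data=df).fit()
    print(fit.summary())
    # Equivalence of non-target traits can then be screened with standardized
    # effect sizes between each transgenic line and its control.
    ```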

  14. Implementation of novel statistical procedures and other advanced approaches to improve analysis of CASA data.

    PubMed

    Ramón, M; Martínez-Pastor, F

    2018-04-23

    Computer-aided sperm analysis (CASA) produces a wealth of data that is frequently ignored. The use of multiparametric statistical methods can help explore these datasets, unveiling the subpopulation structure of sperm samples. In this review we analyse the internal heterogeneity of sperm samples and its relevance. We also provide a brief description of the statistical tools used for extracting sperm subpopulations from the datasets, namely unsupervised clustering (with non-hierarchical, hierarchical and two-step methods) and the most advanced supervised methods based on machine learning. The former has allowed the exploration of subpopulation patterns in many species, whereas the latter offers further possibilities, especially for functional studies and the practical use of subpopulation analysis. We also consider novel approaches, such as the use of geometric morphometrics or imaging flow cytometry. Finally, although the data provided by CASA systems yield valuable information on sperm samples when clustering analyses are applied, there are several caveats. Protocols for capturing and analysing motility or morphometry should be standardised and adapted to each experiment, and the algorithms should be open in order to allow comparison of results between laboratories. Moreover, we must be aware of new technology that could change the paradigm for studying sperm motility and morphology.
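
    A small scikit-learn sketch of a two-step subpopulation analysis in the spirit of the review: k-means compression followed by hierarchical clustering of the centroids. The kinematic variables, placeholder data, and cluster counts are assumptions.

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import AgglomerativeClustering, KMeans

    # `kinematics` is a hypothetical (n_cells x n_params) matrix of CASA
    # descriptors per tracked cell, e.g. VCL, VSL, VAP, LIN, ALH, BCF.
    rng = np.random.default_rng(1)
    kinematics = rng.normal(size=(5000, 6))  # placeholder data

    X = StandardScaler().fit_transform(kinematics)

    # Two-step idea: compress with k-means, then cluster the centroids
    # hierarchically to decide the number of subpopulations.
    pre = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
    subpops = AgglomerativeClustering(n_clusters=4).fit_predict(pre.cluster_centers_)
    # Map each cell to a subpopulation via its k-means centroid:
    cell_subpop = subpops[pre.labels_]
    ```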

  15. Statistical Characterization of School Bus Drive Cycles Collected via Onboard Logging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, A.; Walkowicz, K.

    In an effort to characterize the dynamics typical of school bus operation, National Renewable Energy Laboratory (NREL) researchers set out to gather in-use duty cycle data from school bus fleets operating across the country. Employing a combination of Isaac Instruments GPS/CAN data loggers in conjunction with existing onboard telemetric systems resulted in the capture of operating information for more than 200 individual vehicles in three geographically unique domestic locations. In total, over 1,500 individual operational route shifts from Washington, New York, and Colorado were collected. Upon completing the collection of in-use field data using either NREL-installed data acquisition devices or existing onboard telemetry systems, large-scale duty-cycle statistical analyses were performed to examine underlying vehicle dynamics trends within the data and to explore vehicle operation variations between fleet locations. Based on the results of these analyses, high, low, and average vehicle dynamics requirements were determined, resulting in the selection of representative standard chassis dynamometer test cycles for each condition. In this paper, the methodology and accompanying results of the large-scale duty-cycle statistical analysis are presented, including graphical and tabular representations of a number of relationships between key duty-cycle metrics observed within the larger data set. In addition to presenting the results of this analysis, conclusions are drawn and presented regarding potential applications of advanced vehicle technology as it relates specifically to school buses.

  16. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  17. On Improving the Quality and Interpretation of Environmental Assessments using Statistical Analysis and Geographic Information Systems

    NASA Astrophysics Data System (ADS)

    Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.

    2014-12-01

    An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.

  18. Surgical adverse outcome reporting as part of routine clinical care.

    PubMed

    Kievit, J; Krukerink, M; Marang-van de Mheen, P J

    2010-12-01

    In The Netherlands, health professionals have created a doctor-driven standardised system to report and analyse adverse outcomes (AO). The aim is to improve healthcare by learning from past experiences. The key elements of this system are (1) an unequivocal definition of an adverse outcome, (2) appropriate contextual information and (3) a three-dimensional hierarchical classification system. First, to assess whether routine doctor-driven AO reporting is feasible. Second, to investigate how doctors can learn from AO reporting and analysis to improve the quality of care. Feasibility was assessed by how well doctors reported AO in the surgical department of a Dutch university hospital over a period of 9 years. AO incidence was analysed per patient subgroup and over time, in a time-trend analysis of three equal 3-year periods. AO were analysed case by case and statistically, to learn lessons from past events. In 19,907 surgical admissions, 9189 AOs were reported: one or more AO in 18.2% of admissions. On average, 55 lessons were learnt each year (in 4.3% of AO). More AO were reported in P3 than P1 (OR 1.39 (1.23-1.57)). Although minor AO increased, fatal AO decreased over time (OR 0.59 (0.45-0.77)). Doctor-driven AO reporting is shown to be feasible. Lessons can be learnt from case-by-case analyses of individual AO, as well as by statistical analysis of AO groups and subgroups (illustrated by time-trend analysis), thus contributing to the improvement of the quality of care. Moreover, by standardising AO reporting, data can be compared across departments or hospitals, to generate (confidential) mirror information for professionals cooperating in a peer-review setting.
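
    For readers wanting to reproduce the style of the time-trend statistics, a sketch of an odds ratio with a Wald confidence interval. The 2x2 counts below are hypothetical and merely chosen to land near the reported OR of 1.39; the paper's underlying counts are not given in the abstract.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a, b = admissions with / without AO in period P3
        c, d = admissions with / without AO in period P1
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts only:
    print(odds_ratio_ci(1300, 5000, 950, 5050))
    ```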

  19. Surgical adverse outcome reporting as part of routine clinical care

    PubMed Central

    Krukerink, M; Marang-van de Mheen, P J

    2010-01-01

    Background In The Netherlands, health professionals have created a doctor-driven standardised system to report and analyse adverse outcomes (AO). The aim is to improve healthcare by learning from past experiences. The key elements of this system are (1) an unequivocal definition of an adverse outcome, (2) appropriate contextual information and (3) a three-dimensional hierarchical classification system. Objectives First, to assess whether routine doctor-driven AO reporting is feasible. Second, to investigate how doctors can learn from AO reporting and analysis to improve the quality of care. Methods Feasibility was assessed by how well doctors reported AO in the surgical department of a Dutch university hospital over a period of 9 years. AO incidence was analysed per patient subgroup and over time, in a time-trend analysis of three equal 3-year periods. AO were analysed case by case and statistically, to learn lessons from past events. Results In 19 907 surgical admissions, 9189 AOs were reported: one or more AO in 18.2% of admissions. On average, 55 lessons were learnt each year (in 4.3% of AO). More AO were reported in P3 than P1 (OR 1.39 (1.23–1.57)). Although minor AO increased, fatal AO decreased over time (OR 0.59 (0.45–0.77)). Conclusions Doctor-driven AO reporting is shown to be feasible. Lessons can be learnt from case-by-case analyses of individual AO, as well as by statistical analysis of AO groups and subgroups (illustrated by time-trend analysis), thus contributing to the improvement of the quality of care. Moreover, by standardising AO reporting, data can be compared across departments or hospitals, to generate (confidential) mirror information for professionals cooperating in a peer-review setting. PMID:20430928

  20. Assessment and statistics of surgically induced astigmatism.

    PubMed

    Naeser, Kristian

    2008-05-01

    The aim of the thesis was to develop methods for assessment of surgically induced astigmatism (SIA) in individual eyes, and in groups of eyes. The thesis is based on 12 peer-reviewed publications, published over a period of 16 years. In these publications older and contemporary literature was reviewed(1). A new method (the polar system) for analysis of SIA was developed. Multivariate statistical analysis of refractive data was described(2-4). Clinical validation studies were performed. Descriptions of a cylinder surface using polar values and using differential geometry were compared. The main results were: refractive data in the form of sphere, cylinder and axis may define an individual patient or data set, but are unsuited for mathematical and statistical analyses(1). The polar value system converts net astigmatisms to orthonormal components in dioptric space. A polar value is the difference in meridional power between two orthogonal meridians(5,6). Any pair of polar values, separated by an arch of 45 degrees, characterizes a net astigmatism completely(7). The two polar values represent the net curvital and net torsional power over the chosen meridian(8). The spherical component is described by the spherical equivalent power. Several clinical studies demonstrated the efficiency of multivariate statistical analysis of refractive data(4,9-11). Polar values and formal differential geometry describe astigmatic surfaces with similar concepts and mathematical functions(8). Other contemporary methods, such as Long's power matrix, Holladay's and Alpins' methods, Zernike(12) and Fourier analyses(8), are correlated to the polar value system. In conclusion, analysis of SIA should be performed with polar values or other contemporary component systems. The study was supported by Statens Sundhedsvidenskabeligt Forskningsråd, Cykelhandler P. Th. Rasmussen og Hustrus Mindelegat, Hotelejer Carl Larsen og Hustru Nicoline Larsens Mindelegat, Landsforeningen til Vaern om Synet, Forskningsinitiativet for Arhus Amt, Alcon Denmark, and Desirée and Niels Ydes Fond.
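
    A sketch of the polar-value idea under one common sign convention (the exact convention in the thesis may differ): the pair of polar values 45 degrees apart fully characterizes a net astigmatism, which makes component-wise averaging and testing of SIA across a group of eyes possible.

    ```python
    import math

    def polar_values(cyl, axis_deg, phi_deg):
        """Polar values of a net astigmatism over the meridian phi.

        Illustrative convention only: the polar value is the difference in
        meridional power between two orthogonal meridians, and the component
        45 degrees away completes the description.
        """
        delta = math.radians(axis_deg - phi_deg)
        akp = cyl * math.cos(2 * delta)    # net curvital component over phi
        akp45 = cyl * math.sin(2 * delta)  # net torsional component
        return akp, akp45

    # SIA for one eye can then be analysed as the component-wise difference
    # between postoperative and preoperative polar values.
    ```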

  1. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
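
    A minimal NumPy sketch of first-order moment propagation as described, using central finite differences in place of the code's analytic sensitivity derivatives; the example function, inputs, and step size are assumptions.

    ```python
    import numpy as np

    def first_order_moments(f, mu, sigma, h=1e-6):
        """First-order approximate mean and variance of f(x) for independent,
        normally distributed inputs with means `mu` and std devs `sigma`."""
        mu = np.asarray(mu, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        grad = np.empty_like(mu)
        for i in range(mu.size):
            e = np.zeros_like(mu)
            e[i] = h
            grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)  # sensitivity derivative
        mean = f(mu)                       # first-order mean estimate
        var = np.sum((grad * sigma) ** 2)  # first-order variance estimate
        return mean, var

    # Toy check against Monte Carlo, in the same spirit as the paper:
    f = lambda x: x[0] ** 2 + np.sin(x[1])
    print(first_order_moments(f, [1.0, 0.5], [0.1, 0.05]))
    ```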

  2. Space shuttle solid rocket booster recovery system definition, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
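
    A toy NumPy version of the Monte Carlo failure-probability idea described above; the impact-velocity and strength distributions are placeholder normals, not the study's actual probability distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    # Hypothetical distributions: water-impact velocity (m/s) and component
    # strength capability expressed as the impact velocity it can survive.
    impact_velocity = rng.normal(25.0, 3.0, N)       # placeholder parameters
    strength_capability = rng.normal(30.0, 2.5, N)   # placeholder parameters

    # Component fails when the sampled impact exceeds its sampled strength.
    failure_probability = np.mean(impact_velocity > strength_capability)
    print(f"estimated P(failure) = {failure_probability:.4f}")
    ```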

  3. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

    observable to military law enforcement personnel. Statistical analyses tested for differences in warning signs between cases of suicide, violence, or… indicators, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for…

  4. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Since only a single regression model has to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, using a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach applied here statistically combines the in-house developed high resolution analysis and ensemble prediction system. The station-based validation of 6 hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
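
    A short sketch of the SAMOS idea under stated assumptions: transform fields to standardized anomalies, then fit a single regression for the whole domain. The training vectors here are synthetic stand-ins for the INCA analyses and ALADIN-LAEF ensemble means.

    ```python
    import numpy as np
    from scipy import stats

    def standardized_anomaly(field, clim_mean, clim_sd):
        """Transform a forecast or analysis field into standardized anomalies."""
        return (field - clim_mean) / clim_sd

    # One regression valid for the whole domain: regress observed anomalies on
    # the ensemble-mean anomaly, pooling all grid points and dates together.
    # Synthetic placeholder training vectors:
    rng = np.random.default_rng(3)
    ensmean_anom = rng.normal(size=10_000)
    obs_anom = 0.8 * ensmean_anom + rng.normal(scale=0.5, size=10_000)

    slope, intercept, r, p, se = stats.linregress(ensmean_anom, obs_anom)
    # Calibrated forecasts follow by applying the fitted regression to new
    # anomalies and back-transforming with the site climatology.
    ```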

  5. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  6. Performing statistical analyses on quantitative data in Taverna workflows: an example using R and maxdBrowse to identify differentially-expressed genes from microarray data.

    PubMed

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-08-07

    There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Taverna can be used by data analysis experts as a generic tool for composing ad hoc analyses of quantitative data by combining the use of scripts written in the R programming language with tools exposed as services in workflows. When these workflows are shared with colleagues and the wider scientific community, they provide an approach for other scientists wanting to use tools such as R without having to learn the corresponding programming language to analyse their own data.

  8. Krypton and xenon in lunar fines

    NASA Technical Reports Server (NTRS)

    Basford, J. R.; Dragon, J. C.; Pepin, R. O.; Coscio, M. R., Jr.; Murthy, V. R.

    1973-01-01

    Data from grain-size separates, stepwise-heated fractions, and bulk analyses of 20 samples of fines and breccias from five lunar sites are used to define three-isotope and ordinate-intercept correlations in an attempt to resolve the lunar heavy rare gas system using a statistically valid approach. Tables of concentrations and isotope compositions are given.

  9. The Development of a Decision Support System for Mobile Learning: A Case Study in Taiwan

    ERIC Educational Resources Information Center

    Chiu, Po-Sheng; Huang, Yueh-Min

    2016-01-01

    While mobile learning (m-learning) has considerable potential, most previous strategies for developing this new approach to education were analysed using the knowledge, experience and judgement of individuals, with the support of statistical software. Although these methods provide systematic steps for the implementation of m-learning…

  10. Adaptive variation in Pinus ponderosa from Intermountain regions. II. Middle Columbia River system

    Treesearch

    Gerald Rehfeldt

    1986-01-01

    Seedling populations were grown and compared in common environments. Statistical analyses detected genetic differences between populations for numerous traits reflecting growth potential and periodicity of shoot elongation. Multiple regression models described an adaptive landscape in which populations from low elevations have a high growth potential while those from...

  11. A preliminary study of the statistical analyses and sampling strategies associated with the integration of remote sensing capabilities into the current agricultural crop forecasting system

    NASA Technical Reports Server (NTRS)

    Sand, F.; Christie, R.

    1975-01-01

    Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.

  12. Moisture Forecast Bias Correction in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D.

    1999-01-01

    Data assimilation methods rely on numerous assumptions about the errors involved in measuring and forecasting atmospheric fields. One of the more disturbing of these is that short-term model forecasts are assumed to be unbiased. In the case of atmospheric moisture, for example, observational evidence shows that the systematic component of errors in forecasts and analyses is often of the same order of magnitude as the random component. We have implemented a sequential algorithm for estimating forecast moisture bias from rawinsonde data in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The algorithm is designed to remove the systematic component of analysis errors and can be easily incorporated in an existing statistical data assimilation system. We will present results of initial experiments that show a significant reduction of bias in the GEOS DAS moisture analyses.
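
    The abstract does not give the update equations, but a sequential bias estimator of the kind described can be sketched as an exponentially weighted running mean of observed-minus-forecast departures; the gain and the innovation series below are illustrative assumptions, not GEOS DAS values:

        # Sequential bias estimate updated from innovations (observation - forecast)
        update_bias <- function(b, innovation, gamma = 0.1) {
          b + gamma * (innovation - b)   # move a fraction gamma toward the new departure
        }
        b <- 0
        innovations <- c(0.8, 1.1, 0.9, 1.3, 1.0)   # hypothetical moisture departures (g/kg)
        for (d in innovations) b <- update_bias(b, d)
        b   # running estimate of the systematic (bias) component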

  13. Seminar presentation on the economic evaluation of the space shuttle system

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The proceedings of a seminar on the economic aspects of the space shuttle system are presented. Emphasis was placed on the problems of economic analysis of large-scale public investments, the state of the art of cost estimation, the statistical data base for estimating costs of new technological systems, and the role of the main economic parameters affecting the results of the analyses. The system components of a space program and the present choices of launch vehicles, spacecraft, and instruments were also explained.

  14. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activity using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noise and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signals, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and the statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
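
    As an illustration of the GLM-style inference the review covers, a minimal sketch fits a single simulated fNIRS channel to a task regressor built by convolving a stimulus boxcar with a double-gamma haemodynamic response function (everything here is simulated; this is not the SPM implementation):

        t   <- seq(0, 30, by = 0.1)                                  # seconds
        hrf <- dgamma(t, shape = 6) - 0.35 * dgamma(t, shape = 16)   # double-gamma HRF
        n   <- 3000                                                  # 300 s at 10 Hz
        box <- rep(rep(c(0, 1), each = 300), length.out = n)         # 30 s off/on blocks
        reg <- convolve(box, rev(hrf), type = "open")[1:n]           # task regressor
        y   <- 0.5 * reg + arima.sim(list(ar = 0.8), n)              # signal + serially correlated noise
        summary(lm(y ~ reg))   # t-test on the task effect (ignoring the AR(1) noise,
                               # which is exactly the issue the review discusses)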

  15. [Health Statistics in Senegal: between political stakes and role playing].

    PubMed

    Hane, Fatoumata

    2017-01-01

    Many countries have developed disease surveillance systems to deal with epidemics, but although health information systems have existed for more than two decades, constraints and biases in data collection limit the relevance of policy decisions and strategies in the field of health, as priority has been given to education and health in developing countries. Donor support has led to the development of systems for the production of statistics, designed, among other things, to more clearly target interventions in terms of educational objectives, action and credibility, and to enable health systems to continue to benefit from external funding. We used a classical anthropology approach based on observations and in-depth interviews with local and national health system actors. The aim of this article is to analyse the real effects of the production of health statistics in health care systems and to determine the relevance of these figures in the context in which they apply. Health priorities defined by international organizations and technical and financial partners focus on diseases considered to be "priorities" to the detriment of neglected diseases, which are perceived as being more important at the local level due to their impact on the already limited health systems. We describe how health actors within healthcare structures adjust and adapt to public health requirements.

  16. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the inter-eye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the inter-eye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
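
    To make the critique concrete: the inter-eye correlation can be respected by treating the patient as a random effect in a mixed-effects model. A minimal sketch with the R lme4 package (the outcome, variable names and data are hypothetical):

        library(lme4)
        # two eyes per patient: a random intercept absorbs the inter-eye correlation
        set.seed(1)
        d <- data.frame(patient = factor(rep(1:50, each = 2)),
                        treated = rep(c(0, 1), 50))                     # eye-level factor
        d$iop <- 16 + 1.5 * d$treated +
                 rep(rnorm(50, sd = 2), each = 2) + rnorm(100)          # shared patient effect
        m <- lmer(iop ~ treated + (1 | patient), data = d)
        summary(m)   # compare with a naive lm(iop ~ treated), which ignores the pairing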

  17. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…
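
    As a concrete instance of the kind of simple, common analysis the article refers to, a chi-squared test of independence takes a few lines in R (the counts are hypothetical):

        counts <- matrix(c(30, 10, 20, 25), nrow = 2,
                         dimnames = list(program = c("attended", "not attended"),
                                         adopted = c("yes", "no")))
        chisq.test(counts)   # test of independence between program attendance and adoption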

  18. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
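
    The load-capacity comparison and its frequentist confidence interval can be illustrated directly. A minimal Monte Carlo sketch in R, with arbitrary lognormal load and normal capacity distributions standing in for simulation output (binom.test supplies the exact Clopper-Pearson interval):

        set.seed(42)
        n        <- 10000                                     # number of simulations
        load     <- rlnorm(n, meanlog = 6.0, sdlog = 0.10)    # e.g., peak temperature
        capacity <- rnorm(n, mean = 500, sd = 20)             # failure limit
        failures <- sum(load >= capacity)
        failures / n                              # point estimate of failure probability
        binom.test(failures, n)$conf.int          # exact 95% Clopper-Pearson interval;
                                                  # it narrows as n grows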

  19. Scheduler software for tracking and data relay satellite system loading analysis: User manual and programmer guide

    NASA Technical Reports Server (NTRS)

    Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.

    1980-01-01

    A user guide and programmer documentation are provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses on the Tracking and Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real-time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss-of-sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.

  20. Estimating mortality using data from civil registration: a cross-sectional study in India

    PubMed Central

    Rao, Chalapati; Lakshmi, PVM; Prinja, Shankar; Kumar, Rajesh

    2016-01-01

    Objective: To analyse the design and operational status of India’s civil registration and vital statistics system and facilitate the system’s development into an accurate and reliable source of mortality data. Methods: We assessed the national civil registration and vital statistics system’s legal framework, administrative structure and design through document review. We did a cross-sectional study for the year 2013 at national level and in Punjab state to assess the quality of the system’s mortality data through analyses of life tables and investigation of the completeness of death registration and the proportion of deaths assigned ill-defined causes. We interviewed registrars, medical officers and coders in Punjab state to assess their knowledge and practice. Findings: Although we found the legal framework and system design to be appropriate, data collection was based on complex intersectoral collaborations at state and local level and the collected data were found to be of poor quality. The registration data were inadequate for a robust estimate of mortality at national level. A medically certified cause of death was only recorded for 965 992 (16.8%) of the 5 735 082 deaths registered. Conclusion: The data recorded by India’s civil registration and vital statistics system in 2013 were incomplete. If improved, the system could be used to reliably estimate mortality. We recommend improving political support and intersectoral coordination, capacity building, computerization and state-level initiatives to ensure that every death is registered and that reliable causes of death are recorded – at least within an adequate sample of registration units within each state. PMID:26769992

  1. A review of approaches to identifying patient phenotype cohorts using electronic health records

    PubMed Central

    Shivade, Chaitanya; Raghavan, Preethi; Fosler-Lussier, Eric; Embi, Peter J; Elhadad, Noemie; Johnson, Stephen B; Lai, Albert M

    2014-01-01

    Objective: To summarize literature describing approaches aimed at automatically identifying patients with a common phenotype. Materials and methods: We performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association, (2) Journal of Biomedical Informatics, (3) Proceedings of the Annual American Medical Informatics Association Symposium, and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included. Results: Ninety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients. Discussion: We observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems. Conclusions: There are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses. PMID:24201027

  2. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics, so the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, to control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated-measures data, in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
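
    A sketch of the remedy the authors describe: a mixed-effects model with a per-host random effect and an AR(1) residual structure, here with the R nlme package on simulated parasitaemia series (the data and model formula are illustrative, not the paper's analysis):

        library(nlme)
        set.seed(4)
        # hypothetical parasitaemia time series: 10 hosts measured on 8 days
        d <- data.frame(host = factor(rep(1:10, each = 8)), day = rep(1:8, 10))
        d$parasitaemia <- 5 + 0.4 * d$day + rep(rnorm(10), each = 8) +
                          as.numeric(replicate(10, arima.sim(list(ar = 0.5), 8)))
        # random intercept per host plus AR(1) correlation within host
        m <- lme(parasitaemia ~ day, random = ~ 1 | host,
                 correlation = corAR1(form = ~ day | host), data = d)
        summary(m)   # compare with lm(parasitaemia ~ day), which assumes independence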

  3. Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment

    NASA Technical Reports Server (NTRS)

    Frische, F.; Osterloh, J.-P.; Luedtke, A.

    2011-01-01

    This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to typical results of comparable analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. We then analysed the pilot model's visual performance; a comparison with human pilots' visual performance revealed important potential for improvement.

  4. The response of numerical weather prediction analysis systems to FGGE 2b data

    NASA Technical Reports Server (NTRS)

    Hollingsworth, A.; Lorenc, A.; Tracton, S.; Arpe, K.; Cats, G.; Uppala, S.; Kallberg, P.

    1985-01-01

    An intercomparison of analyses of the main FGGE Level IIb data set produced by three advanced analysis systems is presented. The aims of the work are to estimate the extent and magnitude of the differences between the analyses, to identify the reasons for the differences, and finally to estimate the significance of the differences. Extratropical analyses only are considered. Objective evaluations of analysis quality, such as fit to observations, statistics of analysis differences, and mean fields are discussed. In addition, substantial emphasis is placed on subjective evaluation of a series of case studies that were selected to illustrate the importance of different aspects of the analysis procedures, such as quality control, data selection, resolution, dynamical balance, and the role of the assimilating forecast model. In some cases, the forecast models are used as selective amplifiers of analysis differences to assist in deciding which analysis was more nearly correct in the treatment of particular data.

  5. Regional variation in the severity of pesticide exposure outcomes: applications of geographic information systems and spatial scan statistics.

    PubMed

    Sudakin, Daniel L; Power, Laura E

    2009-03-01

    Geographic information systems and spatial scan statistics have been utilized to assess regional clustering of symptomatic pesticide exposures reported to a state Poison Control Center (PCC) during a single year. In the present study, we analyzed five subsequent years of PCC data to test whether there are significant geographic differences in pesticide exposure incidents resulting in serious (moderate, major, and fatal) medical outcomes. A PCC provided the data on unintentional pesticide exposures for the time period 2001-2005. The geographic location of the caller, the location where the exposure occurred, the exposure route, and the medical outcome were abstracted. There were 273 incidents resulting in moderate effects (n = 261), major effects (n = 10), or fatalities (n = 2). Spatial scan statistics identified a geographic area consisting of two adjacent counties (one urban, one rural), where statistically significant clustering of serious outcomes was observed. The relative risk of moderate, major, and fatal outcomes was 2.0 in this spatial cluster (p = 0.0005). PCC data, geographic information systems, and spatial scan statistics can identify clustering of serious outcomes from human exposure to pesticides. These analyses may be useful for public health officials to target preventive interventions. Further investigation is warranted to understand better the potential explanations for geographical clustering, and to assess whether preventive interventions have an impact on reducing pesticide exposure incidents resulting in serious medical outcomes.
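
    Scan statistics of the kind applied here evaluate a Poisson likelihood ratio over candidate circular zones and judge significance by Monte Carlo replication. A compact sketch of that core computation, with hypothetical region centroids, populations and case counts (a simplified illustration, not the software used in the study):

        set.seed(1)
        k     <- 30                                 # regions
        xy    <- matrix(runif(2 * k), ncol = 2)     # region centroids
        pop   <- rpois(k, 1000)                     # populations
        cases <- rpois(k, pop * 0.01)               # observed serious outcomes
        llr <- function(cz, ez, C) {                # Poisson log-likelihood ratio for a zone
          if (cz <= ez) return(0)
          if (cz == C)  return(cz * log(cz / ez))
          cz * log(cz / ez) + (C - cz) * log((C - cz) / (C - ez))
        }
        scan_stat <- function(cases, pop, xy) {
          C <- sum(cases); E <- C * pop / sum(pop)  # expected counts under the null
          D <- as.matrix(dist(xy)); best <- 0
          for (i in 1:nrow(xy)) {                   # grow a circle around each centroid
            ord <- order(D[i, ])
            for (m in 1:(length(ord) %/% 2)) {      # zones up to half the regions
              z <- ord[1:m]
              best <- max(best, llr(sum(cases[z]), sum(E[z]), C))
            }
          }
          best
        }
        obs  <- scan_stat(cases, pop, xy)
        sims <- replicate(999, scan_stat(rmultinom(1, sum(cases), pop)[, 1], pop, xy))
        mean(c(obs, sims) >= obs)                   # Monte Carlo p-value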

  6. Integrating community-based verbal autopsy into civil registration and vital statistics (CRVS): system-level considerations

    PubMed Central

    de Savigny, Don; Riley, Ian; Chandramohan, Daniel; Odhiambo, Frank; Nichols, Erin; Notzon, Sam; AbouZahr, Carla; Mitra, Raj; Cobos Muñoz, Daniel; Firth, Sonja; Maire, Nicolas; Sankoh, Osman; Bronson, Gay; Setel, Philip; Byass, Peter; Jakob, Robert; Boerma, Ties; Lopez, Alan D.

    2017-01-01

    Background: Reliable and representative cause of death (COD) statistics are essential to inform public health policy, respond to emerging health needs, and document progress towards Sustainable Development Goals. However, less than one-third of deaths worldwide are assigned a cause. Civil registration and vital statistics (CRVS) systems in low- and lower-middle-income countries are failing to provide timely, complete and accurate vital statistics, and it will still be some time before they can provide physician-certified COD for every death. Proposals: Verbal autopsy (VA) is a method to ascertain the probable COD and, although imperfect, it is the best alternative in the absence of medical certification. There is extensive experience with VA in research settings but only a few examples of its use on a large scale. Data collection using electronic questionnaires on mobile devices and computer algorithms to analyse responses and estimate probable COD have increased the potential for VA to be routinely applied in CRVS systems. However, a number of CRVS and health system integration issues should be considered in planning, piloting and implementing a system-wide intervention such as VA. These include addressing the multiplicity of stakeholders and sub-systems involved, integration with existing CRVS work processes and information flows, linking VA results to civil registration records, information technology requirements and data quality assurance. Conclusions: Integrating VA within CRVS systems is not simply a technical undertaking. It will have profound system-wide effects that should be carefully considered when planning for an effective implementation. This paper identifies and discusses the major system-level issues and emerging practices, provides a planning checklist of system-level considerations and proposes an overview for how VA can be integrated into routine CRVS systems. PMID:28137194

  7. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
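
    As an illustration of the maximum-likelihood and confidence-interval machinery mentioned, a Weibull failure distribution can be fitted to a batch of failure times in a few lines with MASS::fitdistr (the failure times are simulated, not Air Force or Navy data):

        library(MASS)
        set.seed(7)
        hours <- rweibull(60, shape = 1.4, scale = 900)   # hypothetical times to failure
        fit <- fitdistr(hours, "weibull")
        fit                                               # MLEs with standard errors
        cbind(lower = fit$estimate - 1.96 * fit$sd,       # rough 95% Wald intervals
              upper = fit$estimate + 1.96 * fit$sd)       # for shape and scale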

  8. Geospatial Characterization of Fluvial Wood Arrangement in a Semi-confined Alluvial River

    NASA Astrophysics Data System (ADS)

    Martin, D. J.; Harden, C. P.; Pavlowsky, R. T.

    2014-12-01

    Large woody debris (LWD) has become universally recognized as an integral component of fluvial systems, and as a result, has become increasingly common as a river restoration tool. However, "natural" processes of wood recruitment and the subsequent arrangement of LWD within the river network are poorly understood. This research used a suite of spatial statistics to investigate longitudinal arrangement patterns of LWD in a low-gradient, Midwestern river. First, a large-scale GPS inventory of LWD, performed on the Big River in the eastern Missouri Ozarks, resulted in over 4,000 logged positions of LWD along seven river segments that covered nearly 100 km of the 237 km river system. A global Moran's I analysis indicates that LWD density is spatially autocorrelated and displays a clustering tendency within all seven river segments (P-value range = 0.000 to 0.054). A local Moran's I analysis identified specific locations along the segments where clustering occurs and revealed that, on average, clusters of LWD density (high or low) spanned 400 m. Spectral analyses revealed that, in some segments, LWD density is spatially periodic. Two segments displayed strong periodicity, while the remaining segments displayed varying degrees of noisiness. Periodicity showed a positive association with gravel bar spacing and meander wavelength, although there were insufficient data to statistically confirm the relationship. A wavelet analysis was then performed to investigate periodicity relative to location along the segment. The wavelet analysis identified significant (α = 0.05) periodicity at discrete locations along each of the segments. Those reaches yielding strong periodicity showed stronger relationships between LWD density and the geomorphic/riparian independent variables tested. Analyses consistently identified valley width and sinuosity as being associated with LWD density. The results of these analyses contribute a new perspective on the longitudinal distribution of LWD in a river system, which should help identify physical and/or riparian control mechanisms of LWD arrangement and support the development of models of LWD arrangement. Additionally, the spatial statistical tools presented here have been shown to be valuable for identifying longitudinal patterns in river system components.
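
    The global Moran's I test reported here can be reproduced with the R spdep package; a minimal sketch on simulated wood densities at illustrative reach midpoints (the coordinates, neighbour definition and data are assumptions, not the study's):

        library(spdep)
        set.seed(3)
        xy  <- cbind(x = 1:100 * 40, y = runif(100, 0, 5))  # reach midpoints (~40 m apart)
        lwd <- as.numeric(arima.sim(list(ar = 0.6), 100))   # autocorrelated LWD density
        nb  <- knn2nb(knearneigh(xy, k = 4))                # 4 nearest neighbours
        lw  <- nb2listw(nb, style = "W")                    # row-standardised weights
        moran.test(lwd, lw)                                 # global Moran's I
        # localmoran(lwd, lw) gives the local version used to locate clusters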

  9. Language Policy in British Colonial Education: Evidence from Nineteenth-Century Hong Kong

    ERIC Educational Resources Information Center

    Evans, Stephen

    2006-01-01

    This article examines the evolution of language-in-education policy in Hong Kong during the first six decades of British rule (1842-1902). In particular, it analyses the changing roles and status of the English and Chinese languages during this formative period in the development of the colony's education system. The textual and statistical data…

  10. Ontario Universities Statistical Compendium, 1970-71 to 1978-79. Part A, Macro-Indicators.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    Macro-indicators on the conditions of Ontario universities and supporting data that might be used to generate such indicators were developed, and analyses of both indicators and data were undertaken. Overall objectives were as follows: (1) to measure the real resources available to the Ontario university system as a function of the volume of…

  11. Letting the daylight in: Reviewing the reviewers and other ways to maximize transparency in science

    PubMed Central

    Wicherts, Jelte M.; Kievit, Rogier A.; Bakker, Marjan; Borsboom, Denny

    2012-01-01

    With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses. PMID:22536180

  12. NASA DOE POD NDE Capabilities Data Book

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate that an inspection system, its personnel, and its protocol demonstrate 0.90 POD with 95% confidence at critical flaw sizes (a90/95). The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
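
    The 0.90/95 criterion has a familiar binomial illustration: 29 successes in 29 trials is the smallest zero-failure sample that demonstrates 0.90 POD with 95% confidence. A quick check in R using the exact (Clopper-Pearson) lower bound (an illustration of the criterion, not the DOEPOD sequential procedure itself):

        # one-sided lower 95% confidence bound on POD after x hits in n trials
        pod_lower <- function(x, n, conf = 0.95) qbeta(1 - conf, x, n - x + 1)
        pod_lower(29, 29)   # 0.902 -- exceeds 0.90, so a90/95 is demonstrated
        pod_lower(28, 29)   # with one miss the bound drops below 0.90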

  13. Reduction of Complications of Local Anaesthesia in Dental Healthcare Setups by Application of the Six Sigma Methodology: A Statistical Quality Improvement Technique.

    PubMed

    Akifuddin, Syed; Khatoon, Farheen

    2015-12-01

    Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures and to reduce the incidence of complications by introduction of the Six Sigma methodology. The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was used to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures, using failure mode and effect analysis. Pareto analysis was used to identify the most recurrent complications. A paired z-sample test using Minitab Statistical Inference and Fisher's exact test were used to statistically analyse the obtained data; a p-value <0.05 was considered significant. In total, 54 systemic and 62 local complications occurred during the three-month analyse and measure phases. Syncope, failure of anaesthesia, trismus, auto mordeduras (self-inflicted bites) and pain at the injection site were found to be the most recurrent complications. The cumulative defective percentage was 7.99 before improvement and decreased to 4.58 in the control phase. The estimate for the difference was 0.0341228 and the 95% lower bound for the difference was 0.0193966. The p-value was found to be highly significant (p = 0.000). The application of the Six Sigma improvement methodology in healthcare tends to deliver consistently better results to patients as well as hospitals, and results in better patient compliance as well as satisfaction.
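
    The pre/post comparison of defective percentages is a two-proportion test. A sketch with prop.test in R: the 116 pre-improvement complications (54 systemic + 62 local) come from the abstract, while the denominators are hypothetical values chosen only to reproduce the reported rates of 7.99% and 4.58%:

        # defects out of injections given, before and after improvement (denominators assumed)
        prop.test(c(116, 67), c(1452, 1463))   # 116/1452 = 7.99%, 67/1463 = 4.58%
        # the confidence interval for the difference should bracket ~0.034,
        # in line with the reported estimate of 0.0341228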

  14. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isa, Nor Ashidi Mat

    The development of medical diagnostic systems has been one of the main research fields over the years. The goal of a medical diagnostic system is to provide a nosological system that could ease the diagnostic evaluation normally performed by scientists and doctors. Efficient diagnostic evaluation is essential and requires broad knowledge in order to improve on conventional diagnostic systems. Several approaches to developing medical diagnostic systems have been designed and tested since the early 1960s. Attempts at improving their performance have been made which utilize the fields of artificial intelligence, statistical analyses, mathematical models and engineering theories. With the availability of the microcomputer and software development, as well as the promising aforementioned fields, medical diagnostic prototypes could be developed. In general, a medical diagnostic system consists of several stages, namely 1) data acquisition, 2) feature extraction, 3) feature selection, and 4) classification. The data acquisition stage plays an important role in converting inputs measured from real-world physical conditions into the digital numeric values that can be manipulated by the computer system. Common medical inputs include medical microscopic images, radiographic images, magnetic resonance images (MRI), and medical signals such as the electrocardiogram (ECG) and electroencephalogram (EEG). Normally, scientists or doctors have to deal with a myriad of redundant data to be processed. In order to reduce the complexity of the diagnosis process, only the significant features of the raw data, such as the peak value of the ECG signal or the size of a lesion in a mammogram image, are extracted and considered in the subsequent stages. Mathematical models and statistical analyses are performed to select the most significant features to be classified. Statistical analyses such as principal component analysis and discriminant analysis, as well as mathematical models of clustering techniques, have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theory such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published researches, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. The Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.
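
    The feature-selection and classification stages described can be sketched in a few lines: principal component analysis to compress extracted features, followed by a linear discriminant classifier (prcomp and MASS::lda; the feature matrix is simulated, not clinical data):

        library(MASS)
        set.seed(5)
        X <- matrix(rnorm(200 * 10), 200, 10)             # 200 cells x 10 extracted features
        y <- factor(rep(c("normal", "cancerous"), 100))
        X[y == "cancerous", 1:3] <- X[y == "cancerous", 1:3] + 1   # separable signal
        pc  <- prcomp(X, scale. = TRUE)                   # compress features via PCA
        Z   <- pc$x[, 1:3]                                # keep the first 3 components
        fit <- lda(Z, grouping = y)                       # discriminant classifier
        mean(predict(fit)$class == y)                     # training accuracy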

  16. Rear-facing versus forward-facing child restraints: an updated assessment.

    PubMed

    McMurry, Timothy L; Arbogast, Kristy B; Sherwood, Christopher P; Vaca, Federico; Bull, Marilyn; Crandall, Jeff R; Kent, Richard W

    2018-02-01

    The National Highway Traffic Safety Administration and the American Academy of Pediatrics recommend children be placed in rear-facing child restraint systems (RFCRS) until at least age 2. These recommendations are based on laboratory biomechanical tests and field data analyses. Due to concerns raised by an independent researcher, we re-evaluated the field evidence in favour of RFCRS using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) database. Children aged 0 or 1 year old (0-23 months) riding in either rear-facing or forward-facing child restraint systems (FFCRS) were selected from the NASS-CDS database, and injury rates were compared by seat orientation using survey-weighted χ² tests. In order to compare with previous work, we analysed NASS-CDS years 1988-2003, and then updated the analyses to include all available data using NASS-CDS years 1988-2015. Years 1988-2015 of NASS-CDS contained 1107 children aged 0 or 1 year old meeting inclusion criteria, with 47 of these children sustaining injuries with an Injury Severity Score of at least 9. Both 0-year-old and 1-year-old children in RFCRS had lower rates of injury than children in FFCRS, but the available sample size was too small for reasonable statistical power or to allow meaningful regression controlling for covariates. Non-US field data and laboratory tests support the recommendation that children be kept in RFCRS for as long as possible, but the US NASS-CDS field data are too limited to serve as a strong statistical basis for these recommendations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
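
    Survey-weighted χ² tests of this kind can be run with the R survey package. A minimal sketch in which the design variables loosely mimic the NASS-CDS structure but every record is fabricated for illustration:

        library(survey)
        set.seed(9)
        d <- data.frame(orientation = sample(c("rear", "forward"), 400, replace = TRUE),
                        injured     = rbinom(400, 1, 0.05),
                        weight      = runif(400, 10, 500),    # national case weights
                        psu         = sample(1:20, 400, replace = TRUE))
        des <- svydesign(ids = ~psu, weights = ~weight, data = d)
        svychisq(~injured + orientation, design = des)   # design-adjusted chi-square test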

  17. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point-design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
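
    A response surface equation of the type described is simply a second-order regression polynomial fitted to designed experimental runs; a minimal sketch (the design points and response values are invented):

        set.seed(2)
        # central-composite-style design points for two variables (coded units)
        x1 <- c(-1, -1, 1, 1, 0, 0, 0, -1.4, 1.4, 0, 0)
        x2 <- c(-1, 1, -1, 1, 0, 0, 0, 0, 0, -1.4, 1.4)
        y  <- 10 + 2*x1 - 1.5*x2 + 0.8*x1*x2 - 1.2*x1^2 - 0.5*x2^2 + rnorm(11, sd = 0.2)
        fit <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2)   # response surface equation
        summary(fit)
        # the fitted polynomial then stands in for the expensive disciplinary analysis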

  18. Cyber Risk Management for Critical Infrastructure: A Risk Analysis Model and Three Case Studies.

    PubMed

    Paté-Cornell, M-Elisabeth; Kuypers, Marshall; Smith, Matthew; Keller, Philip

    2018-02-01

    Managing cyber security in an organization involves allocating the protection budget across a spectrum of possible options. This requires assessing the benefits and the costs of these options. The risk analyses presented here are statistical when relevant data are available, and system-based for high-consequence events that have not happened yet. This article presents, first, a general probabilistic risk analysis framework for cyber security in an organization to be specified. It then describes three examples of forward-looking analyses motivated by recent cyber attacks. The first one is the statistical analysis of an actual database, extended at the upper end of the loss distribution by a Bayesian analysis of possible, high-consequence attack scenarios that may happen in the future. The second is a systems analysis of cyber risks for a smart, connected electric grid, showing that there is an optimal level of connectivity. The third is an analysis of sequential decisions to upgrade the software of an existing cyber security system or to adopt a new one to stay ahead of adversaries trying to find their way in. The results are distributions of losses to cyber attacks, with and without some considered countermeasures in support of risk management decisions based both on past data and anticipated incidents. © 2017 Society for Risk Analysis.
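
    The loss distributions such a framework produces can be sketched as a compound frequency-severity simulation, here with Poisson attack counts and lognormal per-incident losses (all parameters are invented for illustration, not taken from the article):

        set.seed(11)
        years  <- 10000                        # simulated years of operation
        losses <- replicate(years, {
          n <- rpois(1, lambda = 3)                    # attacks per year
          if (n == 0) 0 else sum(rlnorm(n, 11, 1.5))   # per-incident losses (USD)
        })
        quantile(losses, c(0.5, 0.95, 0.99))   # annual loss exceedance levels
        mean(losses)                           # expected annual loss; rerun with a
                                               # countermeasure-adjusted lambda to compare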

  19. From micro to mainframe. A practical approach to perinatal data processing.

    PubMed

    Yeh, S Y; Lincoln, T

    1985-04-01

    A new, practical approach to perinatal data processing for a large obstetric population is described. This was done with a microcomputer for data entry and a mainframe computer for data reduction. The Screen Oriented Data Access (SODA) program was used to generate the data entry form and to input data into the Apple II Plus computer. Data were stored on diskettes and transmitted through a modem and telephone line to the IBM 370/168 computer. The Statistical Analysis System (SAS) program was used for statistical analyses and report generation. This approach was found to be most practical, flexible, and economical.

  20. PIXE analysis of elements in gastric cancer and adjacent mucosa

    NASA Astrophysics Data System (ADS)

    Liu, Qixin; Zhong, Ming; Zhang, Xiaofeng; Yan, Lingnuo; Xu, Yongling; Ye, Simao

    1990-04-01

    The elemental regional distributions in 20 resected human stomach tissues were obtained using PIXE analysis. The samples were pathologically divided into four types: normal, adjacent mucosa A, adjacent mucosa B and cancer. The targets for PIXE analysis were prepared by wet digestion with a pressure bomb system. P, K, Fe, Cu, Zn and Se were measured and statistically analysed. We found significantly higher concentrations of P, K, Cu and Zn, and a higher Cu:Zn ratio, in cancer tissue as compared with normal tissue, but no statistically significant difference between adjacent mucosa and cancer tissue was found.

  1. The History of the AutoChemist®: From Vision to Reality.

    PubMed

    Peterson, H E; Jungner, I

    2014-05-22

    This paper discusses the early history and development of a clinical analyser system in Sweden (AutoChemist, 1965). It highlights the importance of such a high-capacity system both for clinical use and health care screening. The device was developed to assure the quality of results and to automatically handle the orders, store the results in digital form for later statistical analyses, and distribute the results to the patients' physicians by means of the analyser's computer. The most important result of the construction of an analyser able to produce analytical results on a mass scale was the development of a mechanical multi-channel analyser for clinical laboratories that handled discrete sample technology and could prevent carry-over into the next test samples while incorporating computer technology to improve the quality of test results. The AutoChemist could handle 135 samples per hour in an 8-hour shift and up to 24 analysis channels, yielding 3,200 results per hour. Later versions would double this capacity. Some customers used the equipment 24 hours per day. With a capacity of 3,000 to 6,000 analyses per hour, pneumatically driven pipettes, special units for corrosive liquids or special activities, and an integrated computer, the AutoChemist system was unique and the largest of its kind for many years. Its successor, the AutoChemist PRISMA (PRogrammable Individually Selective Modular Analyzer), was smaller in size but had a higher capacity. Both analysers established new standards of operation for clinical laboratories and encouraged others to use new technologies in building new analysers.

  2. Space Transportation System Liftoff Debris Mitigation Process Overview

    NASA Technical Reports Server (NTRS)

    Mitchell, Michael; Riley, Christopher

    2011-01-01

    Liftoff debris is a top risk to the Space Shuttle Vehicle. To manage the liftoff debris risk, the Space Shuttle Program created a team within the Propulsion Systems Engineering & Integration Office. The Shuttle Liftoff Debris Team harnesses the Systems Engineering process to identify, assess, mitigate, and communicate the liftoff debris risk. The Liftoff Debris Team leverages the technical knowledge and expertise of engineering groups across multiple NASA centers to integrate total system solutions. These solutions connect the hardware and analyses to identify and characterize debris sources and zones contributing to the liftoff debris risk. The solutions incorporate analyses spanning: the definition and modeling of natural and induced environments; material characterizations; statistical trending analyses; imagery-based trajectory analyses; debris transport analyses; and risk assessments. The verification and validation of these analyses are bound by conservative assumptions and anchored by testing and flight data. The liftoff debris risk mitigation is managed through vigilant collaborative work between the Liftoff Debris Team and Launch Pad Operations personnel and through the management of requirements, interfaces, risk documentation, configurations, and technical data. Furthermore, on day of launch, decision analysis is used to apply the wealth of analyses to case-specific identified risks. This presentation describes how the Liftoff Debris Team applies Systems Engineering in their processes to mitigate risk and improve the safety of the Space Shuttle Vehicle.

  3. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    Recovered fragments from the report's list of tables and text: summary statistics (N=8) and results of 44 statistical analyses for impact tests performed on the forefoot of unworn footwear; summary statistics (N=8) and results for impact tests performed on the forefoot of worn footwear (Table A-2); summary statistics (N=4) and results of 76 statistical analyses for impact tests (Table B-2). The report used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was...

  4. [The application of the prospective space-time statistic in early warning of infectious disease].

    PubMed

    Yin, Fei; Li, Xiao-Song; Feng, Zi-Jian; Ma, Jia-Qi

    2007-06-01

    To investigate the application of the prospective space-time scan statistic to detecting infectious disease outbreaks at an early stage, the prospective space-time scan statistic was tested by mimicking daily prospective analyses of bacillary dysentery data from Chengdu city in 2005 (3212 cases in 102 towns and villages), and the results were compared with those of the purely temporal scan statistic. The prospective space-time scan statistic gave messages that were specific in both space and time. The results for June indicated that it could promptly detect outbreaks that started locally, with a powerful early-warning signal (P = 0.007); the purely temporal scan statistic detected the outbreak two days later, and its signal was less powerful (P = 0.039). The prospective space-time scan statistic makes full use of the spatial and temporal information in infectious disease data and can detect outbreaks that start locally in a timely and effective manner; it could be an important tool for local and national CDCs setting up early detection surveillance systems.

  5. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
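
    For reference, the standard Q and I2 statistics are computed directly from the study estimates and their variances; a minimal sketch in R with hypothetical log hazard ratios:

        y <- c(-0.22, -0.10, -0.35, 0.05, -0.18)    # hypothetical study effects (log HR)
        v <- c(0.02, 0.03, 0.05, 0.04, 0.01)        # their variances
        w <- 1 / v                                  # inverse-variance weights
        fixed <- sum(w * y) / sum(w)                # fixed effects estimate
        Q  <- sum(w * (y - fixed)^2)                # Cochran's Q
        I2 <- max(0, (Q - (length(y) - 1)) / Q)     # I2 = (Q - df)/Q, floored at zero
        c(Q = Q, I2 = I2,
          p = pchisq(Q, length(y) - 1, lower.tail = FALSE))   # test for heterogeneity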

  6. Quantum behaviour of pumped and damped triangular Bose-Hubbard systems

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2017-12-01

    We propose and analyse analogs of optical cavities for atoms using three-well Bose-Hubbard models with pumping and losses. We consider triangular configurations. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a quantitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, preserving good coherence between the wells in the steady-state. We find quadrature squeezing and mode entanglement for some parameter regimes and demonstrate that the trimer with pumping and damping at the same well is the stronger option for producing non-classical states. Due to recent experimental advances, it should be possible to demonstrate the effects we investigate and predict.

  7. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
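
    The a priori power analyses whose absence the authors lament are straightforward to run. A minimal sketch using statsmodels, assuming a two-sided two-sample t test and Cohen's d of 0.5 (the conventional 'medium' effect; all numbers are illustrative):

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Sample size per group needed to detect a medium effect (d = 0.5)
    # with alpha = .05 and power = .80.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                       alternative='two-sided')
    print(round(n_per_group))   # about 64 per group

    # Conversely, the achieved power of a study with 30 per group:
    power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                                 alternative='two-sided')
    print(round(power, 2))      # about 0.47, echoing the low median power above
    ```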

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevallier, J.J.; Quetier, F.P.; Marshall, D.W.

    Sedco Forex has developed an integrated computer system to enhance the technical performance of the company at various operational levels and to increase the understanding and knowledge of the drill crews. This paper describes the system and how it is used for recording and processing drilling data at the rig site, for associated technical analyses, and for well design, planning, and drilling performance studies at the operational centers. Some capabilities related to the statistical analysis of the company's operational records are also described, and future development of rig computing systems for drilling applications and management tasks is discussed.

  9. [Data collection in anesthesia. Experiences with the inauguration of a new information system].

    PubMed

    Zbinden, A M; Rothenbühler, H; Häberli, B

    1997-06-01

    In many institutions information systems are used to process off-line anaesthesia data for invoices, statistical purposes, and quality assurance. Information systems are also increasingly being used to improve process control in order to reduce costs. Most of today's systems were created when information technology and working processes in anaesthesia were very different from those in use today. Thus, many institutions must now replace their computer systems but are probably not aware of how complex this change will be. Modern information systems mostly use client-server architecture and relational data bases. Substituting an old system with a new one is frequently a greater task than designing a system from scratch. This article gives the conclusions drawn from the experience obtained when a large departmental computer system is redesigned in a university hospital. The new system was based on a client-server architecture and was developed by an external company without preceding conceptual analysis. Modules for patient, anaesthesia, surgical, and pain-service data were included. Data were analysed using a separate statistical package (RS/1 from Bolt Beranek), taking advantage of its powerful precompiled procedures. Development and introduction of the new system took much more time and effort than expected despite the use of modern software tools. Introduction of the new program required intensive user training despite the choice of modern graphic screen layouts. Automatic data-reading systems could not be used, as too many faults occurred and the effort for the user was too high. However, after the initial problems were solved the system turned out to be a powerful tool for quality control (both process and outcome quality), billing, and scheduling. The statistical analysis of the data resulted in meaningful and relevant conclusions. Before creating a new information system, the working processes have to be analysed and, if possible, made more efficient; a detailed programme specification must then be made. A servicing and maintenance contract should be drawn up before the order is given to a company. Time periods of equal duration have to be scheduled for defining, writing, testing and introducing the program. Modern client-server systems with relational data bases are by no means simpler to establish and maintain than previous mainframe systems with hierarchical data bases, and thus, experienced computer specialists need to be close at hand. We recommend collecting data only once for both statistics and quality control. To verify data quality, a system of random spot-sampling has to be established. Despite the large investments needed to build up such a system, we consider it a powerful tool for helping to solve the difficult daily problems of managing a surgical and anaesthesia unit.

  10. Dynamical Systems Theory in Quantitative Psychology and Cognitive Science: A Fair Discrimination between Deterministic and Statistical Counterparts is Required.

    PubMed

    Gadomski, Adam; Ausloos, Marcel; Casey, Tahlia

    2017-04-01

    This article addresses a set of observations framed in both deterministic and statistical formal guidelines, operating within the framework of nonlinear dynamical systems (NDS) theory. It is argued that statistical approaches can manifest themselves ambiguously, creating practical discrepancies, both quantitative and qualitative, in psychological and cognitive data analyses; this is sometimes termed in the literature 'questionable research practices.' This communication points to the demand for a deeper awareness of the data's initial conditions, allowing analysts to focus on pertinent evolution constraints in such systems. It also considers whether the exponential (Malthus-type) or the algebraic (Pareto-type) statistical distribution ought to be considered in practical interpretations. The role of repetitive specific behaviors by patients seeking treatment is examined within the NDS frame. The significance of these behaviors, involving a certain memory effect, seems crucial in determining a patient's progression or regression. With this perspective, it is discussed how a sensitively applied hazardous or triggering factor can be helpful for well-controlled strategic psychological treatments; those attributable to obsessive-compulsive disorders or self-injurious behaviors are recalled in particular. There are both inherent criticality-exploiting and complexity-exploiting (reduced-variance-based) relations between a therapist and a patient that can be intrinsically included in NDS theory.

  11. [ADP system for automatic production of physician's letters as well as statistics analyses in the area of obstetrics and perinatology].

    PubMed

    Heinrich, J; Putzar, H; Seidenschnur, G

    1977-01-01

    Report on an electronic data processing system for obstetrics and perinatology. The data document design and the data flowchart that were developed are shown. The continuously accumulated data allowed a detailed interpretive record at any time. For all clinically treated patients the computer printed a final obstetrical report (physician's letter). This simple system improves the flow of information and has reduced the typewriting workload of the medical staff. Cost and work expenditure for the system are modest compared with our earlier manual procedure.

  12. A dynamical study on extrasolar comets

    NASA Astrophysics Data System (ADS)

    Loibnegger, B.; Dvorak, R.

    2017-09-01

    Since the detection of absorption features varying on short time scales in spectra of beta Pictoris, it has been known that comets exist in other stellar systems. We investigate the dynamics of comets in two differently built systems (HD 10180 and HIP 14810). The outcomes of the scattering process (collisions with the planets, captures, and ejections from the systems) are analysed statistically. Collisions and close encounters with the planets are investigated in more detail in order to draw conclusions about the transport of water and organic material. We will also investigate the possibility of detecting comets in other planetary systems.

  13. Multiscale volatility duration characteristics on financial multi-continuum percolation dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Min; Wang, Jun

    A random stock price model based on the multi-continuum percolation system is developed to investigate the nonlinear dynamics of stock price volatility duration, in an attempt to explain various statistical facts found in financial data and to gain a deeper understanding of mechanisms in the financial market. The continuum percolation system, usually referred to as a random coverage process or a Boolean model, is a member of a class of statistical physics systems. In this paper, multi-continuum percolation (with different values of the radius) is employed to model and reproduce the dispersal of information among investors. To test the validity of the proposed model, nonlinear analyses of the return volatility duration series are performed by multifractal detrending moving average analysis and Zipf analysis. The empirical comparison indicates similar nonlinear behaviors for the proposed model and the actual Chinese stock market.
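
    Of the two diagnostics named above, Zipf analysis is the simpler to illustrate. A minimal sketch, with random placeholder data standing in for the model's volatility-duration series: sort the values in descending order and examine value versus rank on log-log scales.

    ```python
    import numpy as np

    # Placeholder data standing in for a volatility-duration series
    rng = np.random.default_rng(1)
    durations = rng.pareto(a=1.5, size=10_000)

    # Zipf plot: sort values in descending order and pair each with its rank
    sorted_vals = np.sort(durations)[::-1]
    ranks = np.arange(1, sorted_vals.size + 1)

    # Slope of log(value) vs log(rank) over the tail (top 1000 ranks);
    # for Pareto-tailed data with exponent a it is roughly -1/a (~ -0.67 here)
    slope = np.polyfit(np.log(ranks[:1000]), np.log(sorted_vals[:1000]), 1)[0]
    print(f"Zipf slope: {slope:.2f}")
    ```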

  14. Using conventional F-statistics to study unconventional sex-chromosome differentiation.

    PubMed

    Rodrigues, Nicolas; Dufresnes, Christophe

    2017-01-01

    Species with undifferentiated sex chromosomes emerge as key organisms for understanding the astonishing diversity of sex-determination systems. Whereas new genomic methods are widening opportunities to study these systems, the difficulty of separately characterizing their homologous X and Y chromosomes poses limitations. Here we demonstrate that two simple F-statistics calculated from sex-linked genotypes, namely the genetic distance (Fst) between sexes and the inbreeding coefficient (Fis) in the heterogametic sex, can be used as reliable proxies to compare sex-chromosome differentiation between populations. We correlated these metrics using published microsatellite data from two frog species (Hyla arborea and Rana temporaria), and show that they relate intimately to the overall amount of X-Y differentiation in populations. However, the fits for individual loci appear highly variable, suggesting that dense genetic coverage will be needed to infer fine-scale patterns of differentiation along sex chromosomes. These F-statistics, which have modest sampling requirements, significantly facilitate population analyses of sex chromosomes.
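
    A minimal sketch of the two proxies, assuming biallelic sex-linked genotypes coded as 0/1/2 copies of the reference allele. The simplified estimators below (Nei-style heterozygosity comparisons) are for illustration only and are not the exact estimators used in the paper, which works with multiallelic microsatellites.

    ```python
    import numpy as np

    def fis_heterogametic(genotypes):
        """F_is = 1 - H_obs/H_exp within one (heterogametic) sex."""
        g = np.asarray(genotypes, dtype=float)
        p = g.mean() / 2.0                     # reference-allele frequency
        h_exp = 2 * p * (1 - p)                # expected heterozygosity (HWE)
        h_obs = np.mean(g == 1)                # observed heterozygote fraction
        return 1 - h_obs / h_exp if h_exp > 0 else 0.0

    def fst_between_sexes(g_males, g_females):
        """Simple F_st between sexes: (H_t - H_s) / H_t."""
        p_m = np.mean(g_males) / 2.0
        p_f = np.mean(g_females) / 2.0
        p_t = (p_m + p_f) / 2.0
        h_s = np.mean([2 * p_m * (1 - p_m), 2 * p_f * (1 - p_f)])
        h_t = 2 * p_t * (1 - p_t)
        return (h_t - h_s) / h_t if h_t > 0 else 0.0

    # Toy data: XY males fixed heterozygous at an X-Y differentiated locus,
    # so F_is is strongly negative and F_st between sexes is elevated.
    males = [1, 1, 1, 1, 1, 1]
    females = [0, 0, 1, 0, 0, 0]
    print(fis_heterogametic(males), fst_between_sexes(males, females))
    ```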

  15. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
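
    As a toy illustration of the batch supervised setting described above, the sketch below trains a classifier to label measurement vectors as secure or attacked. The synthetic data, feature count, and model choice are placeholders, not the paper's IEEE test-system setup or its specific algorithms.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d = 2000, 20                    # measurements and measurement dimension

    # Secure measurements: noise around the true state's measurement vector
    secure = rng.normal(0.0, 1.0, size=(n, d))
    # Attacked measurements: secure readings plus a sparse injected attack vector
    attack = np.zeros(d)
    attack[:3] = 2.5
    attacked = rng.normal(0.0, 1.0, size=(n, d)) + attack

    X = np.vstack([secure, attacked])
    y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = secure, 1 = attacked

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
    ```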

  16. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  17. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
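
    The key ingredient described above is weighting each peptide's database P-value by the number of proteins it maps to before combining. The sketch below is a loose illustration using a weighted Fisher-style combination with a moment-matched chi-squared approximation; the paper's actual formula and its E-value calibration are more rigorous and are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    def combined_protein_pvalue(peptide_pvalues, proteins_mapped):
        """Combine peptide P-values for one candidate protein.

        Each peptide's evidence is down-weighted by the number of proteins
        it maps to (shared peptides count less). Illustrative only.
        """
        p = np.asarray(peptide_pvalues, dtype=float)
        w = 1.0 / np.asarray(proteins_mapped, dtype=float)  # weight per peptide
        # Weighted Fisher statistic: -2 * sum(w_i * log p_i)
        statistic = -2.0 * np.sum(w * np.log(p))
        # With unit weights this is chi^2 with 2k dof; with general weights we
        # use a rough Satterthwaite-style moment match (scale * chi^2(dof)).
        mean, var = 2.0 * w.sum(), 4.0 * np.sum(w**2)
        scale, dof = var / (2 * mean), 2 * mean**2 / var
        return stats.chi2.sf(statistic / scale, dof)

    # Three peptides: two unique to the protein, one shared by 5 proteins
    print(combined_protein_pvalue([1e-4, 3e-3, 0.2], [1, 1, 5]))
    ```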

  18. Statistical analysis of lightning electric field measured under Malaysian condition

    NASA Astrophysics Data System (ADS)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

    Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and ground (cloud-to-ground). Lightning characteristics and their statistics are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Various techniques now exist to detect lightning signals and to determine the parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each lasting about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train duration, time interval between preliminary breakdown and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize lightning signature patterns. The statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown durations of the observed lightning signals are 84 ms and 560 µs, respectively. The results show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 per flash. A preliminary breakdown signature can be identified in more than 95% of the flashes.

  19. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  20. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High-quality clinical research requires not only advanced professional knowledge but also sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten leading Chinese medical journals were selected, and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, randomized clinical trials remained in the low single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p < 0.001), from 92.7% (945/1,019) to 78.2% (1023/1,309), and interpretation (χ² = 27.26, p < 0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement in study design. Retrospective clinical studies are the most frequently used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.

  1. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  2. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
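
    A compact illustration of first-order moment matching, assuming a scalar output f(x) of statistically independent normal inputs. The function and numbers below are placeholders; the paper applies this inside a quasi 1-D Euler CFD code with efficiently computed sensitivity derivatives rather than the finite differences used here.

    ```python
    import numpy as np

    def f(x):
        """Placeholder for the CFD output as a function of input variables."""
        return x[0] ** 2 + 3.0 * x[0] * x[1] + np.sin(x[1])

    mu = np.array([1.0, 0.5])        # input means
    sigma = np.array([0.05, 0.02])   # input standard deviations

    # First-order sensitivity derivatives via central differences
    h = 1e-6
    grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h)
                     for e in np.eye(2)])

    # First-order moment matching: mean ~ f(mu), var ~ sum (df/dx_i)^2 sigma_i^2
    mean_fo = f(mu)
    var_fo = np.sum((grad * sigma) ** 2)

    # Monte Carlo check, as in the paper's validation step
    rng = np.random.default_rng(0)
    samples = rng.normal(mu, sigma, size=(20_000, 2))
    mc = np.array([f(s) for s in samples])
    print(f"first-order: mean={mean_fo:.4f}, var={var_fo:.6f}")
    print(f"Monte Carlo: mean={mc.mean():.4f}, var={mc.var():.6f}")
    ```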

  3. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
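
    The abstract does not spell out the algorithm, but a minimal nonparametric sketch in its spirit flags a gauge whose reading departs too far from the robust consensus of its spatial neighbours. The neighbour radius and MAD threshold below are illustrative choices, not NASA's.

    ```python
    import numpy as np

    def flag_outlier_gauges(coords, readings, radius=25.0, z_thresh=5.0):
        """Flag gauges whose reading deviates from nearby gauges' robust median.

        coords   : (n, 2) gauge positions (e.g., km easting/northing)
        readings : (n,) rainfall totals for one event
        Returns a boolean mask of suspect gauges.
        """
        coords = np.asarray(coords)
        readings = np.asarray(readings, dtype=float)
        suspect = np.zeros(len(readings), dtype=bool)
        for i in range(len(readings)):
            d = np.linalg.norm(coords - coords[i], axis=1)
            nbr = readings[(d > 0) & (d <= radius)]
            if nbr.size < 3:                 # too few neighbours to judge
                continue
            med = np.median(nbr)
            mad = np.median(np.abs(nbr - med)) or 1e-9
            # Robust z-score: a nonparametric analogue of standard deviations
            suspect[i] = abs(readings[i] - med) / (1.4826 * mad) > z_thresh
        return suspect

    # Toy demo: 30 gauges on a 50 km square, one corrupted reading
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 50, size=(30, 2))
    readings = rng.normal(20.0, 1.0, size=30)
    readings[7] = 80.0                       # simulated erroneous gauge
    print(np.flatnonzero(flag_outlier_gauges(coords, readings)))   # [7]
    ```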

  4. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be lower in surgical journals than in medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (as defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazards analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling the criteria of the relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  6. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
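
    As a flavour of how a chaotic map can drive an image cipher, here is a minimal sketch using the plain logistic map to generate a key stream that is XORed with the pixel bytes. The paper's delayed fractional-order system, with its time-varying delay, is deliberately not reproduced; this toy keeps only the key-stream idea, and all parameters are illustrative.

    ```python
    import numpy as np

    def logistic_keystream(n_bytes, x0=0.61, r=3.99, burn_in=1000):
        """Key stream from the logistic map x <- r*x*(1-x), quantised to bytes."""
        x = x0
        for _ in range(burn_in):          # discard transient iterations
            x = r * x * (1 - x)
        stream = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            x = r * x * (1 - x)
            stream[i] = int(x * 256) % 256
        return stream

    # Encrypt/decrypt a toy 8-bit "image" by XOR with the key stream
    image = np.arange(64, dtype=np.uint8).reshape(8, 8)
    ks = logistic_keystream(image.size).reshape(image.shape)
    cipher = image ^ ks
    assert np.array_equal(cipher ^ ks, image)   # XOR is its own inverse
    ```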

  7. A Summary of the Naval Postgraduate School Research Program.

    DTIC Science & Technology

    1985-09-30

    new model will now be used in a variety of oceanic investigations including the response of the ocean to tropical and extratropical storms (R. L...Numerical Study of Maritime Extratropical Cyclones Using FGGE Data ........................... 249 Oceanic Current System Response to Atmospheric...In addition, Professor Jayachandran has performed statistical analyses of the storm tracking methodology used by the Naval Environmental Prediction

  8. Accuracy of Course Placement Validity Statistics under Various Soft Truncation Conditions. ACT Research Report Series 99-2.

    ERIC Educational Resources Information Center

    Schiel, Jeff L.; King, Jason E.

    Analyses of data from operational course placement systems are subject to the effects of truncation; students with low placement test scores may enroll in a remedial course, rather than a standard-level course, and therefore will not have outcome data from the standard course. In "soft" truncation, some (but not all) students who score…

  9. [Analyses of segment motor function in patients with degenerative lumbar disease on the treatment of WavefleX dynamic stabilization system].

    PubMed

    Wu, Junsong; Du, Junhua; Jiang, Xiangyun; Wang, Quan; Li, Xigong; Du, Jingyu; Lin, Xiangjin

    2014-06-17

    To explore the changes of range of motion (ROM) in patients with degenerative lumbar disease treated with the WavefleX dynamic stabilization system, and to examine the postoperative regularity and trends of lumbar ROM. Nine patients with degenerative lumbar disease treated with the WavefleX dynamic stabilization system were followed up with respect to ROM at 5 timepoints within 12 months. ROM was recorded for the instrumented segments, the adjacent segments, and the total lumbar spine. Compared with the preoperative values, ROM in non-fused segments with the WavefleX system decreased statistically significantly (P < 0.05 or P < 0.01) at the different timepoints; ROM in adjacent segments increased at some levels without reaching statistical significance, the exception being L3/4 at Month 12 (P < 0.05). Versus the control group at the L3/4, L4/5 and L5/S1 levels, ROM decreased at Months 6 and 12 with statistical significance (P < 0.05 or P < 0.01). Total lumbar ROM showed a statistically significant decrease (P < 0.01) in both the non-fusion group and the hybrid non-fusion/fusion group, with trends of continuous increase observed during follow-up; statistically significant increases relative to the control group were also found at 4 timepoints (P < 0.01). Treatment of degenerative lumbar disease with the WavefleX dynamic stabilization system may limit excessive extension/flexion while preserving some motor function. Moreover, it can sustain physiological lordosis and decrease and transfer disc load in adjacent segments to prevent early degeneration of the adjacent segment. The trend of increasing total lumbar motor function needs to be confirmed in future long-term follow-ups.

  10. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  11. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  12. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
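
    Potential analysis of this kind has a compact core: under a one-dimensional Langevin approximation dx = -U'(x)dt + σdW, the stationary density satisfies p(x) ∝ exp(-2U(x)/σ²), so an empirical potential can be reconstructed from a histogram of the fluctuations and its local minima counted as system states. A minimal sketch, with all tuning choices (bin count, minimum bin occupancy) illustrative and cruder than the smoothed polynomial fits used in practice:

    ```python
    import numpy as np

    def empirical_potential(x, bins=60, min_count=20):
        """Reconstruct U(x) ~ -log p(x) (up to the 2/sigma^2 factor) from data."""
        counts, edges = np.histogram(x, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        keep = counts >= min_count            # drop noisy, sparsely sampled bins
        return centers[keep], -np.log(counts[keep] / counts.sum())

    def count_states(u):
        """Local minima of the sampled potential = candidate system states.
        Tail noise can add spurious minima; real analyses smooth first."""
        return int(np.sum((u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])))

    # Toy bimodal fluctuations standing in for detrended acoustic data
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-1, 0.3, 5000), rng.normal(1, 0.3, 5000)])
    centers, u = empirical_potential(x)
    print("detected states:", count_states(u))   # expect 2
    ```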

  13. Track analysis of laser-illuminated etched track detectors using an opto-digital imaging system

    NASA Astrophysics Data System (ADS)

    Eghan, Moses J.; Buah-Bassuah, Paul K.; Oppon, Osborne C.

    2007-11-01

    An opto-digital imaging system for counting and analysing tracks on an LR-115 detector is described. One batch of LR-115 track detectors was irradiated with Am-241 at a set period and distance for a linearity test, and another batch was exposed to radon gas. The laser-illuminated etched track detector area was imaged, digitized and analysed by the system. The tracks counted on the opto-digital system with the aid of Media Cybernetics software, as well as with a spark gap counter, showed comparable track density results, ranging between 1500 and 2750 tracks cm-2 and 65 tracks cm-2 in the two different batch detector samples, with 0.5% and 1% track counts, respectively. Track sizes of the incident alpha particles from the radon gas on the LR-115 detector, reflecting different track energies, are represented statistically and graphically. The opto-digital imaging system counts and measures other track parameters at an average processing time of 3-5 s.

  14. A comparative study of two statistical approaches for the analysis of real seismicity sequences and synthetic seismicity generated by a stick-slip experimental model

    NASA Astrophysics Data System (ADS)

    Flores-Marquez, Leticia Elsa; Ramirez Rojaz, Alejandro; Telesca, Luciano

    2015-04-01

    Two statistical approaches are applied to two different types of data sets: the seismicity generated by the subduction processes that occurred at the south Pacific coast of Mexico between 2005 and 2012, and the synthetic seismic data generated by a stick-slip experimental model. The statistical methods used in the present study are the visibility graph, to investigate the time dynamics of the series, and the scaled probability density function in the natural time domain, to investigate the critical order of the system. This comparison has the purpose of showing the similarities between the dynamical behaviors of both types of data sets from the point of view of critical systems. The observed behaviors allow us to conclude that the experimental setup globally reproduces the behavior observed in the statistical approaches used to analyse the seismicity of the subduction zone. The present study was supported by the Bilateral Project Italy-Mexico 'Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences', jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
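
    The first of the two approaches, the natural visibility graph, maps a time series to a network: samples i and j are linked if the straight line between (i, x_i) and (j, x_j) passes above every intermediate sample. A minimal O(n²) sketch with random placeholder data:

    ```python
    import numpy as np

    def visibility_edges(x):
        """Natural visibility graph: edge (i, j) iff every k between them
        satisfies x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        edges = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                ks = np.arange(i + 1, j)
                line = x[j] + (x[i] - x[j]) * (j - ks) / (j - i)
                if np.all(x[ks] < line):    # trivially true for adjacent nodes
                    edges.append((i, j))
        return edges

    # The degree distribution of the graph is the usual statistic of interest
    x = np.random.default_rng(0).random(200)
    edges = visibility_edges(x)
    deg = np.bincount(np.array(edges).ravel(), minlength=len(x))
    print(len(edges), deg.max())
    ```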

  15. Mars: Noachian hydrology by its statistics and topology

    NASA Technical Reports Server (NTRS)

    Cabrol, N. A.; Grin, E. A.

    1993-01-01

    Discrimination between fluvial features generated by surface drainage and subsurface aquifer discharges will provide clues to the understanding of early Mars' climatic history. Our approach is to define the process of formation of the oldest fluvial valleys by statistical and topological analyses. Formation of fluvial valley systems reached its highest statistical concentration during the Noachian Period. Nevertheless, they are a scarce phenomenom in Martian history, localized on the craterized upland, and subject to latitudinal distribution. They occur sparsely on Noachian geological units with a weak distribution density, and appear in reduced isolated surface (around 5 x 10(exp 3)(sq km)), filled by short streams (100-300 km length). Topological analysis of the internal organization of 71 surveyed Noachian fluvial valley networks also provides information on the mechanisms of formation.

  16. A wind proxy based on migrating dunes at the Baltic coast: statistical analysis of the link between wind conditions and sand movement

    NASA Astrophysics Data System (ADS)

    Bierstedt, Svenja E.; Hünicke, Birgit; Zorita, Eduardo; Ludwig, Juliane

    2017-07-01

    We statistically analyse the relationship between the structure of migrating dunes in the southern Baltic and the driving wind conditions over the past 26 years, with the long-term aim of using migrating dunes as a proxy for past wind conditions at an interannual resolution. The present analysis is based on the dune record derived from geo-radar measurements by Ludwig et al. (2017). The dune system is located at the Baltic Sea coast of Poland and is migrating from west to east along the coast. The dunes present layers with different thicknesses that can be assigned to absolute dates at interannual timescales and put in relation to seasonal wind conditions. To statistically analyse this record and calibrate it as a wind proxy, we used a gridded regional meteorological reanalysis data set (coastDat2) covering recent decades. The identified link between the dune annual layers and wind conditions was additionally supported by the co-variability between dune layers and observed sea level variations in the southern Baltic Sea. We include precipitation and temperature into our analysis, in addition to wind, to learn more about the dependency between these three atmospheric factors and their common influence on the dune system. We set up a statistical linear model based on the correlation between the frequency of days with specific wind conditions in a given season and dune migration velocities derived for that season. To some extent, the dune records can be seen as analogous to tree-ring width records, and hence we use a proxy validation method usually applied in dendrochronology, cross-validation with the leave-one-out method, when the observational record is short. The revealed correlations between the wind record from the reanalysis and the wind record derived from the dune structure are in the range between 0.28 and 0.63, yielding similar statistical validation skill as dendroclimatological records.
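
    The leave-one-out validation mentioned above is simple to express: with only ~26 annual layers, each season is predicted from a linear model fitted to all other seasons, and the skill of these out-of-sample predictions is summarised. A minimal sketch with hypothetical arrays (the predictor and response below are placeholders for the wind-day frequencies and dune migration velocities of the study):

    ```python
    import numpy as np

    def loo_cross_validation(X, y):
        """Leave-one-out CV of a linear model; returns out-of-sample predictions."""
        X = np.column_stack([np.ones(len(y)), X])   # add intercept column
        preds = np.empty_like(y, dtype=float)
        for i in range(len(y)):
            mask = np.arange(len(y)) != i           # drop observation i
            beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
            preds[i] = X[i] @ beta
        return preds

    # Hypothetical data: frequency of westerly-wind days vs dune migration rate
    rng = np.random.default_rng(0)
    wind_days = rng.uniform(20, 60, size=26)
    migration = 0.05 * wind_days + rng.normal(0, 0.3, size=26)

    preds = loo_cross_validation(wind_days, migration)
    r = np.corrcoef(preds, migration)[0, 1]
    print(f"LOO correlation: {r:.2f}")   # validation skill, cf. 0.28-0.63 above
    ```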

  17. Decomposing the gap in missed opportunities for vaccination between poor and non-poor in sub-Saharan Africa: A Multicountry Analyses.

    PubMed

    Ndwandwe, Duduzile; Uthman, Olalekan A; Adamu, Abdu A; Sambala, Evanson Z; Wiyeh, Alison B; Olukade, Tawa; Bishwajit, Ghose; Yaya, Sanni; Okwo-Bele, Jean-Marie; Wiysonge, Charles S

    2018-04-24

    Understanding the gaps in missed opportunities for vaccination (MOV) in sub-Saharan Africa (SSA) would inform interventions for improving immunisation coverage towards achieving universal childhood immunisation. We aimed to conduct a multicountry analysis to decompose the gap in MOV between poor and non-poor in SSA. We used cross-sectional data from 35 Demographic and Health Surveys in SSA conducted between 2007 and 2016. Descriptive statistics were used to understand the gap in MOV between the urban poor and non-poor, and across the selected covariates. Of the 35 countries included in this analysis, 19 showed pro-poor inequality, 5 showed pro-non-poor inequality, and the remaining 11 showed no statistically significant inequality. Among the countries with statistically significant pro-poor inequality, the risk difference ranged from 4.2% in DR Congo to 20.1% in Kenya. The factors responsible for the inequality varied across countries. In Madagascar, the largest contributors to inequality in MOV were media access, number of under-five children, and maternal education; in Liberia, by contrast, media access narrowed the inequality in MOV between poor and non-poor households. The findings indicate that in most SSA countries children belonging to poor households are most likely to experience MOV, and that socio-economic inequality in MOV is determined not only by health system functions but also by factors beyond the scope of health authorities and the care delivery system. The findings suggest the need to address the social determinants of health.

  18. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients), Presentation (Web-Server), Processing (Application-Server), Data (Database...organization in each period. This data will be collected to analyze. i) Analyses and Validation: We will do a statistics test on this data, Pareto ...notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will do a statistics test on this data, Pareto analyses and confirmation

  19. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi-simultaneously applied [1]. The used three...2.2. Vibration Diagnostics (VD)) Parallel to the NDT measurements the Statistical Energy Analysis (SEA) as a vibration diagnostic tool was...noises were analysed with a dual-channel real time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  20. Reduction of Complications of Local Anaesthesia in Dental Healthcare Setups by Application of the Six Sigma Methodology: A Statistical Quality Improvement Technique

    PubMed Central

    Khatoon, Farheen

    2015-01-01

    Background Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. Aim The purpose of this study was to identify causes of complications encountered after the administration of local anaesthesia for dental and oral surgical procedures, and to reduce the incidence of these complications by introducing the Six Sigma methodology. Materials and Methods The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was used to reduce the incidence of complications encountered after the administration of local anaesthetic injections for dental and oral surgical procedures, using failure mode and effect analysis. Pareto analysis was used to identify the most frequently recurring complications. A paired z-test using Minitab statistical inference and Fisher's exact test were used to statistically analyse the data; p < 0.05 was considered significant. Results In total, 54 systemic and 62 local complications occurred during the three months of the Analyse and Measure phases. Syncope, failure of anaesthesia, trismus, self-inflicted bite injuries (auto mordeduras) and pain at the injection site were found to be the most frequently recurring complications. The cumulative defective percentage was 7.99 for the pre-improvement data and decreased to 4.58 in the Control phase. The estimate for the difference was 0.0341228, with a 95% lower bound of 0.0193966; the p-value was highly significant (p = 0.000). Conclusion The application of the Six Sigma improvement methodology in health care tends to deliver consistently better results to patients as well as hospitals, and results in better patient compliance and satisfaction. PMID:26816989
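
    The before/after comparison above is a two-proportion problem. A minimal sketch using scipy; the abstract reports the defective percentages but not the exact denominators, so the counts below are hypothetical placeholders chosen to match 7.99% vs 4.58%.

    ```python
    from scipy import stats

    # Hypothetical counts: (defective, total) anaesthesia administrations
    before = (116, 1452)   # ~7.99% complications in the Measure phase
    after = (61, 1332)     # ~4.58% complications in the Control phase

    table = [[before[0], before[1] - before[0]],
             [after[0], after[1] - after[0]]]
    odds_ratio, p_value = stats.fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")

    # Risk difference, matching the abstract's "estimate for the difference"
    rd = before[0] / before[1] - after[0] / after[1]
    print(f"risk difference = {rd:.4f}")   # ~0.0341, cf. 0.0341228 above
    ```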

  1. Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study

    PubMed Central

    Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.

    2010-01-01

    Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597

  2. Langevin equation with time dependent linear force and periodic load force: stochastic resonance

    NASA Astrophysics Data System (ADS)

    Sau Fa, Kwok

    2017-11-01

    The motion of a particle described by the Langevin equation with constant diffusion coefficient, time-dependent linear force ω(1 + α cos(ω₁t))x and periodic load force A₀cos(Ωt) is investigated. Analytical solutions for the probability density function (PDF) and the n-moments are obtained and analysed. For ω₁ ≫ αω the influence of the periodic term α cos(ω₁t) on the PDF and n-moments is negligible at all times; this result shows that statistical averages such as the n-moments and the PDF have no access to some information about the system. For small and intermediate values of ω₁ the influence of the periodic term α cos(ω₁t) on the system is also analysed; in particular, the system may present multiresonance. The solutions are obtained in a direct and pedagogical manner readily understandable by graduate students.

  3. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
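
    The type I error inflation flagged above has a standard remedy. A minimal sketch, assuming a vector of p-values from per-dimension HRQoL comparisons, using statsmodels' multipletests with Holm adjustment (one reasonable choice; the review does not prescribe a specific method, and the p-values below are hypothetical):

    ```python
    from statsmodels.stats.multitest import multipletests

    # Hypothetical p-values: one per HRQoL sub-dimension comparison
    pvals = [0.001, 0.012, 0.034, 0.041, 0.20, 0.47]

    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
    for p, pa, r in zip(pvals, p_adj, reject):
        print(f"raw p = {p:.3f}  adjusted p = {pa:.3f}  significant: {r}")
    # Unadjusted testing would call 4 of 6 significant; Holm keeps only 1.
    ```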

  4. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses and attempts to recreate the three most recent statistical analyses, by Ertel (1994), Tapping et al. (2001), and Yeung (2006), all of which purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, each analysis made arbitrary selections or assumptions, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. The present analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. While the focus of this particular analysis was the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are unfortunately not noted often enough in review.

  5. Atmospheric Convective Organization: Self-Organized Criticality or Homeostasis?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2015-04-01

    Atmospheric convection tends to organize on a hierarchy of scales ranging from the mesoscale to planetary scales, the latter manifested especially by the Madden-Julian oscillation. The present talk examines two major possible mechanisms of self-organization identified in the wider literature from a phenomenological thermodynamic point of view, by analysing a planetary-scale cloud-resolving model simulation. The first mechanism is self-organized criticality: a saturation tendency of the precipitation rate with increasing column-integrated water, reminiscent of critical phenomena, indicates self-organized criticality. The second is a self-regulation mechanism known in biology as homeostasis. A thermodynamic argument suggests that such self-regulation maintains the column-integrated water below a threshold by increasing the precipitation rate. Previous analyses of both observational data and cloud-resolving model (CRM) experiments give mixed results: a satellite data analysis suggests self-organized criticality, some observational data as well as CRM experiments support homeostasis, and other analyses point to a combination of these two interpretations. In this study, a CRM experiment over a planetary-scale domain with a constant sea-surface temperature is analyzed. The analysis shows that the relation between column-integrated total water and precipitation suggests self-organized criticality, whereas the relation between column-integrated water vapor and precipitation suggests homeostasis. The concurrent presence of these two mechanisms is further elaborated by detailed statistical and budget analyses. These statistics are scale invariant, reflecting a spatial scaling of precipitation processes. These self-organization mechanisms are most likely best understood theoretically through the energy cycle of the convective systems, consisting of the kinetic energy and the cloud-work function. The author has already investigated the behavior of this cycle system in a zero-dimensional configuration; preliminary simulations of the cycle system over a two-dimensional domain will be presented.

  6. Anatomical variations of hepatic arterial system, coeliac trunk and renal arteries: an analysis with multidetector CT angiography.

    PubMed

    Ugurel, M S; Battal, B; Bozlar, U; Nural, M S; Tasar, M; Ors, F; Saglam, M; Karademir, I

    2010-08-01

    The purpose of our investigation was to determine the anatomical variations in the coeliac trunk-hepatic arterial system and the renal arteries in patients who underwent multidetector CT (MDCT) angiography of the abdominal aorta for various reasons. A total of 100 patients were analysed retrospectively. The coeliac trunk, hepatic arterial system and renal arteries were analysed individually and anatomical variations were recorded. Statistical analysis of the relationship between hepatocoeliac variations and renal artery variations was performed using a χ² test. There was a coeliac trunk trifurcation in 89% and bifurcation in 8% of the cases. The coeliac trunk was absent in 1%, a hepatosplenomesenteric trunk was seen in 1% and a splenomesenteric trunk was present in 1%. Hepatic artery variation was present in 48% of patients. Coeliac trunk and/or hepatic arterial variation was present in 23 (39.7%) of the 58 patients with normal renal arteries, and in 27 (64.3%) of the 42 patients with accessory renal arteries. There was a statistically significant correlation between renal artery variations and coeliac trunk-hepatic arterial system variations (p = 0.015). MDCT angiography permits a correct and detailed evaluation of hepatic and renal vascular anatomy. The prevalence of variations in the coeliac trunk and/or hepatic arteries is increased in people with accessory renal arteries. For that reason, when undertaking angiographic examinations directed towards any single organ, the possibility of variations in the vascular structure of other organs should be kept in mind.

  7. A proposal for the measurement of graphical statistics effectiveness: Does it enhance or interfere with statistical reasoning?

    NASA Astrophysics Data System (ADS)

    Agus, M.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.

    2015-02-01

    Numerous studies have examined students' difficulties in understanding notions related to statistical problems. Some authors have observed that presenting distinct visual representations can improve statistical reasoning, supporting the principle of graphical facilitation. Other researchers disagree with this viewpoint, emphasising the drawbacks of illustrations that may overload the cognitive system with irrelevant information. In this work we aim at comparing probabilistic statistical reasoning across two formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems, in verbal-numerical and graphical formats, to 311 undergraduate psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation on probabilistic statistical reasoning. Each student solved every pair of problems in both formats, with the order and sequence of problem presentation varied. Data analyses highlighted that the effect of graphical facilitation is infrequent among psychology undergraduates. This effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of an interaction between individual and task characteristics.

  8. Biosurveillance applying scan statistics with multiple, disparate data sources.

    PubMed

    Burkom, Howard S

    2003-06-01

    Researchers working on the Department of Defense Global Emerging Infections System (DoD-GEIS) pilot system, the Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE), have applied scan statistics for early outbreak detection using both traditional and nontraditional data sources. These sources include medical data indexed by International Classification of Disease, 9th Revision (ICD-9) diagnosis codes, as well as less-specific, but potentially timelier, indicators such as records of over-the-counter remedy sales and of school absenteeism. Early efforts employed the Kulldorff scan statistic as implemented in the SaTScan software of the National Cancer Institute. A key obstacle to this application is that the input data streams are typically based on time-varying factors, such as consumer behavior, rather than simply on the populations of the component subregions. We have used both modeling and recent historical data distributions to obtain background spatial distributions. Data analyses have provided guidance on how to condition and model input data to avoid excessive clustering. We have used this methodology in combining data sources for both retrospective studies of known outbreaks and surveillance of high-profile events of concern to local public health authorities. We have integrated the scan statistic capability into a Microsoft Access-based system in which we may include or exclude data sources, vary time windows separately for different data sources, censor data from subsets of individual providers or subregions, adjust the background computation method, and run retrospective or simulated studies.
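
    To make the method concrete, the sketch below implements a minimal temporal Poisson scan statistic in the spirit of Kulldorff's method. It is a simplified, purely temporal stand-in for the SaTScan implementation used in ESSENCE, with a hypothetical constant series standing in for the modelled background; in practice, significance is assessed by Monte Carlo replication of the maximum statistic:

      import numpy as np

      def poisson_scan_llr(counts, baseline, max_window=7):
          """Best (start, length, LLR) over all short windows of a daily series."""
          C, E = counts.sum(), baseline.sum()
          best = (0, 0, 0.0)
          for length in range(1, max_window + 1):
              for start in range(len(counts) - length + 1):
                  c = counts[start:start + length].sum()
                  e = baseline[start:start + length].sum() * C / E  # indirect standardisation
                  if c > e > 0 and C > c:
                      llr = c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))
                      if llr > best[2]:
                          best = (start, length, llr)
          return best

      rng = np.random.default_rng(1)
      baseline = np.full(60, 20.0)                # modelled background rate
      counts = rng.poisson(baseline)
      counts[50:53] += 15                         # injected outbreak
      print(poisson_scan_llr(counts, baseline))   # should flag days 50-52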

  9. Frustration in the pattern formation of polysyllabic words

    NASA Astrophysics Data System (ADS)

    Hayata, Kazuya

    2016-12-01

    A novel frustrated system is given for the analysis of (m + 1)-syllabled vocal sounds for languages with the m-vowel system, where the varieties of vowels are assumed to be m (m > 2). The necessary and sufficient condition for observing the sound frustration is that the configuration of m vowels in an m-syllabled word has a preference for the ‘repulsive’ type, in which there is no duplication of an identical vowel. For languages that meet this requirement, no (m + 1)-syllabled word can in principle select the present type, because at most m different vowels are available and consequently the duplicated use of an identical vowel is inevitable. For languages showing a preference for the ‘attractive’ type, where an identical vowel aggregates in a word, no such conflict arises. In this paper, we first elucidate for Arabic, with m = 3, how the conflicting situation is dealt with, employing a statistical approach based on chi-square testing. In addition to the conventional three-vowel system, analyses are also made for Russian, where a polysyllabic word contains both a stressed and an indeterminate vowel. Through the statistical analyses the selection scheme for quadrisyllabic configurations is found to be strongly dependent on the parts of speech as well as the gender of nouns. In order to emphasize the relevance to the sound model of binary oppositions, analyzed results for Greek verbs are also given.

  10. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In generating the key stream, the system parameters and the derivative order are embedded in the algorithm to enhance security. The algorithm is evaluated through security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
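
    The paper's key stream comes from a fractional-order hyperchaotic Lorenz system, which takes some numerical machinery to integrate; purely to illustrate the generic chaotic-keystream XOR construction, the sketch below substitutes a one-dimensional logistic map. The key values r and x0 are hypothetical stand-ins, not the paper's parameters:

      import numpy as np

      def chaotic_keystream(n, x0=0.3141, r=3.9999, burn_in=1000):
          x, out = x0, np.empty(n, dtype=np.uint8)
          for _ in range(burn_in):             # discard the transient
              x = r * x * (1 - x)
          for i in range(n):
              x = r * x * (1 - x)
              out[i] = int(x * 256) % 256      # quantise the orbit to one byte
          return out

      image = np.arange(16, dtype=np.uint8)    # stand-in for flattened pixel data
      cipher = image ^ chaotic_keystream(image.size)
      plain = cipher ^ chaotic_keystream(image.size)   # XOR is its own inverse
      assert np.array_equal(plain, image)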

  11. Automatising the analysis of stochastic biochemical time-series

    PubMed Central

    2015-01-01

    Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
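
    A sketch of the kind of ensemble statistics such a tool automates, assuming only NumPy and using Poisson draws as a stand-in for stochastic simulation trajectories on a common time grid (the tool's actual API is not described in the abstract):

      import numpy as np

      rng = np.random.default_rng(0)
      n_runs, n_times = 200, 50
      ensemble = rng.poisson(lam=10.0, size=(n_runs, n_times))  # stand-in trajectories

      mean_t = ensemble.mean(axis=0)           # ensemble mean at each time point
      var_t = ensemble.var(axis=0, ddof=1)     # ensemble variance at each time point
      t_idx = 25
      values, counts = np.unique(ensemble[:, t_idx], return_counts=True)
      p_hat = counts / n_runs                  # empirical marginal distribution at t,
                                               # a histogram estimate of the
                                               # master-equation solution
      print(mean_t[t_idx], var_t[t_idx], p_hat.sum())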

  12. Army Logistician. Volume 39, Issue 1, January-February 2007

    DTIC Science & Technology

    2007-02-01

    of electronic systems using statistical methods. P&C, however, requires advanced prognostic capabilities not only to detect the early onset of... patterns. Entities operating in a P&C-enabled environment will sense and understand contextual meaning, communicate their state and mission, and act to... accessing of historical and simulation patterns; on-board prognostics capabilities; physics-of-failure analyses; and predictive modeling. P&C also

  13. A Fatigue Management System for Sustained Military Operations

    DTIC Science & Technology

    2008-03-31

    Bradford, Brenda Jones, Margaret Campbell, Heather McCrory, Amy Campbell, Linda Mendez, Juan Cardenas, Beckie Moise, Samuel Cardenas, Fernando... always under the direct observation of research personnel or knowingly monitored from a central control station by closed-circuit television, excluding... blocks 1, 5, 9, and 13). Statistical Analyses: To determine the appropriate sample size for this study, a power analysis was based on the post

  14. Statistical average estimates of high latitude field-aligned currents from the STARE and SABRE coherent VHF radar systems

    NASA Astrophysics Data System (ADS)

    Kosch, M. J.; Nielsen, E.

    Two bistatic VHF radar systems, STARE and SABRE, have been employed to estimate ionospheric electric fields in the geomagnetic latitude range 61.1 - 69.3° (geographic latitude range 63.8 - 72.6°) over northern Scandinavia. 173 days of good backscatter from all four radars have been analysed during the period 1982 to 1986, from which the average ionospheric divergence electric field versus latitude and time is calculated. The average magnetic field-aligned currents are computed using an AE-dependent empirical model of the ionospheric conductance. Statistical Birkeland current estimates are presented for high and low values of the Kp and AE indices as well as positive and negative orientations of the IMF Bz component. The results compare very favourably to other ground-based and satellite measurements.

  15. A toolbox for determining subdiffusive mechanisms

    NASA Astrophysics Data System (ADS)

    Meroz, Yasmine; Sokolov, Igor M.

    2015-04-01

    Subdiffusive processes have become a field of great interest in the last decades, due to mounting experimental evidence of subdiffusive behavior in complex systems, and especially in biological systems. Different physical scenarios leading to subdiffusion differ in the details of the dynamics. These differences are what allow one to reconstruct the underlying physics theoretically from the results of observations, and they are the topic of this review. We review the main statistical analyses available today to distinguish between these scenarios, categorizing them according to the relevant characteristics. We collect the available tools and statistical tests, presenting them within a broader perspective. We also consider possible complications such as the subordination of subdiffusive mechanisms. Due to the advances in single-particle tracking experiments in recent years, we focus on the relevant case where the available experimental data are scant, at the level of single trajectories.
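
    One of the basic tools reviewed is the time-averaged mean squared displacement (TA-MSD) of a single trajectory, whose scaling TA-MSD ~ lag**alpha with alpha < 1 flags subdiffusion. A minimal sketch, assuming NumPy and estimating alpha by a log-log fit:

      import numpy as np

      def ta_msd(x, max_lag):
          """Time-averaged MSD of a 1-D trajectory x for lags 1..max_lag."""
          return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                           for lag in range(1, max_lag + 1)])

      rng = np.random.default_rng(2)
      x = np.cumsum(rng.standard_normal(10_000))   # ordinary Brownian walk
      lags = np.arange(1, 101)
      alpha, _ = np.polyfit(np.log(lags), np.log(ta_msd(x, 100)), 1)
      print(f"estimated alpha = {alpha:.2f}")      # ~1 here; alpha < 1 suggests subdiffusion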

  16. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.

  17. Estimability and simple dynamical analyses of range (range-rate range-difference) observations to artificial satellites. [laser range observations to LAGEOS using non-Bayesian statistics

    NASA Technical Reports Server (NTRS)

    Vangelder, B. H. W.

    1978-01-01

    Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and the earth's rotation, as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass, and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered best (or solely) determinable by SLR, in contrast to VLBI and LLR.

  18. A scoring system for ascertainment of incident stroke; the Risk Index Score (RISc).

    PubMed

    Kass-Hout, T A; Moyé, L A; Smith, M A; Morgenstern, L B

    2006-01-01

    The main objective of this study was to develop and validate a computer-based statistical algorithm that could be translated into a simple scoring system in order to ascertain incident stroke cases using hospital admission medical records data. The Risk Index Score (RISc) algorithm was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project, 2000. The validity of RISc was evaluated by estimating the concordance of scoring system stroke ascertainment to stroke ascertainment by physician and/or abstractor review of hospital admission records. RISc was developed on 1718 randomly selected patients (training set) and then statistically validated on an independent sample of 858 patients (validation set). A multivariable logistic model was used to develop RISc and subsequently evaluated by goodness-of-fit and receiver operating characteristic (ROC) analyses. The higher the value of RISc, the higher the patient's risk of potential stroke. The study showed RISc was well calibrated and discriminated those who had potential stroke from those who did not on initial screening. In this study we developed and validated a rapid, easy, efficient, and accurate method to ascertain incident stroke cases from routine hospital admission records for epidemiologic investigations. Validation of this scoring system was achieved statistically; however, clinical validation in a community hospital setting is warranted.
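
    As an illustration of the general recipe (the abstract does not list the actual RISc variables), a multivariable logistic model can be reduced to an additive integer score by scaling and rounding its coefficients. A sketch on synthetic binary predictors, assuming scikit-learn; all variable names and coefficients are hypothetical:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      X = rng.integers(0, 2, size=(1718, 4))              # four binary chart findings
      logit = -2.0 + X @ np.array([1.2, 0.8, 0.5, 1.5])   # "true" model for simulation
      y = rng.random(1718) < 1 / (1 + np.exp(-logit))

      model = LogisticRegression().fit(X, y)
      points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
      print("points per finding:", points)                # e.g. [2 2 1 3]
      risc_scores = X @ points                            # higher total -> higher risk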

  19. Point-by-point compositional analysis for atom probe tomography.

    PubMed

    Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P

    2014-01-01

    This new approach to data processing is needed for analyses that traditionally employed grid-based counting methods, because it removes a user-imposed coordinate system that not only limits an analysis but may also introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest-neighbour identification, improving the measurements and the statistics obtained and allowing quantitative analysis of smaller datasets and of datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: (i) using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; (ii) 3D data visualisation of block composition and nearest-neighbour anisotropy; and (iii) using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
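
    A minimal sketch of the coordinate-independent block idea on synthetic data: take each detected atom's k nearest neighbours as a block, compute the block compositions, and compare their histogram with the binomial expectation. The z-statistic below is a rough screen, not the paper's exact formulation (neighbouring blocks share atoms, so bins are not independent); SciPy is assumed:

      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.stats import binom

      rng = np.random.default_rng(4)
      pos = rng.random((20_000, 3)) * 50           # synthetic atom coordinates (nm)
      is_solute = rng.random(20_000) < 0.1         # 10 at.% random solid solution
      k = 50

      tree = cKDTree(pos)
      _, idx = tree.query(pos, k=k + 1)            # each atom plus its k nearest neighbours
      solute_counts = is_solute[idx[:, 1:]].sum(axis=1)   # per-block solute counts

      observed = np.bincount(solute_counts, minlength=k + 1) / len(pos)
      expected = binom.pmf(np.arange(k + 1), k, is_solute.mean())
      mask = expected > 1e-4                       # compare only well-populated bins
      z = (observed[mask] - expected[mask]) / np.sqrt(
          expected[mask] * (1 - expected[mask]) / len(pos))
      print(np.abs(z).max())                       # large |z| would suggest clustering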

  20. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: The error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p < 0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p < 0.001), 50.9% (680/1,335) compared to 42.4% (669/1,578). In 2008, randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p < 0.001), 92.7% (945/1,019) compared to 78.2% (1023/1,309), and interpretation (χ² = 27.26, p < 0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), although some serious defects persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824

  1. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study. Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  2. Analysing malaria incidence at the small area level for developing a spatial decision support system: A case study in Kalaburagi, Karnataka, India.

    PubMed

    Shekhar, S; Yoo, E-H; Ahmed, S A; Haining, R; Kadannolly, S

    2017-02-01

    Spatial decision support systems have already proved their value in helping to reduce infectious diseases but to be effective they need to be designed to reflect local circumstances and local data availability. We report the first stage of a project to develop a spatial decision support system for infectious diseases for Karnataka State in India. The focus of this paper is on malaria incidence and we draw on small area data on new cases of malaria analysed in two-monthly time intervals over the period February 2012 to January 2016 for Kalaburagi taluk, a small area in Karnataka. We report the results of data mapping and cluster detection (identifying areas of excess risk) including evaluating the temporal persistence of excess risk and the local conditions with which high counts are statistically associated. We comment on how this work might feed into a practical spatial decision support system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  4. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  5. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  6. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...

  7. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  8. Rainfall effects on inflow and infiltration in wastewater treatment systems in a coastal plain region.

    PubMed

    Cahoon, Lawrence B; Hanke, Marc H

    2017-04-01

    Aging wastewater collection and treatment systems have not received as much attention as other forms of infrastructure, even though they are vital to public health, economic growth, and environmental quality. Inflow and infiltration (I&I) are among potentially widespread problems facing central sewage collection and treatment systems, posing risks of sanitary system overflows (SSOs), system degradation, and water quality impairment, but remain poorly quantified. Whole-system analyses of I&I were conducted by regression analyses of system flow responses to rainfall and temperature for 93 wastewater treatment plants in 23 counties in eastern North Carolina, USA, a coastal plain region with high water tables and generally higher rainfalls than the continental interior. Statistically significant flow responses to rainfall were found in 92% of these systems, with 2-year average I&I values exceeding 10% of rainless system flow in over 40% of them. The effects of rainfall, which can be intense in this coastal region, have region-wide implications for sewer system performance and environmental management. The positive association between rainfall and excessive I&I parallels the effects of storm water runoff on water quality, in that excessive I&I can also drive SSOs, thus confounding water quality protection efforts.
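
    A hedged sketch of the whole-system regression idea: model daily plant flow on rainfall and temperature and read the rainfall coefficient as the I&I response. The data below are synthetic, not the North Carolina series; statsmodels is assumed:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 730                                     # two years of daily records
      rain = rng.gamma(0.3, 10.0, n)              # daily rainfall (mm)
      temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365)
      flow = 5.0 + 0.08 * rain + 0.02 * temp + rng.normal(0, 0.3, n)  # plant flow

      X = sm.add_constant(np.column_stack([rain, temp]))
      fit = sm.OLS(flow, X).fit()
      print(fit.params, fit.pvalues)              # a significant rain term signals I&I
      ii_share = fit.params[1] * rain.mean() / flow.mean()
      print(f"I&I ~ {ii_share:.1%} of average flow")  # cf. the 10% threshold above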

  9. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    PubMed

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content-based design outperforms the traditional VLE-based design.

  10. Time Series Expression Analyses Using RNA-seq: A Statistical Approach

    PubMed Central

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
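
    For a single gene, the AR(1) time-lagged regression mentioned above amounts to regressing expression at time t on expression at time t-1. A minimal sketch on simulated log-scale values, assuming statsmodels (an illustration of the idea, not the authors' implementation):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      T, phi = 12, 0.8
      y = np.empty(T)
      y[0] = 5.0
      for t in range(1, T):                    # simulate an AR(1) time course
          y[t] = 1.0 + phi * y[t - 1] + rng.normal(0, 0.2)

      X = sm.add_constant(y[:-1])              # predictor: expression at t-1
      fit = sm.OLS(y[1:], X).fit()
      print(f"lag-1 coefficient = {fit.params[1]:.2f} (p = {fit.pvalues[1]:.3f})")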

  11. Statistical Analysis of a Round-Robin Measurement Survey of Two Candidate Materials for a Seebeck Coefficient Standard Reference Material

    PubMed Central

    Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.

    2009-01-01

    In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212

  13. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  14. The Impact of Global Budgets on Pharmaceutical Spending and Utilization

    PubMed Central

    Fendrick, A. Mark; Song, Zirui; Landon, Bruce E.; Safran, Dana Gelb; Mechanic, Robert E.; Chernew, Michael E.

    2014-01-01

    In 2009, Blue Cross Blue Shield of Massachusetts implemented a global budget-based payment system, the Alternative Quality Contract (AQC), in which provider groups assumed accountability for spending. We investigate the impact of global budgets on the utilization of prescription drugs and related expenditures. Our analyses indicate no statistically significant evidence that the AQC reduced the use of drugs. Although the impact may change over time, early evidence suggests that it is premature to conclude that global budget systems may reduce access to medications. PMID:25500751

  15. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the rate of renal function decline per year with age in adults varies between two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected over up to 2364 days (mean: 793 days). A simple linear regression and a random coefficient model were selected for the CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the rate of renal function decline with aging, because its estimation is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
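
    The contrast between the two analyses can be reproduced in miniature: a cross-sectional OLS slope using one record per subject versus a longitudinal random coefficient (mixed-effects) model using all records. The data are simulated with a true decline of 1.0 ml/min/year; statsmodels is assumed:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      rows = []
      for sid in range(300):
          age0 = rng.uniform(30, 80)
          crcl0 = 130 - 1.0 * age0 + rng.normal(0, 15)   # between-subject spread
          for visit in range(4):                         # repeated measurements
              age = age0 + 1.5 * visit
              rows.append((sid, age, crcl0 - 1.0 * 1.5 * visit + rng.normal(0, 5)))
      df = pd.DataFrame(rows, columns=["sid", "age", "crcl"])

      cs = smf.ols("crcl ~ age", data=df.groupby("sid").first()).fit()
      lt = smf.mixedlm("crcl ~ age", df, groups=df["sid"], re_formula="~age").fit()
      print(f"CS slope: {cs.params['age']:.2f}, LT slope: {lt.params['age']:.2f}")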

  16. Which is the optimal risk stratification system for surgically treated localized primary GIST? Comparison of three contemporary prognostic criteria in 171 tumors and a proposal for a modified Armed Forces Institute of Pathology risk criteria.

    PubMed

    Goh, Brian K P; Chow, Pierce K H; Yap, Wai-Ming; Kesavan, Sittampalam M; Song, In-Chin; Paul, Pradeep G; Ooi, Boon-Swee; Chung, Yaw-Fui A; Wong, Wai-Keong

    2008-08-01

    This study aims to validate and compare the performance of the National Institutes of Health (NIH) criteria, the Huang modified NIH criteria, and the Armed Forces Institute of Pathology (AFIP) risk criteria for gastrointestinal stromal tumors (GISTs) in a large series of localized primary GISTs surgically treated at a single institution, to determine the ideal risk stratification system for GIST. The clinicopathological features of 171 consecutive patients who underwent surgical resection for GISTs were retrospectively reviewed. Statistical analyses were performed to compare the prognostic value of the three risk criteria by analyzing the discriminatory ability, linear trend, homogeneity, monotonicity of gradients, and Akaike information criterion. The median actuarial recurrence-free survival (RFS) for all 171 patients was 70%. On multivariate analyses, size >10 cm, mitotic count >5/50 high-power fields, tumor necrosis, and serosal involvement were independent prognostic factors of RFS. All three risk criteria demonstrated a statistically significant difference in the recurrence rate, median actuarial RFS, actuarial 5-year RFS, and tumor-specific death across the different stages. Comparison of the various risk-stratification systems demonstrated that our proposed modified AFIP criteria had the best independent predictive value of RFS when compared with the other systems. The NIH, modified NIH, and AFIP criteria are useful in the prognostication of GIST, and the AFIP risk criteria provided the best prognostication among the three systems for primary localized GIST. However, remarkable prognostic heterogeneity exists in the AFIP high-risk category, and with our proposed modification, this system provides the most accurate prognostic information.

  17. Ichthyoplankton abundance and variance in a large river system: concerns for long-term monitoring

    USGS Publications Warehouse

    Holland-Bartels, Leslie E.; Dewey, Michael R.; Zigler, Steven J.

    1995-01-01

    System-wide spatial patterns of ichthyoplankton abundance and variability were assessed in the upper Mississippi and lower Illinois rivers to address the experimental design and statistical confidence in density estimates. Ichthyoplankton was sampled from June to August 1989 in primary milieus (vegetated and non-vegetated backwaters and impounded areas, main channels, and main channel borders) in three navigation pools (8, 13 and 26) of the upper Mississippi River and in a downstream reach of the Illinois River. Ichthyoplankton densities varied among stations of similar aquatic landscapes (milieus) more than among subsamples within a station. An analysis of sampling effort indicated that the collection of single samples at many stations in a given milieu type is statistically and economically preferable to the collection of multiple subsamples at fewer stations. Cluster analyses also revealed that stations only generally grouped by their preassigned milieu types. Pilot studies such as this can define station groupings and sources of variation beyond an a priori habitat classification. Thus the minimum intensity of sampling required to achieve a desired statistical confidence can be identified before implementing monitoring efforts.
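
    The sampling-effort argument can be made concrete with a one-way random-effects layout: when among-station variance dominates within-station (subsample) variance, spreading a fixed number of samples across many stations gives a more precise overall mean than subsampling a few stations. A sketch with invented variance components, assuming NumPy:

      import numpy as np

      rng = np.random.default_rng(8)
      n_stations, n_subs = 20, 5
      station_means = rng.normal(50, 15, n_stations)     # among-station spread
      data = station_means[:, None] + rng.normal(0, 5, (n_stations, n_subs))

      s2_within = data.var(axis=1, ddof=1).mean()        # within-station variance
      s2_among = data.mean(axis=1).var(ddof=1) - s2_within / n_subs

      def var_of_mean(S, m):
          """Variance of the grand mean with S stations and m subsamples each."""
          return s2_among / S + s2_within / (S * m)

      # Same total of 40 samples, allocated differently:
      print(var_of_mean(40, 1), var_of_mean(8, 5))       # many stations wins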

  18. Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology⋆

    PubMed Central

    Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao

    2009-01-01

    Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation). PMID:20233650
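
    As a small worked example of the power and sample-size calculations discussed above, the per-group n needed to detect a standardised effect size of 0.5 at a 5% type I error rate with 80% power in a two-group comparison can be computed with statsmodels (the numbers are illustrative, not from the article):

      from statsmodels.stats.power import TTestIndPower

      # Two-sided, two-sample t-test: d = 0.5, alpha = 0.05, power = 0.80
      n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
      print(round(n_per_group))   # ~64 subjects per group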

  19. Organizational downsizing and age discrimination litigation: the influence of personnel practices and statistical evidence on litigation outcomes.

    PubMed

    Wingate, Peter H; Thornton, George C; McIntyre, Kelly S; Frame, Jennifer H

    2003-02-01

    The present study examined relationships between reduction-in-force (RIF) personnel practices, presentation of statistical evidence, and litigation outcomes. Policy capturing methods were utilized to analyze the components of 115 federal district court opinions involving age discrimination disparate treatment allegations and organizational downsizing. Univariate analyses revealed meaningful links between RIF personnel practices, use of statistical evidence, and judicial verdict. The defendant organization was awarded summary judgment in 73% of the claims included in the study. Judicial decisions in favor of the defendant organization were found to be significantly related to such variables as formal performance appraisal systems, termination decision review within the organization, methods of employee assessment and selection for termination, and the presence of a concrete layoff policy. The use of statistical evidence in ADEA disparate treatment litigation was investigated and found to be a potentially persuasive type of indirect evidence. Legal, personnel, and evidentiary ramifications are reviewed, and a framework of downsizing mechanics emphasizing legal defensibility is presented.

  20. Determining Spatio-Temporal Cadastral Data Requirement for Infrastructure of LADM for Turkey

    NASA Astrophysics Data System (ADS)

    Alkan, M.; Polat, Z. A.

    2016-06-01

    The nature of land title and cadastre (LTC) data in Turkey is temporally dynamic, depending on LTC operations. Functional requirements with respect to these characteristics were investigated through interviews with professionals in the public and private sectors: legal authorities; Land Registry and Cadastre offices; highway departments; foundations; the Ministries of Budget, Transportation, Justice, Public Works and Settlement, Environment and Forestry, Agriculture and Rural Affairs, Culture, and Internal Affairs; the State Institute of Statistics (SIS); execution offices; tax offices; real estate offices; the private sector; local governments; and banks. Spatio-temporal LTC data are, moreover, a key component of the infrastructure for the Land Administration Domain Model (LADM); the LTC data feeding LADM must therefore be not only up to date but also temporal. The investigation determined the temporal analyses required of LTC data, examined the traditional LTC system, and traced how temporal analyses are performed within it. In the traditional system, the temporal analyses needed by all these users cannot be performed rapidly and reliably, because the traditional LTC system is a manual archiving system. The aims and general contents of this paper are: (1) to describe the traditional LTC system of Turkey; and (2) to determine the spatio-temporal LTC data and analysis requirements for the LADM core domain model. Given these temporal and spatio-temporal LTC data needs, a new system design is essential for the Turkish LADM model, and designing and realizing an efficient and functional Temporal Geographic Information System (TGIS) is unavoidable for the Turkish LADM core infrastructure. The outcome of this paper is thus an infrastructure for designing and developing the LADM for Turkey.

  1. GSuite HyperBrowser: integrative analysis of dataset collections across the genome and epigenome.

    PubMed

    Simovski, Boris; Vodák, Daniel; Gundersen, Sveinung; Domanska, Diana; Azab, Abdulrahman; Holden, Lars; Holden, Marit; Grytten, Ivar; Rand, Knut; Drabløs, Finn; Johansen, Morten; Mora, Antonio; Lund-Andersen, Christin; Fromm, Bastian; Eskeland, Ragnhild; Gabrielsen, Odd Stokke; Ferkingstad, Egil; Nakken, Sigve; Bengtsen, Mads; Nederbragt, Alexander Johan; Thorarensen, Hildur Sif; Akse, Johannes Andreas; Glad, Ingrid; Hovig, Eivind; Sandve, Geir Kjetil

    2017-07-01

    Recent large-scale undertakings such as ENCODE and Roadmap Epigenomics have generated experimental data mapped to the human reference genome (as genomic tracks) representing a variety of functional elements across a large number of cell types. Despite the high potential value of these publicly available data for a broad variety of investigations, little attention has been given to the analytical methodology necessary for their widespread utilisation. We here present a first principled treatment of the analysis of collections of genomic tracks. We have developed novel computational and statistical methodology to permit comparative and confirmatory analyses across multiple and disparate data sources. We delineate a set of generic questions that are useful across a broad range of investigations and discuss the implications of choosing different statistical measures and null models. Examples include contrasting analyses across different tissues or diseases. The methodology has been implemented in a comprehensive open-source software system, the GSuite HyperBrowser. To make the functionality accessible to biologists, and to facilitate reproducible analysis, we have also developed a web-based interface providing an expertly guided and customizable way of utilizing the methodology. With this system, many novel biological questions can flexibly be posed and rapidly answered. Through a combination of streamlined data acquisition, interoperable representation of dataset collections, and customizable statistical analysis with guided setup and interpretation, the GSuite HyperBrowser represents a first comprehensive solution for integrative analysis of track collections across the genome and epigenome. The software is available at: https://hyperbrowser.uio.no. © The Author 2017. Published by Oxford University Press.

  2. Prevalence of Hyposalivation in Patients with Systemic Lupus Erythematosus in a Brazilian Subpopulation

    PubMed Central

    Leite, Cristhiane Almeida; Galera, Marcial Francis; Espinosa, Mariano Martínez; de Lima, Paulo Ricardo Teles; Fernandes, Vander; Borges, Álvaro Henrique; Dias, Eliane Pedra

    2015-01-01

    Background. Systemic lupus erythematosus (SLE) is a chronic inflammatory, multisystem, and autoimmune disease. Objective. The aim of this study was to describe the prevalence of hyposalivation in SLE patients and to evaluate associated factors. Methods. This is a cross-sectional study developed at the Cuiaba University General Hospital (UNIC-HGU), Mato Grosso, Brazil. The study population consisted of female SLE patients treated at this hospital from 06/2010 to 12/2012. Unstimulated salivary flow rates (SFRs) were measured. Descriptive and inferential analyses were performed in all cases using a significance level of P < 0.05. Results. The results showed that 79% of patients with systemic lupus erythematosus suffered from hyposalivation, and that disease activity and age in years were the factors that resulted in statistically significant differences. Conclusion. Disease activity, age >27 years, and the drugs used were factors associated with hyposalivation, resulting in a statistically significant decrease in saliva production. PMID:25649631

  3. Agent-Based Simulation and Analysis of a Defensive UAV Swarm Against an Enemy UAV Swarm

    DTIC Science & Technology

    2011-06-01

    ...this defensive swarm system, an agent-based simulation model is developed, and appropriate designs of experiments and statistical analyses are... development and implementation of counter UAV technology from readily-available commercial products. The organization leverages the “largest

  4. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  5. An Analysis Pipeline with Statistical and Visualization-Guided Knowledge Discovery for Michigan-Style Learning Classifier Systems

    PubMed Central

    Urbanowicz, Ryan J.; Granizo-Mackenzie, Ambrose; Moore, Jason H.

    2014-01-01

    Michigan-style learning classifier systems (M-LCSs) represent an adaptive and powerful class of evolutionary algorithms which distribute the learned solution over a sizable population of rules. However their application to complex real world data mining problems, such as genetic association studies, has been limited. Traditional knowledge discovery strategies for M-LCS rule populations involve sorting and manual rule inspection. While this approach may be sufficient for simpler problems, the confounding influence of noise and the need to discriminate between predictive and non-predictive attributes calls for additional strategies. Additionally, tests of significance must be adapted to M-LCS analyses in order to make them a viable option within fields that require such analyses to assess confidence. In this work we introduce an M-LCS analysis pipeline that combines uniquely applied visualizations with objective statistical evaluation for the identification of predictive attributes, and reliable rule generalizations in noisy single-step data mining problems. This work considers an alternative paradigm for knowledge discovery in M-LCSs, shifting the focus from individual rules to a global, population-wide perspective. We demonstrate the efficacy of this pipeline applied to the identification of epistasis (i.e., attribute interaction) and heterogeneity in noisy simulated genetic association data. PMID:25431544

  6. Leasing Into the Sun: A Mixed Method Analysis of Transactions of Homes with Third Party Owned Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoen, Ben; Rand, Joseph; Adomatis, Sandra

    This analysis is the first to examine whether homes with third-party owned (TPO) PV systems are unique in the marketplace as compared to non-PV or non-TPO PV homes. This is of growing importance, as the number of homes with TPO systems in the US is currently nearly half a million and growing. A hedonic pricing model analysis of 20,106 homes that sold in California between 2011 and 2013 is conducted, as well as a paired sales analysis of 18 pairs of TPO PV and non-PV homes in San Diego spanning 2012 and 2013. The hedonic model examined 2,914 non-TPO PV home sales and 113 TPO PV sales and fails to uncover statistically significant premiums for TPO PV homes or for those with pre-paid leases as compared to non-PV homes. Similarly, the paired sales analysis does not find evidence of an impact to value for the TPO homes when comparing to non-PV homes. Analyses of non-TPO PV sales, both here and previously, have found larger and statistically significant premiums. Collection of a larger dataset that covers the present period is recommended for future analyses so that smaller, more nuanced and recent effects can be discovered.

  7. Regional-scale input of dispersed and discrete volcanic ash to the Izu-Bonin and Mariana subduction zones

    NASA Astrophysics Data System (ADS)

    Scudder, Rachel P.; Murray, Richard W.; Schindlbeck, Julie C.; Kutterolf, Steffen; Hauff, Folkmar; McKinley, Claire C.

    2014-11-01

    We have geochemically and statistically characterized bulk marine sediment and ash layers at Ocean Drilling Program Site 1149 (Izu-Bonin Arc) and Deep Sea Drilling Project Site 52 (Mariana Arc), and have quantified that multiple dispersed ash sources collectively comprise ~30-35% of the hemipelagic sediment mass entering the Izu-Bonin-Mariana subduction system. Multivariate statistical analyses indicate that the bulk sediment at Site 1149 is a mixture of Chinese Loess, a second compositionally distinct eolian source, a dispersed mafic ash, and a dispersed felsic ash. We interpret the source of these ashes as, respectively, being basalt from the Izu-Bonin Front Arc (IBFA) and rhyolite from the Honshu Arc. Sr-, Nd-, and Pb isotopic analyses of the bulk sediment are consistent with the chemical/statistical-based interpretations. Comparison of the mass accumulation rate of the dispersed ash component to discrete ash layer parameters (thickness, sedimentation rate, and number of layers) suggests that eruption frequency, rather than eruption size, drives the dispersed ash record. At Site 52, the geochemistry and statistical modeling indicates that Chinese Loess, IBFA, dispersed BNN (boninite from Izu-Bonin), and a dispersed felsic ash of unknown origin are the sources. At Site 1149, the ash layers and the dispersed ash are compositionally coupled, whereas at Site 52 they are decoupled in that there are no boninite layers, yet boninite is dispersed within the sediment. Changes in the volcanic and eolian inputs through time indicate strong arc-related and climate-related controls.
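
    The partitioning step behind such estimates can be sketched as a non-negative least-squares mixing model: given end-member compositions, solve for the source mass fractions that best reproduce the bulk sediment. The compositions below are invented for illustration and are not the Site 1149 or Site 52 end members; SciPy is assumed:

      import numpy as np
      from scipy.optimize import nnls

      # rows: loess, mafic ash, felsic ash; columns: SiO2, Al2O3, MgO (wt%)
      sources = np.array([[65.0, 15.0, 2.5],
                          [50.0, 14.0, 7.5],
                          [74.0, 13.0, 0.3]])
      true_frac = np.array([0.60, 0.15, 0.25])
      bulk = true_frac @ sources                  # synthetic bulk sediment analysis

      frac, resid = nnls(sources.T, bulk)         # solve sources.T @ f = bulk, f >= 0
      print(frac.round(3), resid)                 # recovers ~[0.60, 0.15, 0.25]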

  8. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015

  9. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
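
    For readers who want script-based equivalents of the tests listed above, the sketch below runs the same families of tests with scipy.stats on made-up samples; it is not SOCR code (SOCR Analyses is a Java toolkit).

      # scipy.stats counterparts of tests named in the SOCR Analyses entry.
      import numpy as np
      from scipy import stats

      a = np.array([5.1, 4.8, 6.0, 5.5, 5.9])
      b = np.array([4.2, 4.9, 4.4, 5.0, 4.6])
      c = np.array([6.1, 5.8, 6.5, 6.0, 6.3])

      print(stats.ttest_ind(a, b))             # two-sample t-test (parametric)
      print(stats.ranksums(a, b))              # Wilcoxon rank-sum test
      print(stats.kruskal(a, b, c))            # Kruskal-Wallis test
      print(stats.friedmanchisquare(a, b, c))  # Friedman's test
      print(stats.fisher_exact([[8, 2], [1, 9]]))  # Fisher's exact test, 2x2 table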

  10. Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments

    NASA Astrophysics Data System (ADS)

    Munsky, Brian; Shepherd, Douglas

    2014-03-01

    Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.

  11. Evaluation of apical extrusion of debris and irrigant using two new reciprocating and one continuous rotation single file systems.

    PubMed

    Nayak, Gurudutt; Singh, Inderpreet; Shetty, Shashit; Dahiya, Surya

    2014-05-01

    Apical extrusion of debris and irrigants during cleaning and shaping of the root canal is one of the main causes of periapical inflammation and postoperative flare-ups. The purpose of this study was to quantitatively measure the amount of debris and irrigants extruded apically in single-rooted canals using two reciprocating and one rotary single-file nickel-titanium instrumentation systems. Sixty human mandibular premolars, randomly assigned to three groups (n = 20), were instrumented using two reciprocating (Reciproc and WaveOne) and one rotary (OneShape) single-file nickel-titanium systems. Bidistilled water was used as the irrigant with a traditional needle irrigation delivery system. Eppendorf tubes were used as the test apparatus for collection of debris and irrigant. The volume of extruded irrigant was collected and quantified via the 0.1-mL increments marked on a disposable plastic insulin syringe. The liquid inside the tubes was dried and the mean weight of debris was assessed using an electronic microbalance. The data were statistically analysed using the Kruskal-Wallis nonparametric test and the Mann-Whitney U test with Bonferroni adjustment. P-values less than 0.05 were considered significant. The Reciproc file system produced significantly more debris compared with the OneShape file system (P<0.05), but no statistically significant difference was obtained between the two reciprocating instruments (P>0.05). Extrusion of irrigant was statistically insignificant irrespective of the instrument or instrumentation technique used (P>0.05). Although all systems caused apical extrusion of debris and irrigant, continuous rotary instrumentation was associated with less extrusion as compared with the use of reciprocating file systems.
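
    The analysis pipeline described (omnibus Kruskal-Wallis test, then pairwise Mann-Whitney U tests at a Bonferroni-adjusted threshold) can be sketched as follows; the debris weights are invented and do not reproduce the study's data.

      import itertools
      from scipy import stats

      debris_mg = {  # hypothetical apically extruded debris per group
          "Reciproc": [1.9, 2.1, 1.7, 2.4, 2.0],
          "WaveOne":  [1.8, 1.6, 2.0, 1.9, 2.2],
          "OneShape": [1.1, 1.3, 0.9, 1.2, 1.0],
      }

      h, p = stats.kruskal(*debris_mg.values())
      print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

      pairs = list(itertools.combinations(debris_mg, 2))
      alpha_adj = 0.05 / len(pairs)  # Bonferroni adjustment
      for g1, g2 in pairs:
          u, p = stats.mannwhitneyu(debris_mg[g1], debris_mg[g2],
                                    alternative="two-sided")
          print(f"{g1} vs {g2}: U = {u:.1f}, p = {p:.4f}, "
                f"significant at adjusted alpha: {p < alpha_adj}")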

  12. Measuring anxiety after spinal cord injury: Development and psychometric characteristics of the SCI-QOL Anxiety item bank and linkage with GAD-7.

    PubMed

    Kisala, Pamela A; Tulsky, David S; Kalpakjian, Claire Z; Heinemann, Allen W; Pohlig, Ryan T; Carle, Adam; Choi, Seung W

    2015-05-01

    Objective: To develop a calibrated item bank and computer adaptive test to assess anxiety symptoms in individuals with spinal cord injury (SCI), transform scores to the Patient Reported Outcomes Measurement Information System (PROMIS) metric, and create a statistical linkage with the Generalized Anxiety Disorder (GAD)-7, a widely used anxiety measure. Design: Grounded-theory based qualitative item development methods; large-scale item calibration field testing; confirmatory factor analysis; graded response model item response theory analyses; statistical linking techniques to transform scores to a PROMIS metric; and linkage with the GAD-7. Setting: Five SCI Model System centers and one Department of Veterans Affairs medical center in the United States. Participants: Adults with traumatic SCI. Main outcome measure: Spinal Cord Injury-Quality of Life (SCI-QOL) Anxiety Item Bank. Results: Seven hundred sixteen individuals with traumatic SCI completed 38 items assessing anxiety, 17 of which were PROMIS items. After 13 items (including 2 PROMIS items) were removed, factor analyses confirmed unidimensionality. Item response theory analyses were used to estimate slopes and thresholds for the final 25 items (15 from PROMIS). The observed Pearson correlation between the SCI-QOL Anxiety and GAD-7 scores was 0.67. Conclusions: The SCI-QOL Anxiety item bank demonstrates excellent psychometric properties and is available as a computer adaptive test or short form for research and clinical applications. SCI-QOL Anxiety scores have been transformed to the PROMIS metric and we provide a method to link SCI-QOL Anxiety scores with those of the GAD-7.

  13. Patterns of exchange of forensic DNA data in the European Union through the Prüm system.

    PubMed

    Santos, Filipe; Machado, Helena

    2017-07-01

    This paper presents a study of the 5-year operation (2011-2015) of the transnational exchange of forensic DNA data between Member States of the European Union (EU) for the purpose of combating cross-border crime and terrorism within the so-called Prüm system. This first systematisation of the full official statistical dataset provides an overall assessment of the match figures and patterns of operation of the Prüm system for DNA exchange. These figures and patterns are analysed in terms of the differentiated contributions by participating EU Member States. The data suggest a trend for West and Central European countries to concentrate the majority of Prüm matches, while DNA databases of Eastern European countries tend to contribute with profiles of people that match stains in other countries. In view of the necessary transparency and accountability of the Prüm system, more extensive and informative statistics would be an important contribution to the assessment of its functioning and societal benefits. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Shallow plumbing systems inferred from spatial analysis of pockmark arrays

    NASA Astrophysics Data System (ADS)

    Maia, A.; Cartwright, J. A.; Andersen, E.

    2016-12-01

    This study describes and analyses an extraordinary array of pockmarks at the modern seabed of the Lower Congo Basin (offshore Angola) in order to understand the fluid migration routes and shallow plumbing system of the area. The 3D seismic visualization of feeding conduits (pipes) allowed the identification of the source interval for the fluids expelled during pockmark formation. Spatial statistics are used to show the relationship between the underlying polarised polygonal fault (PPF) patterns and the seabed pockmark distribution. Our results show that PPFs control the linear arrangement of pockmarks and feeder pipes along fault strike, but that the faults do not act as conduits. Spatial statistics also revealed that pockmark occurrence is not random: at short distances to nearest neighbours (<200 m), anti-clustered distributions suggest the presence of an exclusion zone around each pockmark in which no other pockmark will form. The results of this study are relevant for the understanding of shallow fluid plumbing systems in offshore settings, with implications for our current knowledge of overall fluid flow systems in hydrocarbon-rich continental margins.
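
    The anti-clustering test described above can be sketched with nearest-neighbour distances compared against complete spatial randomness (CSR); the coordinates below are random stand-ins, not the mapped pockmark centres, so no exclusion zone will actually appear here.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(1)

      def nn_distances(points):
          """Distance from each point to its nearest neighbour."""
          d, _ = cKDTree(points).query(points, k=2)  # k=1 is the point itself
          return d[:, 1]

      n, size = 400, 10_000.0  # 400 pockmarks on a 10 km x 10 km map (metres)
      observed = rng.uniform(0, size, (n, 2))  # stand-in for mapped centres

      frac_obs = (nn_distances(observed) < 200.0).mean()
      frac_csr = np.mean([(nn_distances(rng.uniform(0, size, (n, 2))) < 200.0).mean()
                          for _ in range(99)])  # Monte Carlo CSR reference
      print(f"share of NN distances < 200 m: observed = {frac_obs:.3f}, "
            f"CSR = {frac_csr:.3f}")
      # An observed share well below the CSR value indicates anti-clustering,
      # i.e. an exclusion zone around each pockmark.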

  15. [Met-enkephalin in the cerebrospinal fluid as an indicator of central nervous system injury in meningitis and encephalitis].

    PubMed

    Cieśla, Andrzej; Pierzchała-Koziec, Krystyna; Mach, Tomasz; Garlicki, Aleksander; Bociaga-Jasik, Monika

    2005-05-01

    Assessment of met-enkephalin levels in the cerebrospinal fluid (CSF) of patients with an inflammatory process of the central nervous system (CNS) was performed to estimate the role of the opioid system in viral and bacterial meningitis and encephalitis. The met-enkephalin level, protein concentration and pleocytosis were analysed in the CSF of 53 patients with viral or bacterial meningitis or encephalitis, and in a control group of patients without inflammatory disease of the CNS. The largest differences were observed between the group of patients with bacterial meningitis and those without inflammatory disease of the CNS, but they were statistically insignificant. There was no correlation between the met-enkephalin level and markers of the inflammatory process in CSF, such as pleocytosis and protein concentration. We also found no correlation between the etiological agent of CNS infection and the opioid system of the brain. Although the changes observed in this study were statistically insignificant, we suggest continuing these investigations with additional parameters characteristic of CNS diseases.

  16. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
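
    A naive version of the idea (a mean difference standardized by pooled within-case variation) is sketched below on invented data; the published d-statistic additionally corrects for autocorrelation and between-case variance, which this sketch omits, and the authors' SPSS macros implement the full estimator.

      import numpy as np

      cases = [  # (baseline phase, treatment phase) per case, invented data
          (np.array([3.0, 2.5, 3.2, 2.8]), np.array([5.1, 5.6, 4.9, 5.4])),
          (np.array([4.1, 3.8, 4.0]),      np.array([6.0, 6.3, 5.8])),
          (np.array([2.2, 2.6, 2.4, 2.5]), np.array([4.4, 4.1, 4.6])),
      ]

      diffs = [t.mean() - b.mean() for b, t in cases]
      pooled_var = np.mean([np.concatenate([b - b.mean(), t - t.mean()]).var(ddof=2)
                            for b, t in cases])  # df = n_b + n_t - 2 per case
      d = np.mean(diffs) / np.sqrt(pooled_var)
      print(f"naive single-case d = {d:.2f}")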

  17. Coastal and Marine Bird Data Base

    USGS Publications Warehouse

    Anderson, S.H.; Geissler, P.H.; Dawson, D.K.

    1980-01-01

    Summary: This report discusses the development of a coastal and marine bird data base at the Migratory Bird and Habitat Research Laboratory. The system is compared with other data bases, and suggestions for future development, such as possible adaptations for other taxonomic groups, are included. The data base is based on the Statistical Analysis System but includes extensions programmed in PL/I. The Appendix shows how the system evolved. Output examples are given for heron data and pelagic bird data which indicate the types of analyses that can be conducted and output figures. The Appendixes include a retrieval language user's guide and description of the retrieval process and listing of translator program.

  18. Finnish upper secondary students' collaborative processes in learning statistics in a CSCL environment

    NASA Astrophysics Data System (ADS)

    Kaleva Oikarinen, Juho; Järvelä, Sanna; Kaasila, Raimo

    2014-04-01

    This design-based research project focuses on documenting statistical learning among 16-17-year-old Finnish upper secondary school students (N = 78) in a computer-supported collaborative learning (CSCL) environment. One novel value of this study is in reporting the shift from teacher-led mathematical teaching to autonomous small-group learning in statistics. The main aim of this study is to examine how student collaboration occurs in learning statistics in a CSCL environment. The data include material from videotaped classroom observations and the researcher's notes. In this paper, the inter-subjective phenomena of students' interactions in a CSCL environment are analysed by using a contact summary sheet (CSS). The development of the multi-dimensional coding procedure of the CSS instrument is presented. Aptly selected video episodes were transcribed and coded in terms of conversational acts, which were divided into non-task-related and task-related categories to depict students' levels of collaboration. The results show that collaborative learning (CL) can facilitate cohesion and responsibility and reduce students' feelings of detachment in our classless, periodic school system. The interactive .pdf material and collaboration in small groups enable statistical learning. It is concluded that CSCL is one possible method of promoting statistical teaching. CL using interactive materials seems to foster and facilitate statistical learning processes.

  19. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  20. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspect of the statistical analyses used in Dommenget and Latif, which raises concerns about some of the reported climate modes, is demonstrated. Adopting simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown, and then the limitations of varimax and regression analyses in capturing the climate mode are discussed.
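
    For orientation, EOFs are typically obtained from an SVD of the anomaly matrix, as in the generic sketch below (synthetic data, unrelated to either paper); the caveat at issue in both comments is that these orthogonal patterns need not correspond to physical modes.

      import numpy as np

      rng = np.random.default_rng(0)
      field = rng.standard_normal((240, 500))  # 240 months x 500 grid points
      anom = field - field.mean(axis=0)        # remove the time mean
      u, s, vt = np.linalg.svd(anom, full_matrices=False)

      eofs = vt              # spatial patterns (one per row)
      pcs = u * s            # principal-component time series
      var_frac = s**2 / np.sum(s**2)
      print("variance explained by first 3 EOFs:", np.round(var_frac[:3], 3))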

  1. Incorporating GIS building data and census housing statistics for sub-block-level population estimation

    USGS Publications Warehouse

    Wu, S.-S.; Wang, L.; Qiu, X.

    2008-01-01

    This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to that for sub-census-block areas of the same size relative to the associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
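
    The core of such a deterministic model is a proportional allocation; a minimal sketch follows, using only building volume (the published model also folds in the three block-level housing statistics, omitted here, and all numbers are hypothetical).

      # Allocate a block's census population to sub-block areas in
      # proportion to residential building volume.
      block_population = 420.0
      building_volume_m3 = {"area_A": 18_000.0, "area_B": 9_000.0, "area_C": 3_000.0}

      total = sum(building_volume_m3.values())
      estimates = {area: block_population * vol / total
                   for area, vol in building_volume_m3.items()}
      print(estimates)  # {'area_A': 252.0, 'area_B': 126.0, 'area_C': 42.0}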

  2. Geostatistics and GIS: tools for characterizing environmental contamination.

    PubMed

    Henshaw, Shannon L; Curriero, Frank C; Shields, Timothy M; Glass, Gregory E; Strickland, Paul T; Breysse, Patrick N

    2004-08-01

    Geostatistics is a set of statistical techniques used in the analysis of georeferenced data that can be applied to environmental contamination and remediation studies. In this study, the 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) contamination at a Superfund site in western Maryland is evaluated. Concern about the site and its future cleanup has triggered interest within the community because residential development surrounds the area. Spatial statistical methods, of which geostatistics is a subset, are becoming increasingly popular, in part due to the availability of geographic information system (GIS) software in a variety of application packages. In this article, the joint use of ArcGIS software and the R statistical computing environment is demonstrated as an approach for comprehensive geostatistical analyses. The spatial regression method, kriging, is used to provide predictions of DDE levels at unsampled locations both within the site and in the surrounding areas where residential development is ongoing.

  3. Trends in selected streamflow statistics at 19 long-term streamflow-gaging stations indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico, 1922-2009

    USGS Publications Warehouse

    Barbie, Dana L.; Wehmeyer, Loren L.

    2012-01-01

    Trends in selected streamflow statistics during 1922-2009 were evaluated at 19 long-term streamflow-gaging stations considered indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico. The U.S. Geological Survey, in cooperation with the Texas Water Development Board, evaluated streamflow data from streamflow-gaging stations with more than 50 years of record that were active as of 2009. The outflows into Arkansas and Louisiana were represented by 3 streamflow-gaging stations, and outflows into the Gulf of Mexico, including Galveston Bay, were represented by 16 streamflow-gaging stations. Monotonic trend analyses were done using the following three streamflow statistics generated from daily mean values of streamflow: (1) annual mean daily discharge, (2) annual maximum daily discharge, and (3) annual minimum daily discharge. The trend analyses were based on the nonparametric Kendall's Tau test, which is useful for the detection of monotonic upward or downward trends with time. A total of 69 trend analyses by Kendall's Tau were computed - 19 periods of streamflow multiplied by the 3 streamflow statistics plus 12 additional trend analyses because the periods of record for 2 streamflow-gaging stations were divided into periods representing pre- and post-reservoir impoundment. Unless otherwise described, each trend analysis used the entire period of record for each streamflow-gaging station. The monotonic trend analysis detected 11 statistically significant downward trends, 37 instances of no trend, and 21 statistically significant upward trends. One general region studied, which seemingly has relatively more upward trends for many of the streamflow statistics analyzed, includes the rivers and associated creeks and bayous to Galveston Bay in the Houston metropolitan area. Lastly, the most western river basins considered (the Nueces and Rio Grande) had statistically significant downward trends for many of the streamflow statistics analyzed.
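
    The trend test used here is standard; a sketch of Kendall's Tau on one synthetic annual-discharge series (not USGS data) looks like this.

      import numpy as np
      from scipy import stats

      years = np.arange(1922, 2010)
      rng = np.random.default_rng(7)
      flow = 100.0 - 0.3 * (years - 1922) + rng.normal(0, 8, years.size)

      tau, p = stats.kendalltau(years, flow)
      print(f"Kendall's tau = {tau:.3f}, p = {p:.2e}")
      if p < 0.05:
          print("statistically significant",
                "downward" if tau < 0 else "upward", "trend")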

  4. Performance evaluation of a hybrid-passive landfill leachate treatment system using multivariate statistical techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, Jack, E-mail: jack.wallace@ce.queensu.ca; Champagne, Pascale, E-mail: champagne@civil.queensu.ca; Monnier, Anne-Charlotte, E-mail: anne-charlotte.monnier@insa-lyon.fr

    Highlights: • Performance of a hybrid passive landfill leachate treatment system was evaluated. • 33 water chemistry parameters were sampled for 21 months and statistically analyzed. • Parameters were strongly linked and explained most (>40%) of the variation in data. • Alkalinity, ammonia, COD, heavy metals, and iron were criteria for performance. • Eight other parameters were key in modeling system dynamics and criteria. - Abstract: A pilot-scale hybrid-passive treatment system operated at the Merrick Landfill in North Bay, Ontario, Canada, treats municipal landfill leachate and provides for subsequent natural attenuation. Collected leachate is directed to a hybrid-passive treatment system, followed by controlled release to a natural attenuation zone before entering the nearby Little Sturgeon River. The study presents a comprehensive evaluation of the performance of the system using multivariate statistical techniques to determine the interactions between parameters, major pollutants in the leachate, and the biological and chemical processes occurring in the system. Five parameters (ammonia, alkalinity, chemical oxygen demand (COD), “heavy” metals of interest, with atomic weights above calcium, and iron) were set as criteria for the evaluation of system performance based on their toxicity to aquatic ecosystems and importance in treatment with respect to discharge regulations. System data for a full range of water quality parameters over a 21-month period were analyzed using principal components analysis (PCA), as well as principal components (PC) and partial least squares (PLS) regressions. PCA indicated a high degree of association for most parameters with the first PC, which explained a high percentage (>40%) of the variation in the data, suggesting strong statistical relationships among most of the parameters in the system. Regression analyses identified 8 parameters (set as independent variables) that were most frequently retained for modeling the five criteria parameters (set as dependent variables) on a statistically significant level: conductivity, dissolved oxygen (DO), nitrite (NO2-), organic nitrogen (N), oxidation reduction potential (ORP), pH, sulfate and total volatile solids (TVS). The criteria parameters and the significant explanatory parameters were most important in modeling the dynamics of the passive treatment system during the study period. Such techniques and procedures were found to be highly valuable and could be applied to other sites to determine parameters of interest in similar naturalized engineered systems.
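
    The PCA step described above can be sketched in a few lines; the random matrix below is built to share one strong component and merely stands in for the 33 standardized water-chemistry parameters.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)
      common = rng.standard_normal((90, 1))  # one shared driver
      X = common @ rng.standard_normal((1, 33)) + 0.7 * rng.standard_normal((90, 33))

      pca = PCA()
      pca.fit(StandardScaler().fit_transform(X))
      print("PC1 variance explained:", round(pca.explained_variance_ratio_[0], 2))
      # A first component explaining >40% of the variance, as reported above,
      # indicates strongly linked parameters.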

  5. Functional genomics annotation of a statistical epistasis network associated with bladder cancer susceptibility.

    PubMed

    Hu, Ting; Pan, Qinxin; Andrew, Angeline S; Langer, Jillian M; Cole, Michael D; Tomlinson, Craig R; Karagas, Margaret R; Moore, Jason H

    2014-04-11

    Several different genetic and environmental factors have been identified as independent risk factors for bladder cancer in population-based studies. Recent studies have turned to understanding the role of gene-gene and gene-environment interactions in determining risk. We previously developed the bioinformatics framework of statistical epistasis networks (SEN) to characterize the global structure of interacting genetic factors associated with a particular disease or clinical outcome. By applying SEN to a population-based study of bladder cancer among Caucasians in New Hampshire, we were able to identify a set of connected genetic factors with strong and significant interaction effects on bladder cancer susceptibility. To support our statistical findings using networks, in the present study, we performed pathway enrichment analyses on the set of genes identified using SEN, and found that they are associated with the carcinogen benzo[a]pyrene, a component of tobacco smoke. We further carried out an mRNA expression microarray experiment to validate statistical genetic interactions, and to determine if the set of genes identified in the SEN were differentially expressed in a normal bladder cell line and a bladder cancer cell line in the presence or absence of benzo[a]pyrene. Significant nonrandom sets of genes from the SEN were found to be differentially expressed in response to benzo[a]pyrene in both the normal bladder cells and the bladder cancer cells. In addition, the patterns of gene expression were significantly different between these two cell types. The enrichment analyses and the gene expression microarray results support the idea that SEN analysis of bladder cancer in population-based studies is able to identify biologically meaningful statistical patterns. These results bring us a step closer to a systems genetics approach to understanding cancer susceptibility that integrates population and laboratory-based studies.

  6. iTTVis: Interactive Visualization of Table Tennis Data.

    PubMed

    Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui

    2018-01-01

    The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.

  7. Real-time Data Display System of the Korean Neonatal Network

    PubMed Central

    Lee, Byong Sop; Moon, Wi Hwan

    2015-01-01

    Real-time data reporting in clinical research networks can provide network members with interim analyses of the registered data, which can facilitate further studies and quality improvement activities. The aim of this report is to describe the building process of the data display system (DDS) of the Korean Neonatal Network (KNN) and its basic structure. After member verification at the KNN member site, users can choose a variable of interest that is listed in the in-hospital data statistics (90 variables) or in the follow-up data statistics (54 variables). The statistical results for the outcome variables are displayed in HyperText Markup Language 5-based chart graphs and tables. Participating hospitals can compare their performance to that of the KNN as a whole and identify trends over time. The ranking of each participating hospital is also displayed in terms of key outcome variables such as mortality and major neonatal morbidities, with the names of other centers blinded. The most powerful function of the DDS is the ability to perform 'conditional filtering', which allows users to review only the records of interest. Further collaboration is needed to upgrade the DDS to a more sophisticated analytical system and to provide a more user-friendly interface. PMID:26566352

  8. A statistical characterization of the Galileo-to-GPS inter-system bias

    NASA Astrophysics Data System (ADS)

    Gioia, Ciro; Borio, Daniele

    2016-11-01

    Global navigation satellite systems operate using independent time scales, and thus inter-system time offsets have to be determined to enable multi-constellation navigation solutions. GPS/Galileo inter-system bias and drift are evaluated here using different types of receivers: two mass-market and two professional receivers. Moreover, three different approaches are considered for inter-system bias determination: in the first, the broadcast Galileo-to-GPS time offset is used to align the GPS and Galileo time scales. In the second, the inter-system bias is included in the multi-constellation navigation solution and is estimated from the available measurements. Finally, an enhanced algorithm using constraints on the inter-system bias time evolution is proposed. The inter-system bias estimates obtained with the different approaches are analysed and their stability is experimentally evaluated using the Allan deviation. The impact of the inter-system bias on the position, velocity, and time solution is also considered, and the performance of the approaches analysed is evaluated in terms of standard deviation and mean errors for both horizontal and vertical components. From the experiments, it emerges that the inter-system bias is very stable and that the use of constraints modelling the GPS/Galileo inter-system bias behaviour significantly improves the performance of multi-constellation navigation.
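
    The stability measure used above, the Allan deviation, is easy to compute directly; the sketch below uses a plain non-overlapping estimator on a synthetic 1 Hz inter-system bias series (constant offset plus slow random walk plus white noise), not the paper's receiver data.

      import numpy as np

      def allan_deviation(x, tau_samples):
          """Non-overlapping Allan deviation for an averaging window of
          tau_samples samples (x sampled at a fixed rate)."""
          n_bins = x.size // tau_samples
          y = x[: n_bins * tau_samples].reshape(n_bins, tau_samples).mean(axis=1)
          return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

      rng = np.random.default_rng(5)
      bias = (30.0 + np.cumsum(rng.normal(0, 0.002, 86_400))
              + rng.normal(0, 0.5, 86_400))  # metres, one day at 1 Hz

      for tau in (1, 10, 100, 1000):
          print(f"tau = {tau:5d} s  ADEV = {allan_deviation(bias, tau):.4f} m")
      # ADEV falling with tau shows white noise averaging down, i.e. a bias
      # that is stable over those timescales.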

  9. Reliable mortality statistics for Turkey: Are we there yet?

    PubMed

    Özdemir, Raziye; Rao, Chalapati; Öcek, Zeliha; Dinç Horasan, Gönül

    2015-06-10

    The Turkish government has implemented several reforms to improve the Turkish Statistical Institute Death Reporting System (TURKSTAT-DRS) since 2009. However, there has been no assessment to evaluate the impact of these reforms on causes of death statistics. This study attempted to analyse the impact of these reforms on the TURKSTAT-DRS for Turkey, and in the case of Izmir, one of the most developed provinces in Turkey. The evaluation framework comprised three main components each with specific criteria. Firstly, data from TURKSTAT for Turkey and Izmir for the periods 2001-2008 and 2009-2013 were assessed in terms of the following dimensions that represent quality of mortality statistics (a. completeness of death registration, b. trends in proportions of deaths with ill-defined causes). Secondly, the quality of information recorded on individual death certificates from Izmir in 2010 was analysed for a. missing information, b. timeliness of death notifications and c. characteristics of deaths with ill-defined causes. Finally, TURKSTAT data were analysed to estimate life tables and summary mortality indicators for Turkey and Izmir, as well as the leading causes-of-death in Turkey in 2013. Registration of adult deaths in Izmir as well as at the national level for Turkey has considerably improved since the introduction of reforms in 2009, along with marked decline in the proportions of deaths assigned ill-defined causes. Death certificates from Izmir indicated significant gaps in recorded information for demographic as well as epidemiological variables, particularly for infant deaths, and in the detailed recording of causes of death. Life expectancy at birth estimated from local data is 3-4 years higher than similar estimates for Turkey from international studies, and this requires further investigation and confirmation. The TURKSTAT-DRS is now an improved source of mortality and cause of death statistics for Turkey. The reliability and validity of TURKSTAT data needs to be established through a detailed research program to evaluate completeness of death registration and validity of registered causes of death. Similar evaluation and data analysis of mortality indicators is required at regular intervals at national and sub-national level, to increase confidence in their utility as primary data for epidemiology and health policy.

  10. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with those of similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with significant results ranging from 47% to 86% across journals. Multivariable modeling identified geographical area and the involvement of a statistician as predictors of statistically significant results. Compared with interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared with other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared with other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Quantifying the Relationship between AMC Resources and U.S. Army Materiel Readiness

    DTIC Science & Technology

    1989-08-25

    Resource Management report 984 for the same period. Insufficient data precluded analysis of the OMA PEs Total Package Fielding and Life Cycle Software...procurement, had the greatest failure rates when subjected to the statistical tests merely because of the reduced number of data pairs. Analyses of...ENGINEERING DEVELOPMENT 6.5 - MANAGEMENT AND SUPPORT 6.7 - OPERATIONAL SYSTEM DEVELOPMENT P2 - GENERAL PURPOSE FORCES P3 - INTELLIGENCE AND COMMUNICATIONS P7

  12. Angiogenesis and lymphangiogenesis as prognostic factors after therapy in patients with cervical cancer

    PubMed Central

    Makarewicz, Roman; Kopczyńska, Ewa; Marszałek, Andrzej; Goralewska, Alina; Kardymowicz, Hanna

    2012-01-01

    Aim of the study: This retrospective study attempts to evaluate the influence of serum vascular endothelial growth factor C (VEGF-C), microvessel density (MVD) and lymphatic vessel density (LMVD) on the result of tumour treatment in women with cervical cancer. Material and methods: The research was carried out in a group of 58 patients scheduled for brachytherapy for cervical cancer. All women were patients of the Department and University Hospital of Oncology and Brachytherapy, Collegium Medicum in Bydgoszcz of Nicolaus Copernicus University in Toruń. VEGF-C was determined by means of a quantitative sandwich enzyme immunoassay using a human VEGF-C ELISA produced by Bender MedSystem, an enzyme-linked immunosorbent assay detecting the activity of human VEGF-C in body fluids. The measure of the intensity of angiogenesis and lymphangiogenesis in immunohistochemical reactions is the number of blood vessels within the tumour. Statistical analysis was done using Statistica 6.0 software (StatSoft, Inc. 2001). The Cox proportional hazards model was used for univariate and multivariate analyses. Univariate analysis of overall survival was performed as outlined by Kaplan and Meier. In all statistical analyses p < 0.05 was taken as significant. Results: In 51 patients who attended follow-up examination, the influence of the factors of angiogenesis and lymphangiogenesis, patients’ age and the level of haemoglobin at the end of treatment was assessed. Selected variables, such as patients’ age, lymph vessel density (LMVD), microvessel density (MVD) and the level of haemoglobin (Hb) before treatment, were analysed by means of Cox regression as potential prognostic factors for lymph node invasion. The observed differences were statistically significant for haemoglobin level before treatment and platelet number after treatment. The study revealed the following prognostic factors: lymph node status, FIGO stage, and kind of treatment. No statistically significant influence of angiogenic and lymphangiogenic factors on the prognosis was found. Conclusion: Angiogenic and lymphangiogenic factors have no value in predicting response to radiotherapy in cervical cancer patients. PMID:23788848

  13. Ice tracking techniques, implementation, performance, and applications

    NASA Technical Reports Server (NTRS)

    Rothrock, D. A.; Carsey, F. D.; Curlander, J. C.; Holt, B.; Kwok, R.; Weeks, W. F.

    1992-01-01

    Present techniques of ice tracking make use both of cross-correlation and of edge tracking, the former being more successful in heavy pack ice, the latter being critical for the broken ice of the pack margins. Algorithms must assume some constraints on the spatial variations of displacements to eliminate fliers, but must avoid introducing any errors into the spatial statistics of the measured displacement field. We draw our illustrations from the implementation of an automated tracking system for kinematic analyses of ERS-1 and JERS-1 SAR imagery at the University of Alaska - the Alaska SAR Facility's Geophysical Processor System. Analyses of the ice kinematic data that might have some general interest to analysts of cloud-derived wind fields are the spatial structure of the fields, and the evaluation and variability of average deformation and its invariants: divergence, vorticity and shear. Many problems in sea ice dynamics and mechanics can be addressed with the kinematic data from SAR.
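
    The correlation side of such trackers can be sketched as normalized cross-correlation of a patch from the first image against a search window in the second; the arrays below are synthetic and the displacement is imposed, so the sketch only illustrates the mechanics.

      import numpy as np

      rng = np.random.default_rng(4)
      scene = rng.standard_normal((128, 128))
      template = scene[40:56, 60:76].copy()  # 16x16 ice patch, first image
      later = np.roll(np.roll(scene, 5, axis=0), -3, axis=1)  # true motion (5, -3)

      def ncc(img, tmpl, r0, c0):
          """Normalized cross-correlation of tmpl against img at (r0, c0)."""
          win = img[r0:r0 + tmpl.shape[0], c0:c0 + tmpl.shape[1]]
          w, t = win - win.mean(), tmpl - tmpl.mean()
          return (w * t).sum() / np.sqrt((w**2).sum() * (t**2).sum())

      best = max(((ncc(later, template, r, c), r, c)
                  for r in range(30, 66) for c in range(50, 86)),
                 key=lambda x: x[0])
      print("peak NCC:", round(best[0], 3),
            "estimated displacement:", (best[1] - 40, best[2] - 60))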

  14. Fatigue reliability of deck structures subjected to correlated crack growth

    NASA Astrophysics Data System (ADS)

    Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.

    2013-12-01

    The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, based on which the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinals. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
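
    The Monte Carlo step can be sketched with the Paris-Erdogan law da/dN = C (dK)^m and dK = Y dS sqrt(pi a); the sketch below integrates a single crack in closed form and randomizes only C, so the crack-to-crack correlation that is the point of the paper is deliberately omitted, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(11)
      n_sim, n_cycles = 10_000, 10_000_000
      a0, a_crit = 0.5e-3, 20e-3   # initial / critical crack size (m)
      m, Y, dS = 3.0, 1.0, 80.0    # Paris exponent, geometry factor, stress range (MPa)
      C = rng.lognormal(np.log(3e-12), 0.3, n_sim)  # scatter in the Paris coefficient

      # Closed-form Paris-law integration (valid for m != 2):
      # a_N^(1-m/2) = a0^(1-m/2) + (1-m/2) * C * (Y*dS*sqrt(pi))^m * N
      k = 1 - m / 2
      base = a0**k + k * C * (Y * dS * np.sqrt(np.pi)) ** m * n_cycles
      a_end = np.where(base > 0, base ** (1 / k), np.inf)  # base <= 0: unbounded growth

      p_fail = np.mean(a_end >= a_crit)
      print(f"P(crack >= {a_crit * 1e3:.0f} mm after {n_cycles:,} cycles) = {p_fail:.3f}")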

  15. Lindemann histograms as a new method to analyse nano-patterns and phases

    NASA Astrophysics Data System (ADS)

    Makey, Ghaith; Ilday, Serim; Tokel, Onur; Ibrahim, Muhamet; Yavuz, Ozgun; Pavlov, Ihor; Gulseren, Oguz; Ilday, Omer

    The detection, observation, and analysis of material phases and atomistic patterns are of great importance for understanding systems exhibiting both equilibrium and far-from-equilibrium dynamics. As such, there is intense research on phase transitions and pattern dynamics in soft matter, statistical and nonlinear physics, and polymer physics. In order to identify phases and nano-patterns, the pair correlation function is commonly used. However, this approach is limited in its ability to recognize competing patterns in dynamic systems, and it lacks visualisation capabilities. To overcome these limitations, we introduce Lindemann histogram quantification as an alternative method to analyse solid, liquid, and gas phases, along with hexagonal, square, and amorphous nano-pattern symmetries. We show that the proposed approach, based on a Lindemann parameter calculated per particle, maps local number densities to material phases or particle patterns. We apply the Lindemann histogram method to experimental data on dynamical colloidal self-assembly and identify competing patterns.
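
    A minimal version of the per-particle quantity can be sketched as the RMS positional fluctuation over frames divided by the mean nearest-neighbour spacing, histogrammed over particles; the trajectory below is synthetic (half stiff, half mobile), and the exact normalisation in the paper may differ.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(2)
      n_frames, n_particles = 200, 400
      ref = rng.uniform(0, 20.0, (n_particles, 2))             # reference positions
      amp = np.where(np.arange(n_particles) < 200, 0.02, 0.3)  # stiff vs mobile
      traj = ref + amp[None, :, None] * rng.standard_normal((n_frames, n_particles, 2))

      mean_pos = traj.mean(axis=0)
      rms_fluct = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))
      nn_mean = cKDTree(mean_pos).query(mean_pos, k=2)[0][:, 1].mean()
      lindemann = rms_fluct / nn_mean

      print("median (stiff half, mobile half):",
            np.median(lindemann[:200]).round(3), np.median(lindemann[200:]).round(3))
      # A bimodal histogram of these values flags coexisting phases/patterns;
      # a common rule of thumb places melting near ~0.1.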

  16. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  17. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.

  18. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  19. Rockslide susceptibility and hazard assessment for mitigation works design along vertical rocky cliffs: workflow proposal based on a real case-study conducted in Sacco (Campania), Italy

    NASA Astrophysics Data System (ADS)

    Pignalosa, Antonio; Di Crescenzo, Giuseppe; Marino, Ermanno; Terracciano, Rosario; Santo, Antonio

    2015-04-01

    The work presented here concerns a case study in which a complete multidisciplinary workflow was applied for an extensive assessment of rockslide susceptibility and hazard in a common scenario: vertical, fractured rocky cliffs. The studied area is located in a high-relief zone in Southern Italy (Sacco, Salerno, Campania), characterized by wide vertical rocky cliffs formed by thick, tectonized successions of shallow-water limestones. The study comprised the following phases: a) topographic surveying integrating 3d laser scanning, photogrammetry and GNSS; b) geological surveying, characterization of single instabilities and geomechanical surveying, conducted by geologist rock climbers; c) processing of 3d data and reconstruction of high-resolution geometrical models; d) structural and geomechanical analyses; e) data filing in a GIS-based spatial database; f) geo-statistical and spatial analyses and mapping of the whole set of data; and g) 3D rockfall analysis. The main goals of the study have been a) to set up an investigation method that achieves a complete and thorough characterization of the slope stability conditions, and b) to provide a detailed base for an accurate definition of the reinforcement and mitigation systems. For these purposes, the most up-to-date methods of field surveying, remote sensing, 3d modelling and geospatial data analysis have been integrated in a systematic workflow, accounting for the economic sustainability of the whole project. A novel integrated approach has been applied, fusing both deterministic and statistical surveying methods. This approach made it possible to deal with the wide extent of the studied area (nearly 200,000 m2) without compromising the accuracy of the results. The deterministic phase, based on field characterization of single instabilities and their further analysis on 3d models, has been applied to delineate the peculiarities of each single feature. The statistical approach, based on geostructural field mapping and on point geomechanical data from scan-line surveying, allowed the partitioning of the rock mass into homogeneous geomechanical sectors and data interpolation through bounded geostatistical analyses on 3d models. All data resulting from both approaches have been referenced and filed in a single spatial database and considered in global geo-statistical analyses to derive a fully modelled and comprehensive evaluation of the rockslide susceptibility. The described workflow yielded the following innovative results: a) a detailed census of single potential instabilities, through a spatial database recording the geometrical, geological and mechanical features, along with the expected failure modes; b) a high-resolution characterization of the rockslide susceptibility of the whole slope, based on the partitioning of the area according to stability and mechanical conditions that can be directly related to specific hazard mitigation systems; c) the exact extent of the area exposed to rockslide hazard, along with the dynamic parameters of the expected phenomena; and d) an intervention design for hazard mitigation.

  20. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
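
    For a concrete handle on those quantities, the sketch below computes mean degree, clustering, and average path length for a toy Watts-Strogatz graph with networkx; it stands in for, and is unrelated to, the semantic networks analysed in the article.

      import networkx as nx

      g = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)
      print("mean degree:", 2 * g.number_of_edges() / g.number_of_nodes())
      print("average clustering:", round(nx.average_clustering(g), 3))
      print("average shortest path:", round(nx.average_shortest_path_length(g), 2))
      # Short average paths combined with clustering far above the random-graph
      # level (~k/n) is the small-world signature described above.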

  1. Color recovery effect of different bleaching systems on a discolored composite resin.

    PubMed

    Gul, P; Harorlı, O T; Ocal, I B; Ergin, Z; Barutcigil, C

    2017-10-01

    Discoloration of resin-based composites is a commonly encountered problem, and bleaching agents may be used to treat the existing discoloration. The purpose of this study was to investigate the in vitro color recovery effect of different bleaching systems on a heavily discolored composite resin. Fifty disk-shaped dental composite specimens were prepared using an A2 shade nanohybrid universal composite resin (3M ESPE Filtek Z550, St. Paul, MN, USA). Composite samples were immersed in coffee and turnip juice for 1 week each. One laser-activated bleaching system (LB; Biolase Laserwhite*20) and three conventional bleaching systems (Ultradent Opalescence Boost 40% (OB), Ultradent Opalescence PF 15% home bleaching (HB), and Crest 3D White Whitening Mouthwash) were tested in this study. Distilled water was used as the control group. The color of the samples was measured using a spectrophotometer (VITA Easyshade Compact, VITA Zahnfabrik, Bad Säckingen, Germany). Color changes (ΔE00) were calculated using the CIEDE2000 formula. Statistical analyses were conducted using paired-samples tests, one-way analysis of variance, and Tukey's multiple comparison tests (α = 0.05). The staining beverages caused perceptible discoloration (ΔE00 > 2.25). The color recovery effect of all bleaching systems was statistically greater than that of the control group (P < 0.05). Although the OB group was the most effective bleaching system, there was no statistically significant difference among the HB, OB, and LB groups (P > 0.05). Within the limitations of this in vitro study, the office bleaching system showed the highest recovery effect among all bleaching systems; however, the home and laser bleaching systems were as effective as the office bleaching system.
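
    The colour-difference computation named above (CIEDE2000 on CIELAB spectrophotometer readings) can be sketched with the third-party colormath package; the Lab values are invented, not the study's measurements.

      from colormath.color_objects import LabColor
      from colormath.color_diff import delta_e_cie2000

      stained = LabColor(lab_l=68.2, lab_a=2.1, lab_b=21.5)   # before bleaching
      bleached = LabColor(lab_l=74.6, lab_a=0.8, lab_b=16.9)  # after bleaching

      de00 = delta_e_cie2000(stained, bleached)
      print(f"dE00 = {de00:.2f}; above perceptibility threshold (2.25): {de00 > 2.25}")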

  2. A probabilistic sizing tool and Monte Carlo analysis for entry vehicle ablative thermal protection systems

    NASA Astrophysics Data System (ADS)

    Mazzaracchio, Antonio; Marchetti, Mario

    2010-03-01

    Implicit ablation and thermal response software was developed to analyse and size charring ablative thermal protection systems for entry vehicles. A statistical monitor integrated into the tool, which uses the Monte Carlo technique, allows a simulation to run over stochastic series. This performs an uncertainty and sensitivity analysis, which estimates the probability of maintaining the temperature of the underlying material within specified requirements. This approach and the associated software are primarily helpful during the preliminary design phases of spacecraft thermal protection systems. They are proposed as an alternative to traditional approaches, such as the Root-Sum-Square method. The developed tool was verified by comparing the results with those from previous work on thermal protection system probabilistic sizing methodologies, which are based on an industry standard high-fidelity ablation and thermal response program. New case studies were analysed to establish thickness margins on sizing heat shields that are currently proposed for vehicles using rigid aeroshells for future aerocapture missions at Neptune, and identifying the major sources of uncertainty in the material response.

  3. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the data base developed, and projected energy price and consumption data to perform sensitivity analyses and estimate the potential market for SPS.

  4. The impact of global budgets on pharmaceutical spending and utilization: early experience from the alternative quality contract.

    PubMed

    Afendulis, Christopher C; Fendrick, A Mark; Song, Zirui; Landon, Bruce E; Safran, Dana Gelb; Mechanic, Robert E; Chernew, Michael E

    2014-01-01

    In 2009, Blue Cross Blue Shield of Massachusetts implemented a global budget-based payment system, the Alternative Quality Contract (AQC), in which provider groups assumed accountability for spending. We investigate the impact of global budgets on the utilization of prescription drugs and related expenditures. Our analyses indicate no statistically significant evidence that the AQC reduced the use of drugs. Although the impact may change over time, early evidence suggests that it is premature to conclude that global budget systems may reduce access to medications. © The Author(s) 2014.

  5. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Astrophysics Data System (ADS)

    1981-09-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the data base developed, and projected energy price and consumption data to perform sensitivity analyses and estimate the potential market for SPS.

  6. SPS market analysis [small solar thermal power systems]

    NASA Technical Reports Server (NTRS)

    Goff, H. C.

    1980-01-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  7. Development of an automated processing and screening system for the space shuttle orbiter flight test data

    NASA Technical Reports Server (NTRS)

    Mccutchen, D. K.; Brose, J. F.; Palm, W. E.

    1982-01-01

    One nemesis of the structural dynamist is the tedious task of reviewing large quantities of data. This data, obtained from various types of instrumentation, may be represented by oscillogram records, root-mean-squared (rms) time histories, power spectral densities, shock spectra, 1/3 octave band analyses, and various statistical distributions. In an attempt to reduce the laborious task of manually reviewing all of the space shuttle orbiter wideband frequency-modulated (FM) analog data, an automated processing system was developed to perform the screening process based upon predefined or predicted threshold criteria.

  8. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  9. [Population genetics of the inhabitants of Northern European USSR. II. Blood group distribution and anthropogenetic characteristics in 6 villages in Archangel Oblast].

    PubMed

    Revazov, A A; Pasekov, V P; Lukasheva, I D

    1975-01-01

    The paper deals with the distribution of genetic markers (the ABO, MN, Rh (D), Hp and PTC systems) and a number of anthropogenetic characters (arm folding, hand clasping, tongue rolling, right- and left-handedness, ear lobe type, and types of dermatoglyphic patterns) in the inhabitants of 6 villages in the Mezen District of the Archangelsk Region of the RSFSR (river Peosa basin). The data presented in this work were obtained in the course of examination of over 800 persons. Differences in the interpretation of the results of generally adopted methods of statistical analysis of samples from small populations are discussed. Among the systems analysed, one third of all cases showed a statistically significant deviation from Hardy-Weinberg ratios. For the MN blood groups and haptoglobins this was caused by an excess of heterozygotes. The test of Hardy-Weinberg ratios at the level of two-locus phenotypes revealed no statistically significant deviations either in separate villages or in all the villages taken together. The analysis of heterogeneity with respect to markers inherited according to Mendel's law revealed statistically significant differences between villages in all the systems except haptoglobins. Considerable heterogeneity was also found in the distribution of family names, with the frequencies of some names varying from village to village from 0 to 90%. Statistically significant differences between villages were shown for all the anthropogenetic characters except arm folding, hand clasping and right-/left-handedness. Considering the uniformity of the environmental pressure in the region examined, the heterogeneity of the population studied is apparently associated with random genetic differentiation (genetic drift) and, possibly, with a founder effect.

  10. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  11. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  12. Total mercury in infant food, occurrence and exposure assessment in Portugal.

    PubMed

    Martins, Carla; Vasco, Elsa; Paixão, Eleonora; Alvito, Paula

    2013-01-01

    Commercial infant food labelled as of organic or conventional origin (n = 87) was analysed for total mercury content using a direct mercury analyser (DMA). Median contents of mercury were 0.50, 0.50 and 0.40 μg kg⁻¹ for processed cereal-based food, infant formulae and baby foods, respectively, with a maximum value of 19.56 μg kg⁻¹ in a baby food containing fish. Processed cereal-based food samples showed statistically significant differences in mercury content between the organic and conventional products analysed. Consumption of the commercial infant food analysed did not pose a risk to infants when the provisionally tolerable weekly intake (PTWI) for food other than fish and shellfish was considered. On the contrary, a risk to some infants could not be excluded when using the PTWI for fish and shellfish. This is the first study reporting contents of total mercury in commercial infant food from both farming systems and the first on exposure assessment of children to mercury in Portugal.

  13. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) are based on small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined if rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than true differential effect among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated into the design stage to minimize erroneous conclusions.
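
    The mechanism described can be illustrated with a minimal Python sketch (not the authors' code; all means, standard deviations and sample sizes are invented). It simulates repeated small post hoc subgroup comparisons in which carriers truly progress faster, and counts how often a test comes out significant in the wrong direction; the rate falls as the per-arm sample size grows:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_trials = 10_000
        n_per_arm = 25            # subjects per subgroup; smaller samples inflate the rate
        sig_wrong_direction = 0

        for _ in range(n_trials):
            # Assumed truth: ApoE4 carriers progress faster on the ADAS-cog,
            # with greater variability (all values illustrative only).
            carriers = rng.normal(loc=6.0, scale=9.0, size=n_per_arm)
            noncarriers = rng.normal(loc=4.0, scale=7.0, size=n_per_arm)
            _, p = stats.ttest_ind(noncarriers, carriers, equal_var=False)
            # "Erroneous conclusion": significantly greater progression in non-carriers.
            if p < 0.05 and noncarriers.mean() > carriers.mean():
                sig_wrong_direction += 1

        print(f"wrong-direction significant results: {sig_wrong_direction / n_trials:.1%}")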

  14. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.

  15. Mixed Approach Retrospective Analyses of Suicide and Suicidal Ideation for Brand Compared with Generic Central Nervous System Drugs.

    PubMed

    Cheng, Ning; Rahman, Md Motiur; Alatawi, Yasser; Qian, Jingjing; Peissig, Peggy L; Berg, Richard L; Page, C David; Hansen, Richard A

    2018-04-01

    Several different types of drugs acting on the central nervous system (CNS) have previously been associated with an increased risk of suicide and suicidal ideation (broadly referred to as suicide). However, a differential association between brand and generic CNS drugs and suicide has not been reported. This study compares suicide adverse event rates for brand versus generic CNS drugs using multiple sources of data. Selected examples of CNS drugs (sertraline, gabapentin, zolpidem, and methylphenidate) were evaluated via the US FDA Adverse Event Reporting System (FAERS) for a hypothesis-generating study, and then via administrative claims and electronic health record (EHR) data for a more rigorous retrospective cohort study. Disproportionality analyses with reporting odds ratios and 95% confidence intervals (CIs) were used in the FAERS analyses to quantify the association between each drug and reported suicide. For the cohort studies, Cox proportional hazards models were used, controlling for demographic and clinical characteristics as well as the background risk of suicide in the insured population. The FAERS analyses found significantly lower suicide reporting rates for brands compared with generics for all four studied products (Breslow-Day P < 0.05). In the claims- and EHR-based cohort study, the adjusted hazard ratio (HR) was statistically significant only for sertraline (HR 0.58; 95% CI 0.38-0.88). Suicide reporting rates were disproportionately larger for generic than for brand CNS drugs in FAERS and adjusted retrospective cohort analyses remained significant only for sertraline. However, even for sertraline, temporal confounding related to the close proximity of black box warnings and generic availability is possible. Additional analyses in larger data sources with additional drugs are needed.
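
    The disproportionality analysis named above can be sketched in a few lines: the reporting odds ratio (ROR) compares the odds of a suicide report for the drug of interest against all other drugs, with a Wald-type confidence interval on the log scale. The counts below are invented for illustration and do not come from FAERS:

        import math

        def reporting_odds_ratio(a, b, c, d):
            """a: target drug + suicide reports, b: target drug + other events,
            c: other drugs + suicide reports, d: other drugs + other events."""
            ror = (a / b) / (c / d)
            se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
            lo = math.exp(math.log(ror) - 1.96 * se_log)
            hi = math.exp(math.log(ror) + 1.96 * se_log)
            return ror, (lo, hi)

        ror, ci = reporting_odds_ratio(a=40, b=2500, c=900, d=150000)
        print(f"ROR = {ror:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")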

  16. Urine metabolic fingerprinting using LC-MS and GC-MS reveals metabolite changes in prostate cancer: A pilot study.

    PubMed

    Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bujak, Renata; Yumba Mpanga, Arlette; Markuszewski, Marcin; Jacyna, Julia; Matuszewski, Marcin; Kaliszan, Roman; Markuszewski, Michał J

    2015-01-01

    Prostate cancer (CaP) is a leading cause of cancer deaths in men worldwide. Despite the alarming statistics, the currently applied biomarkers are still not sufficiently specific and selective. In addition, the pathogenesis of CaP development is not fully understood. Therefore, in the present work, a metabolomics study based on urinary metabolic fingerprinting analyses has been performed in order to scrutinize potential biomarkers that could help in explaining the pathomechanism of the disease and be potentially useful in its diagnosis and prognosis. Urine samples from CaP patients and healthy volunteers were analyzed with the use of high performance liquid chromatography coupled with time-of-flight mass spectrometry detection (HPLC-TOF/MS) in positive and negative polarity, as well as gas chromatography hyphenated with triple quadrupole mass spectrometry detection (GC-QqQ/MS) in scan mode. The obtained data sets were statistically analyzed using univariate and multivariate statistical analyses. Principal Component Analysis (PCA) was used to check the systems' stability and identify possible outliers, whereas Partial Least Squares Discriminant Analysis (PLS-DA) was performed to evaluate the quality of the model as well as its predictive ability using statistically significant metabolites. The subsequent identification of selected metabolites using the NIST library and commonly available databases allowed the creation of a list of putative biomarkers and of the related biochemical pathways they are involved in. The selected pathways, like the urea and tricarboxylic acid cycles and amino acid and purine metabolism, can play a crucial role in the pathogenesis of prostate cancer. Copyright © 2014 Elsevier B.V. All rights reserved.
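
    As a rough sketch of the multivariate workflow described (PLS-DA omitted for brevity), the following Python fragment runs a PCA for sample overview plus a univariate nonparametric screen per metabolite; the intensity matrix is a random placeholder, not the study data:

        import numpy as np
        from scipy import stats
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        X = rng.lognormal(mean=0.0, sigma=1.0, size=(40, 200))  # 40 samples x 200 features
        y = np.array([0] * 20 + [1] * 20)                       # 0 = control, 1 = CaP

        # PCA on log-transformed, autoscaled intensities (stability/outlier check)
        Xs = np.log(X)
        Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)
        scores = PCA(n_components=2).fit_transform(Xs)
        print("PC1/PC2 score ranges:", scores.min(axis=0), scores.max(axis=0))

        # Univariate screen: Mann-Whitney U per metabolite, Bonferroni-corrected
        pvals = np.array([stats.mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                          for j in range(X.shape[1])])
        print("significant after Bonferroni:", (pvals < 0.05 / X.shape[1]).sum())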

  17. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298

  18. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
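
    The paper's appendices supply SPSS and R scripts; the following Python sketch shows the same two ingredients on simulated placeholder data (not the study sample): an ROC summary (AUC) and range-specific diagnostic likelihood ratios analogous to the <8 and >30 bands quoted above:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        n = 589
        mood = rng.random(n) < 0.3                       # True = mood disorder
        score = np.where(mood, rng.normal(24, 8, n), rng.normal(12, 8, n)).clip(0)

        print("AUC:", round(roc_auc_score(mood, score), 3))

        def dlr(lo, hi):
            """DLR for scores in [lo, hi): P(range | disorder) / P(range | no disorder)."""
            in_range = (score >= lo) & (score < hi)
            return in_range[mood].mean() / in_range[~mood].mean()

        for lo, hi in [(0, 8), (8, 30), (30, np.inf)]:
            print(f"scores in [{lo}, {hi}): DLR = {dlr(lo, hi):.2f}")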

  19. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.

  20. Evaluation of Apical Extrusion of Debris and Irrigant Using Two New Reciprocating and One Continuous Rotation Single File Systems

    PubMed Central

    Nayak, Gurudutt; Singh, Inderpreet; Shetty, Shashit; Dahiya, Surya

    2014-01-01

    Objective: Apical extrusion of debris and irrigants during cleaning and shaping of the root canal is one of the main causes of periapical inflammation and postoperative flare-ups. The purpose of this study was to quantitatively measure the amount of debris and irrigant extruded apically in single-rooted canals using two reciprocating and one rotary single-file nickel-titanium instrumentation systems. Materials and Methods: Sixty human mandibular premolars, randomly assigned to three groups (n = 20), were instrumented using two reciprocating (Reciproc and Wave One) and one rotary (One Shape) single-file nickel-titanium systems. Bidistilled water was used as irrigant with a traditional needle irrigation delivery system. Eppendorf tubes were used as the test apparatus for collection of debris and irrigant. The volume of extruded irrigant was collected and quantified via the 0.1-mL increment measure supplied on a disposable plastic insulin syringe. The liquid inside the tubes was dried and the mean weight of debris was assessed using an electronic microbalance. The data were statistically analysed using the Kruskal-Wallis nonparametric test and the Mann-Whitney U test with Bonferroni adjustment. P-values less than 0.05 were considered significant. Results: The Reciproc file system produced significantly more debris compared with the One Shape file system (P<0.05), but no statistically significant difference was obtained between the two reciprocating instruments (P>0.05). Extrusion of irrigant was statistically insignificant irrespective of the instrument or instrumentation technique used (P>0.05). Conclusions: Although all systems caused apical extrusion of debris and irrigant, continuous rotary instrumentation was associated with less extrusion than the reciprocating file systems. PMID:25628665
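
    A minimal sketch of the statistical workflow reported above, with invented debris weights standing in for the measurements:

        from itertools import combinations
        from scipy import stats

        groups = {
            "Reciproc": [1.9, 2.3, 2.8, 2.1, 2.6, 2.4],
            "WaveOne":  [1.7, 2.2, 2.5, 2.0, 2.3, 2.1],
            "OneShape": [1.1, 1.4, 1.6, 1.2, 1.5, 1.3],
        }

        h, p = stats.kruskal(*groups.values())
        print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

        pairs = list(combinations(groups, 2))
        alpha = 0.05 / len(pairs)                 # Bonferroni adjustment
        for g1, g2 in pairs:
            u, p = stats.mannwhitneyu(groups[g1], groups[g2])
            print(f"{g1} vs {g2}: U = {u:.1f}, p = {p:.4f}, "
                  f"significant at adjusted alpha: {p < alpha}")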

  1. Multivariate model of female black bear habitat use for a Geographic Information System

    USGS Publications Warehouse

    Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.

    1993-01-01

    Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
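
    The core of the model is the squared Mahalanobis distance between each grid cell's habitat vector and the mean vector at animal locations. A hedged numpy sketch follows; all values are illustrative placeholders, not the study data:

        import numpy as np

        rng = np.random.default_rng(3)
        # Habitat variables at radio locations: e.g. elevation, slope, road distance, ...
        use = rng.normal(size=(200, 4)) + [1.0, 0.5, 2.0, 0.0]
        mu = use.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(use, rowvar=False))

        # Habitat variables for every cell of a 60-m grid, flattened to rows
        cells = rng.normal(size=(10_000, 4))
        diff = cells - mu
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distance

        # Small distances = habitat most similar to that used by the animals
        print("cells in the best 10%:", (d2 <= np.quantile(d2, 0.10)).sum())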

  2. Cancer Statistics Animator

    Cancer.gov

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.

  3. Statistically derived contributions of diverse human influences to twentieth-century temperature changes

    NASA Astrophysics Data System (ADS)

    Estrada, Francisco; Perron, Pierre; Martínez-López, Benjamín

    2013-12-01

    The warming of the climate system is unequivocal as evidenced by an increase in global temperatures by 0.8°C over the past century. However, the attribution of the observed warming to human activities remains less clear, particularly because of the apparent slow-down in warming since the late 1990s. Here we analyse radiative forcing and temperature time series with state-of-the-art statistical methods to address this question without climate model simulations. We show that long-term trends in total radiative forcing and temperatures have largely been determined by atmospheric greenhouse gas concentrations, and modulated by other radiative factors. We identify a pronounced increase in the growth rates of both temperatures and radiative forcing around 1960, which marks the onset of sustained global warming. Our analyses also reveal a contribution of human interventions to two periods when global warming slowed down. Our statistical analysis suggests that the reduction in the emissions of ozone-depleting substances under the Montreal Protocol, as well as a reduction in methane emissions, contributed to the lower rate of warming since the 1990s. Furthermore, we identify a contribution from the two world wars and the Great Depression to the documented cooling in the mid-twentieth century, through lower carbon dioxide emissions. We conclude that reductions in greenhouse gas emissions are effective in slowing the rate of warming in the short term.

  4. Sources of Safety Data and Statistical Strategies for Design and Analysis: Postmarket Surveillance.

    PubMed

    Izem, Rima; Sanchez-Kam, Matilde; Ma, Haijun; Zink, Richard; Zhao, Yueqin

    2018-03-01

    Safety data are continuously evaluated throughout the life cycle of a medical product to accurately assess and characterize the risks associated with the product. The knowledge about a medical product's safety profile continually evolves as safety data accumulate. This paper discusses data sources and analysis considerations for safety signal detection after a medical product is approved for marketing. This manuscript is the second in a series of papers from the American Statistical Association Biopharmaceutical Section Safety Working Group. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from passive postmarketing surveillance systems compared to other sources. Signal detection has traditionally relied on spontaneous reporting databases that have been available worldwide for decades. However, current regulatory guidelines and ease of reporting have increased the size of these databases exponentially over the last few years. With such large databases, data-mining tools using disproportionality analysis and helpful graphics are often used to detect potential signals. Although the data sources have many limitations, analyses of these data have been successful at identifying safety signals postmarketing. Experience analyzing these dynamic data is useful in understanding the potential and limitations of analyses with new data sources such as social media, claims, or electronic medical records data.

  5. Dose response explorer: an integrated open-source tool for exploring and modelling radiotherapy dose volume outcome relationships

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.

    2006-11-01

    Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.

  6. Memory matters: influence from a cognitive map on animal space use.

    PubMed

    Gautestad, Arild O

    2011-10-21

    A vertebrate individual's cognitive map provides a capacity for site fidelity and long-distance returns to favorable patches. Fractal-geometrical analysis of individual space use based on collection of telemetry fixes makes it possible to verify the influence of a cognitive map on the spatial scatter of habitat use and also to what extent space use has been of a scale-specific versus a scale-free kind. This approach rests on a statistical mechanical level of system abstraction, where micro-scale details of behavioral interactions are coarse-grained to macro-scale observables like the fractal dimension of space use. In this manner, the magnitude of the fractal dimension becomes a proxy variable for distinguishing between main classes of habitat exploration and site fidelity, like memory-less (Markovian) Brownian motion and Levy walk and memory-enhanced space use like Multi-scaled Random Walk (MRW). In this paper previous analyses are extended by exploring MRW simulations under three scenarios: (1) central place foraging, (2) behavioral adaptation to resource depletion (avoidance of latest visited locations) and (3) transition from MRW towards Levy walk by narrowing memory capacity to a trailing time window. A generalized statistical-mechanical theory with the power to model cognitive map influence on individual space use will be important for statistical analyses of animal habitat preferences and the mechanics behind site fidelity and home ranges. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. The determinants of bond angle variability in protein/peptide backbones: A comprehensive statistical/quantum mechanics analysis.

    PubMed

    Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana

    2015-11-01

    The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures and quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as function of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme here employed and demonstrates that the valence geometry of protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, is also dependent on the local conformation. Most of the trends observed may be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen-bond like interactions. Moreover, we can provide a reliable estimate of the energies involved in the interplay between geometry and conformations. © 2015 Wiley Periodicals, Inc.

  8. Management Information Systems Design Implications: The Effect of Cognitive Style and Information Presentation on Problem Solving.

    DTIC Science & Technology

    1987-12-01

    my thesis advisor, Dr Dennis E Campbell. Without his expert advice and extreme patience with an INTP like myself, this research would not have been...research was to identify a relationship between psychological type and mode of presentation of information. The type theory developed by Carl Jung and...preference rankings for seven different modes of presentation of data. The statistical analyses showed no relationship between personality type and

  9. Aerosols Observations with a new lidar station in Punta Arenas, Chile

    NASA Astrophysics Data System (ADS)

    Barja, Boris; Zamorano, Felix; Ristori, Pablo; Otero, Lidia; Quel, Eduardo; Sugimoto, Nobuo; Shimizu, Atsushi; Santana, Jorge

    2018-04-01

    A tropospheric lidar system was installed in Punta Arenas, Chile (53.13°S, 70.88°W) in September 2016 under the collaboration project SAVERNET (Chile, Japan and Argentina) to monitor the atmosphere. Statistical analyses of cloud and aerosol behaviour, together with some cases of dust detected with the lidar at this high-southern-latitude, cold-environment region over three months (austral spring), are discussed using information from satellite, modelling and ground-based solar radiation measurements.

  10. Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.

    PubMed

    Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim

    2015-06-15

    Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that mitigates such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main source for potential instabilities are mutations in the enzyme alpha-ketoglutarate dehydrogenase. © The Author 2015. Published by Oxford University Press.
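
    The Monte Carlo step of SKM can be illustrated on a toy two-metabolite pathway with a destabilising feedback. This is a simplified sketch, not the authors' implementation: unit fluxes and concentrations are assumed, so the scaled stoichiometry equals the stoichiometric matrix N, and the network itself is invented:

        import numpy as np

        rng = np.random.default_rng(4)
        # Toy pathway: v1 -> M1 -(v2)-> M2 -(v3)->, with M2 activating v1
        # (a positive feedback that can destabilise the steady state).
        N = np.array([[ 1.0, -1.0,  0.0],
                      [ 0.0,  1.0, -1.0]])   # metabolites x reactions

        n_stable, n_samples = 0, 10_000
        for _ in range(n_samples):
            theta = np.zeros((3, 2))          # elasticities: reactions x metabolites
            theta[1, 0] = rng.uniform(0, 1)   # v2 saturation w.r.t. its substrate M1
            theta[2, 1] = rng.uniform(0, 1)   # v3 saturation w.r.t. its substrate M2
            theta[0, 1] = rng.uniform(0, 1)   # feedback elasticity of v1 w.r.t. M2
            J = N @ theta                     # SKM Jacobian under unit fluxes/concentrations
            if np.linalg.eigvals(J).real.max() < 0:
                n_stable += 1

        print(f"fraction of sampled models with a stable steady state: {n_stable / n_samples:.2%}")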

  11. Benefits of remote real-time side-effect monitoring systems for patients receiving cancer treatment.

    PubMed

    Kofoed, Sarah; Breen, Sibilah; Gough, Karla; Aranda, Sanchia

    2012-03-05

    In Australia, the incidence of cancer diagnoses is rising along with an aging population. Cancer treatments, such as chemotherapy, are increasingly being provided in the ambulatory care setting. Cancer treatments are commonly associated with distressing and serious side-effects and patients often struggle to manage these themselves without specialized real-time support. Unlike chronic disease populations, few systems for the remote real-time monitoring of cancer patients have been reported. However, several prototype systems have been developed and have received favorable reports. This review aimed to identify and detail systems that reported statistical analyses of changes in patient clinical outcomes, health care system usage or health economic analyses. Five papers were identified that met these criteria. There was wide variation in the design of the monitoring systems in terms of data input method, clinician alerting and response, groups of patients targeted and clinical outcomes measured. The majority of studies had significant methodological weaknesses. These included no control group comparisons, small sample sizes, poor documentation of clinical interventions or measures of adherence to the monitoring systems. In spite of the limitations, promising results emerged in terms of improved clinical outcomes (e.g. pain, depression, fatigue). Health care system usage was assessed in two papers with inconsistent results. No studies included health economic analyses. The diversity in systems described, outcomes measured and methodological issues all limited between-study comparisons. Given the acceptability of remote monitoring and the promising outcomes from the few studies analyzing patient or health care system outcomes, future research is needed to rigorously trial these systems to enable greater patient support and safety in the ambulatory setting.

  12. Benefits of remote real-time side-effect monitoring systems for patients receiving cancer treatment

    PubMed Central

    Kofoed, Sarah; Breen, Sibilah; Gough, Karla; Aranda, Sanchia

    2012-01-01

    In Australia, the incidence of cancer diagnoses is rising along with an aging population. Cancer treatments, such as chemotherapy, are increasingly being provided in the ambulatory care setting. Cancer treatments are commonly associated with distressing and serious side-effects and patients often struggle to manage these themselves without specialized real-time support. Unlike chronic disease populations, few systems for the remote real-time monitoring of cancer patients have been reported. However, several prototype systems have been developed and have received favorable reports. This review aimed to identify and detail systems that reported statistical analyses of changes in patient clinical outcomes, health care system usage or health economic analyses. Five papers were identified that met these criteria. There was wide variation in the design of the monitoring systems in terms of data input method, clinician alerting and response, groups of patients targeted and clinical outcomes measured. The majority of studies had significant methodological weaknesses. These included no control group comparisons, small sample sizes, poor documentation of clinical interventions or measures of adherence to the monitoring systems. In spite of the limitations, promising results emerged in terms of improved clinical outcomes (e.g. pain, depression, fatigue). Health care system usage was assessed in two papers with inconsistent results. No studies included health economic analyses. The diversity in systems described, outcomes measured and methodological issues all limited between-study comparisons. Given the acceptability of remote monitoring and the promising outcomes from the few studies analyzing patient or health care system outcomes, future research is needed to rigorously trial these systems to enable greater patient support and safety in the ambulatory setting. PMID:25992209

  13. Big Data and Health Economics: Strengths, Weaknesses, Opportunities and Threats.

    PubMed

    Collins, Brendan

    2016-02-01

    'Big data' is the collective name for the increasing capacity of information systems to collect and store large volumes of data, which are often unstructured and time stamped, and to analyse these data by using regression and other statistical techniques. This is a review of the potential applications of big data and health economics, using a SWOT (strengths, weaknesses, opportunities, threats) approach. In health economics, large pseudonymized databases, such as the planned care.data programme in the UK, have the potential to increase understanding of how drugs work in the real world, taking into account adherence, co-morbidities, interactions and side effects. This 'real-world evidence' has applications in individualized medicine. More routine and larger-scale cost and outcomes data collection will make health economic analyses more disease specific and population specific but may require new skill sets. There is potential for biomonitoring and lifestyle data to inform health economic analyses and public health policy.

  14. Applications of MIDAS regression in analysing trends in water quality

    NASA Astrophysics Data System (ADS)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequencies at which the data are collected. Typically, water quality variables are sampled fortnightly, whereas rain data are sampled daily. The advantage of using MIDAS regression is the flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
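
    A hedged sketch of the idea, with simulated stand-ins for the fortnightly water-quality and daily rainfall series: the daily rainfall lags enter through a two-parameter exponential Almon polynomial, so only four parameters need to be estimated by nonlinear least squares:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        n_obs, n_lags = 120, 14                       # fortnightly y, 14 daily rain lags
        rain = rng.gamma(2.0, 2.0, size=(n_obs, n_lags))
        true_w = np.exp(0.5 * np.arange(n_lags) - 0.05 * np.arange(n_lags) ** 2)
        true_w /= true_w.sum()
        y = 1.0 + 2.0 * rain @ true_w + rng.normal(0, 0.3, n_obs)

        def almon_weights(t1, t2, k=n_lags):
            j = np.arange(k)
            w = np.exp(t1 * j + t2 * j ** 2)          # exponential Almon lag polynomial
            return w / w.sum()

        def residuals(params):
            b0, b1, t1, t2 = params
            return y - (b0 + b1 * rain @ almon_weights(t1, t2))

        fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0],
                            bounds=([-np.inf, -np.inf, -2.0, -2.0],
                                    [np.inf, np.inf, 2.0, 2.0]))
        print("estimated [b0, b1, theta1, theta2]:", np.round(fit.x, 3))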

  15. Comprehensive analyses of tissue-specific networks with implications to psychiatric diseases

    PubMed Central

    Lin, Guan Ning; Corominas, Roser; Nam, Hyun-Jun; Urresti, Jorge; Iakoucheva, Lilia M.

    2017-01-01

    Recent advances in genome sequencing and “omics” technologies are opening new opportunities for improving diagnosis and treatment of human diseases. The precision medicine initiative in particular aims at developing individualized treatment options that take into account individual variability in genes and environment of each person. Systems biology approaches that group genes, transcripts and proteins into functionally meaningful networks will play crucial role in the future of personalized medicine. They will allow comparison of healthy and disease-affected tissues and organs from the same individual, as well as between healthy and disease-afflicted individuals. However, the field faces a multitude of challenges ranging from data integration to statistical and combinatorial issues in data analyses. This chapter describes computational approaches developed by us and the others to tackle challenges in tissue-specific network analyses, with the main focus on psychiatric diseases. PMID:28849569

  16. Stochastic phase segregation on surfaces

    PubMed Central

    Gera, Prerna

    2017-01-01

    Phase separation and coarsening is a phenomenon commonly seen in binary physical and chemical systems that occur in nature. Often, thermal fluctuations, modelled as stochastic noise, are present in the system and the phase segregation process occurs on a surface. In this work, the segregation process is modelled via the Cahn–Hilliard–Cook model, which is a fourth-order parabolic stochastic system. Coarsening is analysed on two sample surfaces: a unit sphere and a dumbbell. On both surfaces, a statistical analysis of the growth rate is performed, and the influence of noise level and mobility is also investigated. For the spherical interface, it is also shown that a lognormal distribution fits the growth rate well. PMID:28878994

  17. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  18. Dissecting the genetics of complex traits using summary association statistics.

    PubMed

    Pasaniuc, Bogdan; Price, Alkes L

    2017-02-01

    During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.

  19. Statistical innovations in diagnostic device evaluation.

    PubMed

    Yu, Tinghui; Li, Qin; Gray, Gerry; Yue, Lilly Q

    2016-01-01

    Due to rapid technological development, innovations in diagnostic devices are proceeding at an extremely fast pace. Accordingly, the needs for adopting innovative statistical methods have emerged in the evaluation of diagnostic devices. Statisticians in the Center for Devices and Radiological Health at the Food and Drug Administration have provided leadership in implementing statistical innovations. The innovations discussed in this article include: the adoption of bootstrap and Jackknife methods, the implementation of appropriate multiple reader multiple case study design, the application of robustness analyses for missing data, and the development of study designs and data analyses for companion diagnostics.
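
    Of the methods listed, the bootstrap is the easiest to sketch: percentile confidence intervals for sensitivity and specificity obtained by resampling subjects. The data below are invented placeholders:

        import numpy as np

        rng = np.random.default_rng(6)
        truth = rng.random(300) < 0.4          # reference-standard disease status
        test = np.where(truth, rng.random(300) < 0.88, rng.random(300) < 0.10)

        def sens_spec(t, d):
            return (t[d].mean(), (~t[~d]).mean())

        # 2000 bootstrap resamples of subject indices
        boot = np.array([
            sens_spec(test[idx], truth[idx])
            for idx in rng.integers(0, len(truth), size=(2000, len(truth)))
        ])
        for name, col in [("sensitivity", 0), ("specificity", 1)]:
            lo, hi = np.percentile(boot[:, col], [2.5, 97.5])
            print(f"{name}: {sens_spec(test, truth)[col]:.3f} (95% CI {lo:.3f}-{hi:.3f})")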

  20. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T × 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparable years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for the two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
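
    The quoted formula is straightforward to apply to pixel areas exported from Image J; a trivial sketch with made-up areas:

        def percent_perforation(perforation_px2: float, total_tm_px2: float) -> float:
            """P/T x 100 per cent: perforation area over total tympanic membrane area."""
            return 100.0 * perforation_px2 / total_tm_px2

        print(percent_perforation(perforation_px2=5_400, total_tm_px2=48_000))  # ~11.3%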

  1. Content-Based VLE Designs Improve Learning Efficiency in Constructivist Statistics Education

    PubMed Central

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    Background We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific–purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. Results The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a content–based design outperforms the traditional VLE–based design. PMID:21998652

  2. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group

    PubMed Central

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T.; Pereira, Carol; Rosenkranz, Susan L.; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu (Jeanne); Wang, Rui; Lok, Judith

    2017-01-01

    Abstract The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. PMID:28350899

  3. Continuous EEG signal analysis for asynchronous BCI application.

    PubMed

    Hsu, Wei-Yen

    2011-08-01

    In this study, we propose a two-stage recognition system for continuous analysis of electroencephalogram (EEG) signals. An independent component analysis (ICA) and correlation coefficient are used to automatically eliminate the electrooculography (EOG) artifacts. Based on the continuous wavelet transform (CWT) and Student's two-sample t-statistics, active segment selection then detects the location of the active segment in the time-frequency domain. Next, multiresolution fractal feature vectors (MFFVs) are extracted with the proposed modified fractal dimension from wavelet data. Finally, the support vector machine (SVM) is adopted for the robust classification of MFFVs. The EEG signals are continuously analyzed in 1-s segments, advancing every 0.5 s, to simulate asynchronous BCI operation within the two-stage recognition architecture. A segment is first recognized as lifted or not in the first stage, and is then classified as left or right finger lifting at stage two if it was recognized as lifting in the first stage. Several statistical analyses are used to evaluate the performance of the proposed system. The results indicate that it is a promising system for asynchronous BCI applications.
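
    The two-stage decision logic can be sketched as two classifiers in cascade. The fragment below is a rough stand-in (random placeholder features instead of the MFFVs, default SVMs instead of the tuned ones), intended only to show the architecture:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        X = rng.normal(size=(600, 12))            # 600 segments x 12 placeholder features
        lift = rng.random(600) < 0.5              # stage-1 labels: lifting or not
        side = rng.integers(0, 2, size=600)       # stage-2 labels: 0 = left, 1 = right
        X[lift] += 0.8                            # make the toy problem learnable
        X[lift & (side == 1), :6] += 0.8

        stage1 = SVC().fit(X, lift)
        stage2 = SVC().fit(X[lift], side[lift])   # trained only on lifting segments

        def classify(segment):
            if not stage1.predict(segment[None])[0]:
                return "no lift"
            return "right" if stage2.predict(segment[None])[0] else "left"

        print(classify(X[0]))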

  4. Toward the assimilation of biogeochemical data in the CMEMS BIOMER coupled physical-biogeochemical operational system

    NASA Astrophysics Data System (ADS)

    Lamouroux, Julien; Testut, Charles-Emmanuel; Lellouche, Jean-Michel; Perruche, Coralie; Paul, Julien

    2017-04-01

    The operational production of a data-assimilated biogeochemical state of the ocean is one of the challenging core projects of the Copernicus Marine Environment Monitoring Service. In that framework - and with the April 2018 CMEMS V4 release as a target - Mercator Ocean is in charge of improving the realism of its global ¼° BIOMER coupled physical-biogeochemical (NEMO/PISCES) simulations, analyses and re-analyses, and of developing an effective capacity to routinely estimate the biogeochemical state of the ocean through the implementation of biogeochemical data assimilation. Primary objectives are to enhance the temporal representation of the seasonal cycle in the real-time and reanalysis systems, and to provide better control of the production in the equatorial regions. The assimilation of BGC data will rely on a simplified version of the SEEK filter, in which the error statistics do not evolve with the model dynamics. The associated forecast error covariances are based on the statistics of a collection of 3D ocean state anomalies. The anomalies are computed from a multi-year numerical experiment (free run without assimilation) with respect to a running mean, in order to estimate the 7-day-scale error on the ocean state at a given period of the year. These forecast error covariances thus rely on a fixed-basis, seasonally variable ensemble of anomalies. This methodology, which is currently implemented in the "blue" component of the CMEMS operational forecast system, is now being adapted to the biogeochemical part of the operational system. Regarding observations - and as a first step - the system shall rely on the CMEMS GlobColour Global Ocean surface chlorophyll concentration products, delivered in NRT. The objective of this poster is to provide a detailed overview of the implementation of the aforementioned data assimilation methodology in the CMEMS BIOMER forecasting system. Focus shall be put on (1) the assessment of the capabilities of this data assimilation methodology to provide satisfying statistics of the model variability errors (through space-time analysis of dedicated representers of satellite surface Chla observations), (2) the dedicated features of the data assimilation configuration that have been implemented so far (e.g. log-transformation of the analysis state, multivariate Chlorophyll-Nutrient control vector, etc.) and (3) the assessment of the performance of this future operational data assimilation configuration.

  5. Grading the neuroendocrine tumors of the lung: an evidence-based proposal.

    PubMed

    Rindi, G; Klersy, C; Inzani, F; Fellegara, G; Ampollini, L; Ardizzoni, A; Campanini, N; Carbognani, P; De Pas, T M; Galetta, D; Granone, P L; Righi, L; Rusca, M; Spaggiari, L; Tiseo, M; Viale, G; Volante, M; Papotti, M; Pelosi, G

    2014-02-01

    Lung neuroendocrine tumors are catalogued in four categories by the World Health Organization (WHO 2004) classification. Its reproducibility and prognostic efficacy have been disputed. The WHO 2010 classification of digestive neuroendocrine neoplasms is based on Ki67 proliferation assessment and proved prognostically effective. This study aims at comparing these two classifications and at defining a prognostic grading system for lung neuroendocrine tumors. The study included 399 patients who underwent surgery, with at least 1 year of follow-up, between 1989 and 2011. Data on 21 variables were collected, and the performance of grading systems and their components was compared by Cox regression and multivariable analyses. All statistical tests were two-sided. At Cox analysis, WHO 2004 stratified patients into three major groups with statistically significant survival differences (typical carcinoid vs atypical carcinoid (AC), P=0.021; AC vs large-cell/small-cell lung neuroendocrine carcinomas, P<0.001). Optimal discrimination in three groups was observed for Ki67% (cutoffs: G1 <4, G2 4-<25, G3 ≥25; G1 vs G2, P=0.021; G2 vs G3, P≤0.001), mitotic count (G1 ≤2, G2 >2-47, G3 >47; G1 vs G2, P≤0.001; G2 vs G3, P≤0.001), and presence of necrosis (G1 absent, G2 <10% of sample, G3 >10% of sample; G1 vs G2, P≤0.001; G2 vs G3, P≤0.001) at univariable and multivariable analyses. The combination of these three variables resulted in a simple and effective grading system. A three-tier grading system based on Ki67 index, mitotic count, and necrosis, with cutoffs specifically generated for lung neuroendocrine tumors, is prognostically effective and accurate.
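
    Using the cutoffs quoted above, the grade assignment can be written as a small function. Note that how the three tiers are combined into a single grade is an assumption here (the highest tier wins); the paper should be consulted for the exact rule:

        def tier_ki67(ki67):
            return 1 if ki67 < 4 else (2 if ki67 < 25 else 3)

        def tier_mitoses(m):
            return 1 if m <= 2 else (2 if m <= 47 else 3)

        def tier_necrosis(pct):
            return 1 if pct == 0 else (2 if pct < 10 else 3)

        def grade(ki67, mitoses, necrosis_pct):
            # Assumed combination rule: the worst of the three tiers decides.
            return max(tier_ki67(ki67), tier_mitoses(mitoses), tier_necrosis(necrosis_pct))

        print(grade(ki67=2.5, mitoses=1, necrosis_pct=0))    # G1 (typical carcinoid-like)
        print(grade(ki67=15, mitoses=8, necrosis_pct=5))     # G2
        print(grade(ki67=60, mitoses=70, necrosis_pct=30))   # G3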

  6. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'Dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
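    A minimal leave-one-out sensitivity sketch in the spirit of the recommendations above, using a fixed-effect mean over invented effect sizes; dropping whole clusters of nonindependent effect sizes instead of single studies works the same way.

    ```python
    # Leave-one-out sensitivity check on a fixed-effect meta-analytic mean.
    # Effect sizes and sampling variances below are invented.
    import numpy as np

    yi = np.array([0.31, 0.28, 0.90, 0.25, 0.33])   # study effect sizes
    vi = np.array([0.05, 0.04, 0.05, 0.06, 0.05])   # sampling variances

    def fixed_effect_mean(y, v):
        w = 1.0 / v
        return np.sum(w * y) / np.sum(w)

    overall = fixed_effect_mean(yi, vi)
    for i in range(len(yi)):
        keep = np.arange(len(yi)) != i
        loo = fixed_effect_mean(yi[keep], vi[keep])
        print(f"without study {i}: {loo:.3f} (all studies: {overall:.3f})")
    ```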

  7. Conditional random matrix ensembles and the stability of dynamical systems

    NASA Astrophysics Data System (ADS)

    Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.

    2015-08-01

    Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
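    A toy sketch of the point being made: the fraction of stable random Jacobians depends strongly on the assumed matrix distribution. The predator-prey sign pairing below is one example of conditioning on structure; all parameters are arbitrary.

    ```python
    # Fraction of stable random Jacobians under two matrix distributions:
    # i.i.d. entries vs entries conditioned on predator-prey sign pairing.
    import numpy as np

    rng = np.random.default_rng(1)

    def stable_fraction(build, n=20, trials=500):
        hits = 0
        for _ in range(trials):
            J = build(n)
            if np.max(np.linalg.eigvals(J).real) < 0:   # all modes decay
                hits += 1
        return hits / trials

    def iid_jacobian(n, sigma=0.25, d=1.0):
        J = sigma * rng.standard_normal((n, n))
        np.fill_diagonal(J, -d)                         # self-regulation
        return J

    def predator_prey_jacobian(n, sigma=0.25, d=1.0):
        J = sigma * rng.standard_normal((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if J[i, j] * J[j, i] > 0:               # force opposite signs
                    J[j, i] = -J[j, i]
        np.fill_diagonal(J, -d)
        return J

    print("i.i.d. entries :", stable_fraction(iid_jacobian))
    print("predator-prey  :", stable_fraction(predator_prey_jacobian))
    ```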

  8. The Need for Speed in Rodent Locomotion Analyses

    PubMed Central

    Batka, Richard J.; Brown, Todd J.; Mcmillan, Kathryn P.; Meadows, Rena M.; Jones, Kathryn J.; Haulcomb, Melissa M.

    2016-01-01

    Locomotion analysis is now widely used across many animal species to understand the motor defects in disease, functional recovery following neural injury, and the effectiveness of various treatments. More recently, rodent locomotion analysis has become an increasingly popular method in a diverse range of research. Speed is an inseparable aspect of locomotion that is still not fully understood, and its effects are often not properly incorporated while analyzing data. In this hybrid manuscript, we accomplish three things: (1) review the interaction between speed and locomotion variables in rodent studies, (2) comprehensively analyze the relationship between speed and 162 locomotion variables in a group of 16 wild-type mice using the CatWalk gait analysis system, and (3) develop and test a statistical method in which locomotion variables are analyzed and reported in the context of speed. Notable results include the following: (1) over 90% of variables, reported by CatWalk, were dependent on speed with an average R2 value of 0.624, (2) most variables were related to speed in a nonlinear manner, (3) current methods of controlling for speed are insufficient, and (4) the linear mixed model is an appropriate and effective statistical method for locomotion analyses that is inclusive of speed-dependent relationships. Given the pervasive dependency of locomotion variables on speed, we maintain that valid conclusions from locomotion analyses cannot be made unless they are analyzed and reported within the context of speed. PMID:24890845
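    A hedged sketch of the recommended analysis: a linear mixed model of a gait variable on speed with a random effect per animal, via statsmodels. The data are synthetic stand-ins for CatWalk output and the column names are hypothetical.

    ```python
    # Linear mixed model: gait variable ~ speed, random intercept per mouse.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    mouse = np.repeat(np.arange(16), 12)              # 16 mice x 12 runs
    speed = rng.uniform(10, 40, mouse.size)           # cm/s, toy values
    stride_length = (2.0 + 0.05 * speed               # built-in speed dependency
                     + np.repeat(rng.normal(0, 0.3, 16), 12)   # per-mouse offset
                     + rng.normal(0, 0.2, mouse.size))         # run-to-run noise
    df = pd.DataFrame({"mouse": mouse, "speed": speed,
                       "stride_length": stride_length})

    fit = smf.mixedlm("stride_length ~ speed", df, groups=df["mouse"]).fit()
    print(fit.summary())   # the speed coefficient quantifies the dependency
    ```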

  9. Statistics for the Relative Detectability of Chemicals in Weak Gaseous Plumes in LWIR Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.

    2008-10-30

    The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.

  10. Statistical analysis of 4 types of neck whiplash injuries based on classical meridian theory.

    PubMed

    Chen, Yemeng; Zhao, Yan; Xue, Xiaolin; Li, Hui; Wu, Xiuyan; Zhang, Qunce; Zheng, Xin; Wang, Tianfang

    2015-01-01

    As one component of the Chinese medicine meridian system, the meridian sinew (Jingjin, tendino-musculo) is specially described as being suited for acupuncture treatment of the musculoskeletal system because of its dynamic attributes and tender point correlations. In recent decades, the therapeutic importance of the sinew meridian has been revalued in clinical application. Based on this theory, the authors have established therapeutic strategies of acupuncture treatment in Whiplash-Associated Disorders (WAD) by categorizing four types of neck symptom presentations. The advantage of this new system is to make it much easier for the clinician to find effective acupuncture points. This study attempts to prove the significance of the proposed therapeutic strategies by analyzing data collected from a clinical survey of various WAD using unsupervised statistical methods, such as correlation analysis, factor analysis, and cluster analysis. The clinical survey data have successfully verified discrete characteristics of four neck syndromes, based upon the range of motion (ROM) and tender point location findings. A summary of the relationships among the symptoms of the four neck syndromes has shown the correlation coefficient as having statistical significance (P < 0.01 or P < 0.05), especially with regard to ROM. Furthermore, factor and cluster analyses resulted in a total of 11 categories of general symptoms, which implies syndrome factors are more related to the Liver, as originally described in classical theory. The hypothesis of meridian sinew syndromes in WAD is clearly supported by the statistical analysis of the clinical trials. This new discovery should be beneficial in improving therapeutic outcomes.

  11. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  12. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  13. The Applications of Mindfulness with Students of Secondary School: Results on the Academic Performance, Self-concept and Anxiety

    NASA Astrophysics Data System (ADS)

    Franco, Clemente; Mañas, Israel; Cangas, Adolfo J.; Gallego, José

    The aim of the present research is to verify the impact of a mindfulness programme on the levels of academic performance, self-concept and anxiety of a group of Year 1 secondary school students. The statistical analyses carried out on the variables studied showed significant differences in favour of the experimental group with regard to the control group in all the variables analysed. In the experimental group we can observe a significant increase in academic performance as well as an improvement in all the self-concept dimensions, and a significant decrease in anxiety states and traits. The importance and usefulness of mindfulness techniques in the educational system is discussed.

  14. Surface pressure maps from scatterometer data

    NASA Technical Reports Server (NTRS)

    Brown, R. A.; Levy, Gad

    1991-01-01

    The ability to determine surface pressure fields from satellite scatterometer data was shown by Brown and Levy (1986). The surface winds are used to calculate the gradient winds above the planetary boundary layer, and these are directly related to the pressure gradients. There are corrections for variable stratification, variable surface roughness, horizontal inhomogeneity, humidity and baroclinity. The Seasat-A Satellite Scatterometer (SASS) data have been used in a systematic study of 50 synoptic weather events (regions of approximately 1000 X 1000 km). The preliminary statistics of agreement with national weather service surface pressure maps are calculated. The resulting surface pressure maps can be used together with SASS winds and Scanning Multichannel Microwave Radiometer (SMMR) water vapor and liquid water analyses to provide good front and storm system analyses.

  15. Engineering evaluation of SSME dynamic data from engine tests and SSV flights

    NASA Technical Reports Server (NTRS)

    1986-01-01

    An engineering evaluation of dynamic data from SSME hot firing tests and SSV flights is summarized. The basic objective of the study is to provide analyses of vibration, strain and dynamic pressure measurements in support of MSFC performance and reliability improvement programs. A brief description of the SSME test program is given and a typical test evaluation cycle reviewed. Data banks generated to characterize SSME component dynamic characteristics are described and statistical analyses performed on these data base measurements are discussed. Analytical models applied to define the dynamic behavior of SSME components (such as turbopump bearing elements and the flight accelerometer safety cut-off system) are also summarized. Appendices are included to illustrate some typical tasks performed under this study.

  16. Statistical studies of selected trace elements with reference to geology and genesis of the Carlin gold deposit, Nevada

    USGS Publications Warehouse

    Harris, Michael; Radtke, Arthur S.

    1976-01-01

    Linear regression and discriminant analyses techniques were applied to gold, mercury, arsenic, antimony, barium, copper, molybdenum, lead, zinc, boron, tellurium, selenium, and tungsten analyses from drill holes into unoxidized gold ore at the Carlin gold mine near Carlin, Nev. The statistical treatments employed were used to judge proposed hypotheses on the origin and geochemical paragenesis of this disseminated gold deposit.

  17. The Effects of Using a Wiki on Student Engagement and Learning of Report Writing Skills in a University Statistics Course

    ERIC Educational Resources Information Center

    Neumann, David L.; Hood, Michelle

    2009-01-01

    A wiki was used as part of a blended learning approach to promote collaborative learning among students in a first year university statistics class. One group of students analysed a data set and communicated the results by jointly writing a practice report using a wiki. A second group analysed the same data but communicated the results in a…

  18. Extreme between-study homogeneity in meta-analyses could offer useful insights.

    PubMed

    Ioannidis, John P A; Trikalinos, Thomas A; Zintzaras, Elias

    2006-10-01

    Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We examined whether it is also important to probe whether there is extreme between-study homogeneity. We used heterogeneity tests with left-sided statistical significance for inference and developed a Monte Carlo simulation test for testing extreme homogeneity in risk ratios across studies, using the empirical distribution of the summary risk ratio and heterogeneity statistic. A left-sided P=0.01 threshold was set for claiming extreme homogeneity to minimize type I error. Among 11,803 meta-analyses with binary contrasts from the Cochrane Library, 143 (1.21%) had left-sided P-value <0.01 for the asymptotic Q statistic and 1,004 (8.50%) had left-sided P-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies in the meta-analyses. We identified examples where extreme between-study homogeneity (left-sided P-value <0.01) could result from various possibilities beyond chance. These included inappropriate statistical inference (asymptotic vs. Monte Carlo), use of a specific effect metric, correlated data or stratification using strong predictors of outcome, and biases and potential fraud. Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.
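    A compact sketch of the Monte Carlo idea: simulate studies under a common risk ratio and ask how often the simulated Q statistic falls at or below the observed one. The 2x2 tables and the 0.5 continuity correction are illustrative choices, not the authors' exact procedure.

    ```python
    # Monte Carlo left-sided homogeneity test for risk ratios.
    import numpy as np

    rng = np.random.default_rng(3)
    e1 = np.array([10, 12,  9, 11]); n1 = np.array([100, 110, 95, 105])  # treated
    e0 = np.array([20, 22, 19, 21]); n0 = np.array([100, 110, 95, 105])  # control

    def q_statistic(e1, n1, e0, n0):
        r1 = (e1 + 0.5) / (n1 + 0.5)          # continuity-corrected risks
        r0 = (e0 + 0.5) / (n0 + 0.5)
        y = np.log(r1 / r0)                   # log risk ratios
        v = 1/(e1 + 0.5) - 1/(n1 + 0.5) + 1/(e0 + 0.5) - 1/(n0 + 0.5)
        w = 1 / v
        mu = (w * y).sum() / w.sum()
        return ((y - mu) ** 2 * w).sum(), np.exp(mu)

    q_obs, rr_hat = q_statistic(e1, n1, e0, n0)
    p0 = e0 / n0                              # per-study baseline risks
    sims = np.array([q_statistic(rng.binomial(n1, np.clip(rr_hat * p0, 0, 1)),
                                 n1, rng.binomial(n0, p0), n0)[0]
                     for _ in range(5000)])
    print(f"Q = {q_obs:.2f}, left-sided Monte Carlo P = {(sims <= q_obs).mean():.3f}")
    ```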

  19. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    PubMed Central

    Calcagno, Cristina; Coppo, Mario

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.

  20. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    PubMed

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
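    The tool itself builds on the C++/FastFlow framework; the Python sketch below only illustrates the pipelined simulation-to-online-statistics pattern, with a toy stochastic model and Welford's online mean/variance standing in for the real stages.

    ```python
    # Pipelined simulation -> online statistics, in miniature.
    import numpy as np
    from collections import defaultdict

    def simulate_trajectories(n_traj, n_steps, rng):
        """Stage 1: stream (time, value) pairs instead of materialising
        all trajectories first."""
        state = np.zeros(n_traj)
        for t in range(n_steps):
            state = state + rng.normal(0.1, 1.0, n_traj)   # toy dynamics
            for v in state:
                yield t, v

    class OnlineStats:
        """Stage 2: Welford's online mean/variance, one accumulator per time."""
        def __init__(self):
            self.n = defaultdict(int)
            self.mean = defaultdict(float)
            self.m2 = defaultdict(float)
        def push(self, t, x):
            self.n[t] += 1
            d = x - self.mean[t]
            self.mean[t] += d / self.n[t]
            self.m2[t] += d * (x - self.mean[t])
        def variance(self, t):
            return self.m2[t] / (self.n[t] - 1)

    stats = OnlineStats()
    for t, v in simulate_trajectories(200, 50, np.random.default_rng(4)):
        stats.push(t, v)        # the analysis keeps pace with the simulation
    print(stats.mean[49], stats.variance(49))
    ```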

  1. Sensitivity of Assimilated Tropical Tropospheric Ozone to the Meteorological Analyses

    NASA Technical Reports Server (NTRS)

    Hayashi, Hiroo; Stajner, Ivanka; Pawson, Steven; Thompson, Anne M.

    2002-01-01

    Tropical tropospheric ozone fields from two different experiments performed with an off-line ozone assimilation system developed in NASA's Data Assimilation Office (DAO) are examined. Assimilated ozone fields from the two experiments are compared with the collocated ozone profiles from the Southern Hemispheric Additional Ozonesondes (SHADOZ) network. Results are presented for 1998. The ozone assimilation system includes a chemistry-transport model, which uses analyzed winds from the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The two experiments use wind fields from different versions of GEOS DAS: an operational version of the GEOS-2 system and a prototype of the GEOS-4 system. While both versions of the DAS utilize the Physical-space Statistical Analysis System and use comparable observations, they use entirely different general circulation models and data insertion techniques. The shape of the annual-mean vertical profile of the assimilated ozone fields is sensitive to the meteorological analyses, with the GEOS-4-based ozone being closest to the observations. This indicates that the resolved transport in GEOS-4 is more realistic than in GEOS-2. Remaining uncertainties include quantification of the representation of sub-grid-scale processes in the transport calculations, which plays an important role in the locations and seasons where convection dominates the transport.

  2. From sexless to sexy: Why it is time for human genetics to consider and report analyses of sex.

    PubMed

    Powers, Matthew S; Smith, Phillip H; McKee, Sherry A; Ehringer, Marissa A

    2017-01-01

    Science has come a long way with regard to the consideration of sex differences in clinical and preclinical research, but one field remains behind the curve: human statistical genetics. The goal of this commentary is to raise awareness and discussion about how to best consider and evaluate possible sex effects in the context of large-scale human genetic studies. Over the course of this commentary, we reinforce the importance of interpreting genetic results in the context of biological sex, establish evidence that sex differences are not being considered in human statistical genetics, and discuss how best to conduct and report such analyses. Our recommendation is to run stratified analyses by sex no matter the sample size or the result and report the findings. Summary statistics from stratified analyses are helpful for meta-analyses, and patterns of sex-dependent associations may be hidden in a combined dataset. In the age of declining sequencing costs, large consortia efforts, and a number of useful control samples, it is now time for the field of human genetics to appropriately include sex in the design, analysis, and reporting of results.

  3. Transfusion Indication Threshold Reduction (TITRe2) randomized controlled trial in cardiac surgery: statistical analysis plan.

    PubMed

    Pike, Katie; Nash, Rachel L; Murphy, Gavin J; Reeves, Barnaby C; Rogers, Chris A

    2015-02-22

    The Transfusion Indication Threshold Reduction (TITRe2) trial is the largest randomized controlled trial to date to compare red blood cell transfusion strategies following cardiac surgery. This update presents the statistical analysis plan, detailing how the study will be analyzed and presented. The statistical analysis plan has been written following recommendations from the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, prior to database lock and the final analysis of trial data. Outlined analyses are in line with the Consolidated Standards of Reporting Trials (CONSORT). The study aims to randomize 2000 patients from 17 UK centres. Patients are randomized to either a restrictive (transfuse if haemoglobin concentration <7.5 g/dl) or liberal (transfuse if haemoglobin concentration <9 g/dl) transfusion strategy. The primary outcome is a binary composite outcome of any serious infectious or ischaemic event in the first 3 months following randomization. The statistical analysis plan details how non-adherence with the intervention, withdrawals from the study, and the study population will be derived and dealt with in the analysis. The planned analyses of the trial primary and secondary outcome measures are described in detail, including approaches taken to deal with multiple testing, model assumptions not being met and missing data. Details of planned subgroup and sensitivity analyses and pre-specified ancillary analyses are given, along with potential issues that have been identified with such analyses and possible approaches to overcome such issues. ISRCTN70923932.

  4. Automated mesostructural analyses using GIS, Beta test: Paleozoic structures from the New Jersey Great Valley region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, G.C.; French, M.A.; Monteverde, D.H.

    1993-03-01

    An automated method has been developed for representing outcrop data on geologic structures on maps. Using an MS-DOS custom database management system in conjunction with the ARC/INFO Geographic Information System (GIS), trends of geologic structures are plotted with user-specified symbols. The length of structural symbols can be frequency-weighted based on collective values from structural domains. The PC-based data manager is the NJGS Field Data Management System (FMS) Version 2.0, which includes sort, output, and analysis functions for structural data input in either azimuth or quadrant form. Program options include lineament sorting, data output to other data management and analysis software, and a circular histogram (rose diagram) routine for trend frequency analysis. Trends can be displayed with either half- or full-rose diagrams using either 10° sectors or one-degree spikes for strike, trend, or dip azimuth readings. Scalar and vector statistics are both included. For the mesostructural analysis, ASCII files containing the station number, structural trend and inclination, and plot-symbol-length value are downloaded from FMS and uploaded into an ARC/INFO macro which sequentially plots the information. Plots can be generated in conjunction with any complementary GIS coverage for various types of spatial analyses. Mesostructural plots can be used for regional tectonic analyses, for hydrogeologic analysis of fractured bedrock aquifers, or for ground-truthing data from fracture-trace or lineament analyses.
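    A small sketch of the 10° sector binning behind such a strike-frequency rose diagram; the azimuth readings are invented.

    ```python
    # 10-degree sector binning for a strike-frequency rose diagram.
    import numpy as np

    strikes = np.array([12, 15, 18, 95, 100, 102, 171, 174, 178, 350])  # degrees

    folded = strikes % 180            # a strike and its reciprocal are one line
    counts, edges = np.histogram(folded, bins=np.arange(0, 190, 10))
    for lo, c in zip(edges[:-1], counts):
        print(f"{lo:3.0f}-{lo + 10:3.0f} deg: {'#' * c}")
    ```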

  5. Empirically Derived Personality Subtyping for Predicting Clinical Symptoms and Treatment Response in Bulimia Nervosa

    PubMed Central

    Haynos, Ann F.; Pearson, Carolyn M.; Utzinger, Linsey M.; Wonderlich, Stephen A.; Crosby, Ross D.; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.

    2016-01-01

    Objective Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Methods Using variables from the Dimensional Assessment of Personality Pathology–Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). Results There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = .03) and purging (p = .01) frequency at EOT and binge eating frequency at follow-up (p = .045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = .04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Discussion Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions.
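    A hedged sketch of the clustering step, assuming standardized questionnaire scales fed to K-means with three clusters; the data below are random stand-ins for the DAPP-BQ variables and outcome.

    ```python
    # K-means subtyping of personality scales, then outcome comparison.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    X = rng.standard_normal((80, 6))       # 80 patients x 6 personality scales
    binge_freq = rng.poisson(8, 80)        # toy outcome variable

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(X))
    for k in range(3):
        sel = labels == k
        print(f"cluster {k}: n={sel.sum()}, "
              f"mean binge freq={binge_freq[sel].mean():.1f}")
    ```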

  6. Uncovering hidden heterogeneity: Geo-statistical models illuminate the fine scale effects of boating infrastructure on sediment characteristics and contaminants.

    PubMed

    Hedge, L H; Dafforn, K A; Simpson, S L; Johnston, E L

    2017-06-30

    Infrastructure associated with coastal communities is likely not only to directly displace natural systems, but also to leave environmental 'footprints' that stretch over multiple scales. Some coastal infrastructure will, therefore, generate a hidden layer of habitat heterogeneity in sediment systems that is not immediately observable in classical impact assessment frameworks. We examine the hidden heterogeneity associated with one of the most ubiquitous coastal modifications: dense swing mooring fields. Using a model-based geo-statistical framework, we highlight the variation in sedimentology throughout mooring fields and reference locations. Moorings were correlated with patches of sediment with larger particle sizes, and associated metal(loid) concentrations in these patches were depressed. Our work highlights two important ideas: i) mooring fields create a mosaic of habitat in which contamination decreases and grain sizes increase close to moorings, and ii) model-based frameworks provide an information-rich, easy-to-interpret way to communicate complex analyses to stakeholders. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  7. Quantitative EEG analysis of the maturational changes associated with childhood absence epilepsy

    NASA Astrophysics Data System (ADS)

    Rosso, O. A.; Hyslop, W.; Gerlach, R.; Smith, R. L. L.; Rostas, J. A. P.; Hunter, M.

    2005-10-01

    This study aimed to examine the background electroencephalography (EEG) in children with childhood absence epilepsy, a condition whose presentation has strong developmental links. EEG hallmarks of absence seizure activity are widely accepted and there is recognition that the bulk of inter-ictal EEG in this group is normal to the naked eye. This multidisciplinary study aimed to use the normalized total wavelet entropy (NTWS) (Signal Processing 83 (2003) 1275) to examine the background EEG of those patients demonstrating absence seizure activity, and compare it with children without absence epilepsy. This calculation can be used to define the degree of order in a system, with higher levels of entropy indicating a more disordered (chaotic) system. Results were subjected to further statistical analyses of significance. Entropy values were calculated for patients versus controls. For all channels combined, patients with absence epilepsy showed (statistically significant) lower entropy values than controls. The size of the difference in entropy values was not uniform, with certain EEG electrodes consistently showing greater differences than others.
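    A minimal sketch of the normalized total wavelet entropy: relative wavelet energies per detail level, with the Shannon entropy normalized by ln(number of levels). It requires the PyWavelets package, and the EEG segment here is synthetic noise.

    ```python
    # Normalized total wavelet entropy of a signal.
    import numpy as np
    import pywt

    def wavelet_entropy(signal, wavelet="db4", level=6):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energy = np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail levels
        p = energy / energy.sum()              # relative wavelet energies
        return -np.sum(p * np.log(p)) / np.log(len(p))  # higher = more disordered

    eeg_segment = np.random.default_rng(6).standard_normal(1024)
    print(wavelet_entropy(eeg_segment))
    ```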

  8. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most distinctive. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter, and even subsets of parameters representing entire contributing analyses, can now be rank-ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element-based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large-sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
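    A toy sketch of the canonical-correlation screening idea: rank each input parameter by its first canonical correlation with the whole response vector. This is a generic scikit-learn illustration of the concept, not the study's ACCA implementation.

    ```python
    # Rank input parameters against a multi-response output via CCA.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(7)
    X = rng.standard_normal((300, 6))                     # 6 input parameters
    Y = np.column_stack([                                 # 3 component responses
        X[:, 0] + 0.1 * rng.standard_normal(300),
        X[:, 0] - X[:, 2] + 0.1 * rng.standard_normal(300),
        0.5 * X[:, 2] + 0.1 * rng.standard_normal(300)])

    def first_canonical_corr(x, Y):
        u, v = CCA(n_components=1).fit(x, Y).transform(x, Y)
        return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

    scores = [(first_canonical_corr(X[:, [i]], Y), i) for i in range(X.shape[1])]
    for r, i in sorted(scores, reverse=True):             # parameters 0 and 2 lead
        print(f"parameter {i}: canonical correlation {r:.2f}")
    ```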

  9. An evolving computational platform for biological mass spectrometry: workflows, statistics and data mining with MASSyPup64.

    PubMed

    Winkler, Robert

    2015-01-01

    In biological mass spectrometry, crude instrumental data need to be converted into meaningful theoretical models. Several data processing and data evaluation steps are required to come to the final results. These operations are often difficult to reproduce because of overly specific computing platforms. This effect, known as 'workflow decay', can be diminished by using a standardized informatic infrastructure. Thus, we compiled an integrated platform, which contains ready-to-use tools and workflows for mass spectrometry data analysis. Apart from general unit operations, such as peak picking and identification of proteins and metabolites, we put a strong emphasis on the statistical validation of results and Data Mining. MASSyPup64 includes, e.g., the OpenMS/TOPPAS framework, the Trans-Proteomic-Pipeline programs, the ProteoWizard tools, X!Tandem, Comet and SpiderMass. The statistical computing language R is installed with packages for MS data analyses, such as XCMS/metaXCMS and MetabR. The R package Rattle provides user-friendly access to multiple Data Mining methods. Further, we added the non-conventional spreadsheet program teapot for editing large data sets and a command line tool for transposing large matrices. Individual programs, console commands and modules can be integrated using the Workflow Management System (WMS) Taverna. We explain the useful combination of the tools with practical examples: (1) a workflow for protein identification and validation, with subsequent Association Analysis of peptides, (2) cluster analysis and Data Mining in targeted Metabolomics, and (3) raw data processing, Data Mining and identification of metabolites in untargeted Metabolomics. Association Analyses reveal relationships between variables across different sample sets. We present its application for finding co-occurring peptides, which can be used for targeted proteomics, the discovery of alternative biomarkers and protein-protein interactions. Data Mining derived models displayed a higher robustness and accuracy for classifying sample groups in targeted Metabolomics than cluster analyses. Random Forest models do not only provide predictive models, which can be deployed for new data sets, but also the variable importance. We demonstrate that the latter is especially useful for tracking down significant signals and affected pathways in untargeted Metabolomics. Thus, Random Forest modeling supports the unbiased search for relevant biological features in Metabolomics. Our results clearly manifest the importance of Data Mining methods to disclose non-obvious information in biological mass spectrometry. The application of a Workflow Management System and the integration of all required programs and data in a consistent platform makes the presented data analysis strategies reproducible for non-expert users. The simple remastering process and the Open Source licenses of MASSyPup64 (http://www.bioprocess.org/massypup/) enable the continuous improvement of the system.
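    A brief sketch of the Random Forest classification-plus-importance idea highlighted above, using scikit-learn on invented metabolite intensities rather than the MASSyPup64/Rattle toolchain.

    ```python
    # Random Forest classification with variable importance ranking.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(8)
    X = rng.lognormal(size=(60, 200))        # 60 samples x 200 metabolite peaks
    y = np.repeat([0, 1], 30)                # two sample groups
    X[y == 1, 7] *= 3.0                      # plant one truly affected feature

    rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0).fit(X, y)
    print("OOB accuracy:", rf.oob_score_)
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("most important peaks:", top)      # feature 7 should rank first
    ```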

  10. Cause-specific mortality according to urine albumin creatinine ratio in the general population.

    PubMed

    Skaaby, Tea; Husemoen, Lise Lotte Nystrup; Ahluwalia, Tarunveer Singh; Rossing, Peter; Jørgensen, Torben; Thuesen, Betina Heinsbæk; Pisinger, Charlotta; Rasmussen, Knud; Linneberg, Allan

    2014-01-01

    Urine albumin creatinine ratio, UACR, is positively associated with all-cause mortality, cardiovascular disease and diabetes in observational studies. Whether a high UACR is also associated with other causes of death is unclear. We investigated the association between UACR and cause-specific mortality. We included a total of 9,125 individuals from two population-based studies, Monica10 and Inter99, conducted in 1993-94 and 1999-2001, respectively. Urine albumin creatinine ratio was measured from spot urine samples by standard methods. Information on causes of death was obtained from The Danish Register of Causes of Death until 31 December 2010. There were a total of 920 deaths, and the median follow-up was 11.3 years. Multivariable Cox regression analyses with age as underlying time axis showed statistically significant positive associations between UACR status and risk of all-cause mortality, endocrine nutritional and metabolic diseases, mental and behavioural disorders, diseases of the circulatory system, and diseases of the respiratory system with hazard ratios 1.56, 6.98, 2.34, 2.03, and 1.91, for the fourth UACR compared with the first, respectively. Using UACR as a continuous variable, we also found a statistically significant positive association with risk of death caused by diseases of the digestive system with a hazard ratio of 1.02 per 10 mg/g higher UACR. We found statistically significant positive associations between baseline UACR and death from all-cause mortality, endocrine nutritional and metabolic diseases, and diseases of the circulatory system and possibly mental and behavioural disorders, and diseases of the respiratory and digestive system.

  11. Simplified greywater treatment systems: Slow filters of sand and slate waste followed by granular activated carbon.

    PubMed

    Zipf, Mariah Siebert; Pinheiro, Ivone Gohr; Conegero, Mariana Garcia

    2016-07-01

    One of the main actions of sustainability that is applicable to residential, commercial, and public buildings is the rational use of water, which contemplates the reuse of greywater as one of the main options for reducing the consumption of drinking water. Therefore, this research aimed to study the efficiencies of simplified treatments for greywater reuse using slow sand and slow slate waste filtration, both followed by granular activated carbon filters. The system monitoring was conducted over 28 weeks, using analyses of the following parameters: pH, turbidity, apparent color, biochemical oxygen demand (BOD), chemical oxygen demand (COD), surfactants, total coliforms, and thermotolerant coliforms. The system was run at two different filtration rates: 6 and 2 m³/m²/day. Statistical analyses showed no significant differences in the majority of the results when the filtration rate changed from 6 to 2 m³/m²/day. The average removal efficiencies with regard to turbidity, apparent color, COD and BOD were 61, 54, 56, and 56%, respectively, for the sand filter, and 66, 61, 60, and 51%, respectively, for the slate waste filter. Both systems showed good efficiencies in removing surfactants, around 70%, while the pH reached values of around 7.80. The average removal efficiencies of the total and thermotolerant coliforms were 61 and 90%, respectively, for the sand filter, and 67 and 80%, respectively, for the slate waste filter. The statistical analysis found no significant differences between the responses of the two systems, which attests that the slate waste can be a substitute for sand. The maximum levels of efficiency were high, indicating the potential of the systems and suggesting their optimization in order to achieve much higher average efficiencies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. CMG-biotools, a free workbench for basic comparative microbial genomics.

    PubMed

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics that can reveal strain-specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source codes for all programs are provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training.

  13. Formative assessment in mathematics for engineering students

    NASA Astrophysics Data System (ADS)

    Ní Fhloinn, Eabhnat; Carr, Michael

    2017-07-01

    In this paper, we present a range of formative assessment types for engineering mathematics, including in-class exercises, homework, mock examination questions, table quizzes, presentations, critical analyses of statistical papers, peer-to-peer teaching, online assessments and electronic voting systems. We provide practical tips for the implementation of such assessments, with a particular focus on time or resource constraints and large class sizes, as well as effective methods of feedback. In addition, we consider the benefits of such formative assessments for students and staff.

  14. Are we there yet?

    PubMed

    Cristianini, Nello

    2010-05-01

    Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence? Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. The 1978 Italian mental health law--a personal evaluation: a review.

    PubMed Central

    Palermo, G B

    1991-01-01

    The author discusses the sociopsychiatric consequences of the 1978 Italian mental health law. He also reviews the international scientific ideas that led up to it. The sociopolitical psychiatric views of the late Franco Basaglia, pioneer of the change in the mental health system of the Italian Republic, are described. Statistical reports and critical analyses are reported. Objective data, based on the author's personal experience as a practising psychiatrist in Rome, Italy, from 1969 to 1987, are given. PMID:1999825

  16. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    PubMed

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
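    A short sketch contrasting a Shannon-Wiener value with the rarefaction approach recommended above, using the standard hypergeometric rarefaction formula on invented abundance vectors.

    ```python
    # Shannon-Wiener index vs expected species richness under rarefaction.
    import numpy as np
    from scipy.special import comb

    def shannon(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def rarefied_richness(counts, n):
        """Expected species count in a random subsample of size n."""
        N = counts.sum()
        return np.sum(1 - comb(N - counts, n) / comb(N, n))

    community_a = np.array([50, 30, 10, 5, 3, 2])
    community_b = np.array([120, 4, 3, 2, 1, 1, 1, 1])
    for name, c in [("A", community_a), ("B", community_b)]:
        print(f"{name}: H' = {shannon(c):.2f}, "
              f"E[S | n=20] = {rarefied_richness(c, 20):.2f}")
    ```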

  17. Assessing signal-to-noise in quantitative proteomics: multivariate statistical analysis in DIGE experiments.

    PubMed

    Friedman, David B

    2012-01-01

    All quantitative proteomics experiments measure variation between samples. When performing large-scale experiments that involve multiple conditions or treatments, the experimental design should include the appropriate number of individual biological replicates from each condition to enable the distinction between a relevant biological signal from technical noise. Multivariate statistical analyses, such as principal component analysis (PCA), provide a global perspective on experimental variation, thereby enabling the assessment of whether the variation describes the expected biological signal or the unanticipated technical/biological noise inherent in the system. Examples will be shown from high-resolution multivariable DIGE experiments where PCA was instrumental in demonstrating biologically significant variation as well as sample outliers, fouled samples, and overriding technical variation that would not be readily observed using standard univariate tests.
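    A minimal PCA sketch of the global variation check described above, with invented spot-abundance data containing a condition effect and one deliberately fouled sample.

    ```python
    # PCA as a global check on biological signal vs technical noise.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(9)
    X = rng.standard_normal((12, 800))     # 12 gels x 800 standardized spot ratios
    X[6:, :40] += 1.5                      # condition B shifts a subset of spots
    X[3] += rng.standard_normal(800) * 4   # one fouled/outlier sample

    scores = PCA(n_components=2).fit_transform(X)
    for i, (pc1, pc2) in enumerate(scores):
        group = "A" if i < 6 else "B"      # conditions separate; sample 3 stands out
        print(f"sample {i:2d} ({group}): PC1={pc1:7.2f}  PC2={pc2:7.2f}")
    ```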

  18. A canonical neural mechanism for behavioral variability

    NASA Astrophysics Data System (ADS)

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-05-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

  19. Information processing requirements for on-board monitoring of automatic landing

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  20. Public benefits of public power. [Booklet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1980-01-01

    The principal characteristics and benefits of public power are described using a question and answer format. The book begins by defining public power, describing its history, and confirming that people have a right to choose. The answers to questions about the benefits of public power are grouped under three major headings: rates and local control; economic and political benefits; and power supply and consumption. Establishing community public systems is hard work, requiring a progression through local government authorization, legal and financial analyses, public persuasion, voter approval, and a bond issue. Electric utility statistics show that local public systems outnumber all other types of ownership. (DCK)

  1. [The National Registry of Occupational Exposures to Carcinogens (SIREP): information system and results].

    PubMed

    Scarselli, Alberto

    2011-01-01

    The recording of occupational exposure to carcinogens is a fundamental step in assessing exposure risk factors in workplaces. The aim of this paper is to describe the characteristics of the Italian register of occupational exposures to carcinogenic agents (SIREP). The core data collected in the system are: firm characteristics, worker demographics, and exposure information. Descriptive statistical analyses were performed by economic activity sector, carcinogenic agent and geographic location. Currently, the recorded information covers 12,300 firms, 130,000 workers, and 250,000 exposures. The SIREP database has been set up in order to assess, control and reduce carcinogenic risk in the workplace.

  2. Routine sampling and the control of Legionella spp. in cooling tower water systems.

    PubMed

    Bentham, R H

    2000-10-01

    Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
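    A small sketch of the lag-correlation check underlying the time series result: correlate a system's counts with the same series taken two weeks later. The counts are invented, and the assumption here is weekly sampling, so a lag of 2 corresponds to two weeks.

    ```python
    # Lag correlation of Legionella counts sampled two weeks apart.
    import numpy as np

    counts = np.array([200, 50, 1200, 0, 400, 900, 100, 2500,
                       300, 0, 700, 150, 1800, 90, 600, 250], float)

    lag = 2   # two weekly samples apart = two weeks
    r = np.corrcoef(counts[:-lag], counts[lag:])[0, 1]
    print(f"lag-{lag} autocorrelation: {r:.2f}")   # near 0: little predictive value
    ```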

  3. Comparison of Data Quality of NOAA's ISIS and SURFRAD Networks to NREL's SRRL-BMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderberg, M.; Sengupta, M.

    2014-11-01

    This report provides analyses of broadband solar radiometric data quality for the National Oceanic and Atmospheric Administration's Integrated Surface Irradiance Study and Surface Radiation Budget Network (SURFRAD) solar measurement networks. The data quality of these networks is compared to that of the National Renewable Energy Laboratory's Solar Radiation Research Laboratory Baseline Measurement System (SRRL-BMS), using native data resolutions and hourly averages of the data from the years 2002 through 2013. This report describes the solar radiometric data quality testing and flagging procedures and the method used to determine and tabulate data quality statistics. Monthly data quality statistics for each network were plotted by year against the statistics for the SRRL-BMS. Some of the plots are presented in the body of the report, but most are in the appendix. These plots indicate that the overall solar radiometric data quality of the SURFRAD network is superior to that of the Integrated Surface Irradiance Study network and can be comparable to SRRL-BMS.

  4. GenomeGraphs: integrated genomic data visualization with R.

    PubMed

    Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine

    2009-01-06

    Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, as an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types together in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package which can be used to visualize a multitude of genomic datasets within the statistical programming environment R.

  5. Statistical analysis of Thematic Mapper Simulator data for the geobotanical discrimination of rock types in southwest Oregon

    NASA Technical Reports Server (NTRS)

    Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.

    1984-01-01

    An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.
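
    As a rough illustration of the multivariate side of such an analysis, discriminant analysis on band responses can be sketched with scikit-learn; the two classes and band statistics below are invented stand-ins for the TMS data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Invented mean band responses for vegetation over two substrate classes.
ultramafic = rng.normal(loc=[0.21, 0.30, 0.45], scale=0.04, size=(50, 3))
nonultramafic = rng.normal(loc=[0.26, 0.27, 0.52], scale=0.04, size=(50, 3))
X = np.vstack([ultramafic, nonultramafic])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))  # separability of the two rock types
```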

  6. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  7. Reframing Serial Murder Within Empirical Research.

    PubMed

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  8. [Continuity of hospital identifiers in hospital discharge data - Analysis of the nationwide German DRG Statistics from 2005 to 2013].

    PubMed

    Nimptsch, Ulrike; Wengler, Annelene; Mansky, Thomas

    2016-11-01

    In Germany, nationwide hospital discharge data (DRG statistics provided by the research data centers of the Federal Statistical Office and the Statistical Offices of the 'Länder') are increasingly used as a data source for health services research. Within these data, hospitals can be separated via their hospital identifier ([Institutionskennzeichen] IK). However, this hospital identifier primarily designates the invoicing unit and is not necessarily equivalent to one hospital location. Aiming to investigate the direction and extent of possible bias in hospital-level analyses, this study examines the continuity of the hospital identifier within a cross-sectional and longitudinal approach and compares the results to official hospital census statistics. Within the DRG statistics from 2005 to 2013, the annual number of hospitals as classified by hospital identifiers was counted for each year of observation. The annual number of hospitals derived from DRG statistics was compared to the number of hospitals in the official census statistics 'Grunddaten der Krankenhäuser'. Subsequently, the temporal continuity of hospital identifiers in the DRG statistics was analyzed within cohorts of hospitals. Until 2013, the annual number of hospital identifiers in the DRG statistics fell by 175 (from 1,725 to 1,550). This decline affected only providers with small or medium case volume. The number of hospitals identified in the DRG statistics was lower than the number given in the census statistics (e.g., in 2013, 1,550 IK vs. 1,668 hospitals in the census statistics). The longitudinal analyses revealed that the majority of hospital identifiers persisted over the years of observation, while one fifth of hospital identifiers changed. In cross-sectional studies of German hospital discharge data, separating hospitals via the hospital identifier might lead to underestimating the number of hospitals and consequently overestimating the caseload per hospital. Discontinuities of hospital identifiers over time might impair the follow-up of hospital cohorts. These limitations must be taken into account in analyses of German hospital discharge data focusing on the hospital level. Copyright © 2016. Published by Elsevier GmbH.
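
    The counting described here reduces to two operations on the discharge records: a per-year count of distinct identifiers, and a persistence check for a base-year cohort. A minimal pandas sketch on invented records (column names are assumptions):

```python
import pandas as pd

# Invented discharge records: one row per case, with hospital identifier (IK) and year.
df = pd.DataFrame({
    "ik":   ["A", "A", "B", "C", "A", "B", "D", "A", "D"],
    "year": [2005, 2005, 2005, 2005, 2006, 2006, 2006, 2007, 2007],
})

# Cross-sectional: number of distinct identifiers per year.
print(df.groupby("year")["ik"].nunique())

# Longitudinal: identifiers of the 2005 cohort still present in each later year.
cohort = set(df.loc[df.year == 2005, "ik"])
for yr in sorted(df.year.unique()):
    present = cohort & set(df.loc[df.year == yr, "ik"])
    print(yr, f"{len(present)}/{len(cohort)} of 2005 cohort persist")
```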

  9. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    PubMed

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.

  10. Confronting Models with Data: The GEWEX Cloud Systems Study

    NASA Technical Reports Server (NTRS)

    Randall, David; Curry, Judith; Duynkerke, Peter; Krueger, Steven; Moncrieff, Mitchell; Ryan, Brian; Starr, David OC.; Miller, Martin; Rossow, William; Tselioudis, George

    2002-01-01

    The GEWEX Cloud System Study (GCSS; GEWEX is the Global Energy and Water Cycle Experiment) was organized to promote development of improved parameterizations of cloud systems for use in climate and numerical weather prediction models, with an emphasis on climate applications. The strategy of GCSS is to use two distinct kinds of models to analyze and understand observations of the behavior of several different types of cloud systems. Cloud-system-resolving models (CSRMs) have high enough spatial and temporal resolutions to represent individual cloud elements, but cover a wide enough range of space and time scales to permit statistical analysis of simulated cloud systems. Results from CSRMs are compared with detailed observations, representing specific cases based on field experiments, and also with statistical composites obtained from satellite and meteorological analyses. Single-column models (SCMs) are the surgically extracted column physics of atmospheric general circulation models. SCMs are used to test cloud parameterizations in an un-coupled mode, by comparison with field data and statistical composites. In the original GCSS strategy, data is collected in various field programs and provided to the CSRM Community, which uses the data to "certify" the CSRMs as reliable tools for the simulation of particular cloud regimes, and then uses the CSRMs to develop parameterizations, which are provided to the GCM Community. We report here the results of a re-thinking of the scientific strategy of GCSS, which takes into account the practical issues that arise in confronting models with data. The main elements of the proposed new strategy are a more active role for the large-scale modeling community, and an explicit recognition of the importance of data integration.

  11. Pooling sexes when assessing ground reaction forces during walking: Statistical Parametric Mapping versus traditional approach.

    PubMed

    Castro, Marcelo P; Pataky, Todd C; Sole, Gisela; Vilas-Boas, Joao Paulo

    2015-07-16

    Ground reaction force (GRF) data from men and women are commonly pooled for analyses. However, it may not be justifiable to pool sexes on the basis of discrete parameters extracted from continuous GRF gait waveforms because this can miss continuous effects. Forty healthy participants (20 men and 20 women) walked at a cadence of 100 steps per minute across two force plates while GRFs were recorded. Two statistical methods were used to test the null hypothesis of no mean GRF differences between sexes: (i) Statistical Parametric Mapping, using the entire three-component GRF waveform; and (ii) the traditional approach, using the first and second vertical GRF peaks. Statistical Parametric Mapping results suggested large sex differences, which post-hoc analyses suggested were due predominantly to higher anterior-posterior and vertical GRFs in early stance in women compared to men. The traditional approach showed statistically significant differences for the first GRF peak but similar values for the second GRF peak. These contrasting results emphasise that different parts of the waveform have different signal strengths, and thus that the traditional approach can amount to choosing arbitrary metrics and drawing arbitrary conclusions. We suggest that researchers and clinicians consider both the entire gait waveforms and sex-specificity when analysing GRF data. Copyright © 2015 Elsevier Ltd. All rights reserved.
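
    The contrast drawn here can be sketched directly. A pointwise two-sample t-statistic over the stance phase approximates the idea behind Statistical Parametric Mapping (the real method, implemented for example in the spm1d package, additionally uses random field theory to control the family-wise error over the continuum); the waveforms below are invented stand-ins for GRF data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t_pct = np.linspace(0, 100, 101)                       # % of stance phase
base = 0.8 + 0.4 * np.sin(np.pi * t_pct / 100)         # invented vertical GRF shape (BW)
men = base + rng.normal(0, 0.05, size=(20, 101))
women = base + 0.08 * np.exp(-((t_pct - 20) / 10) ** 2) \
        + rng.normal(0, 0.05, size=(20, 101))          # extra loading in early stance

t_stat, p = stats.ttest_ind(women, men, axis=0)        # pointwise two-sample t-test
print("largest effect near", t_pct[np.argmax(np.abs(t_stat))], "% of stance")

# Traditional approach: test only the discrete first peak (max of early stance).
p1_m, p1_w = men[:, :50].max(axis=1), women[:, :50].max(axis=1)
print("first-peak p-value:", stats.ttest_ind(p1_w, p1_m).pvalue)
```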

  12. Perceived Effectiveness among College Students of Selected Statistical Measures in Motivating Exercise Behavior

    ERIC Educational Resources Information Center

    Merrill, Ray M.; Chatterley, Amanda; Shields, Eric C.

    2005-01-01

    This study explored the effectiveness of selected statistical measures at motivating or maintaining regular exercise among college students. The study also considered whether ease in understanding these statistical measures was associated with perceived effectiveness at motivating or maintaining regular exercise. Analyses were based on a…

  13. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  14. The Empirical Nature and Statistical Treatment of Missing Data

    ERIC Educational Resources Information Center

    Tannenbaum, Christyn E.

    2009-01-01

    Introduction. Missing data is a common problem in research and can produce severely misleading analyses, including biased estimates of statistical parameters, and erroneous conclusions. In its 1999 report, the APA Task Force on Statistical Inference encouraged authors to report complications such as missing data and discouraged the use of…

  15. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  16. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... Programs and Data Files.'' This guidance is provided to inform study statisticians of recommendations for documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the...

  17. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
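
    For reference, the one-parameter (Rasch) model used in these analyses gives the probability that a student of ability theta answers an item of difficulty b_i correctly as

    $$P(X_i = 1 \mid \theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}},$$

    so items differ only in their difficulty b_i, and unidimensionality means that a single theta per student suffices to model all of that student's responses.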

  18. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    PubMed

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression.
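
    The core of summary statistics imputation, not spelled out in the abstract but standard in this literature, is the conditional expectation of unobserved association z-scores given observed ones under a multivariate normal model, with linkage disequilibrium (LD) estimated from a reference panel. Writing z_o for the observed z-scores and Sigma for the SNP correlation (LD) matrix,

    $$\hat{z}_u = \Sigma_{u,o}\, \Sigma_{o,o}^{-1}\, z_o,$$

    where Sigma_{u,o} holds correlations between unobserved and observed variants; the improvement described above additionally lets the effective sample size vary across SNVs.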

  19. Evaluation and application of summary statistic imputation to discover new height-associated loci

    PubMed Central

    2018-01-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian randomisation or LD-score regression. PMID:29782485

  20. ParallABEL: an R library for generalized parallelization of genome-wide association studies.

    PubMed

    Sangket, Unitsa; Mahasirimongkol, Surakameth; Chantratita, Wasun; Tandayya, Pichaya; Aulchenko, Yurii S

    2010-04-29

    Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous to acquire the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics. The input data of this group includes the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example, the summary statistics of genotype quality for each sample. The input data of this group includes individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses. The input data of this group includes pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as the linkage disequilibrium characterisation. The input data of this group includes pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL.
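
    ParallABEL itself is an R library built on Rmpi, but the first of the four groups it describes, independent per-SNP statistics, is straightforward to parallelize in any language. A Python multiprocessing sketch with invented data and a simple correlation test standing in for a real association test:

```python
import numpy as np
from multiprocessing import Pool
from scipy import stats

rng = np.random.default_rng(2)
N_IND, N_SNP = 500, 2000
genotypes = rng.integers(0, 3, size=(N_SNP, N_IND))   # 0/1/2 allele counts per SNP
phenotype = rng.normal(size=N_IND)

def snp_test(g):
    """Per-SNP association statistic: correlation of genotype with phenotype."""
    r, p = stats.pearsonr(g, phenotype)
    return p

if __name__ == "__main__":
    with Pool(4) as pool:                              # split SNPs across 4 workers
        pvalues = pool.map(snp_test, genotypes)
    print("smallest p-value:", min(pvalues))
```

    The other three groups parallelize the same way; the work is simply split over individuals, pairs of individuals, or pairs of SNPs instead of over SNPs.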

  1. Formalizing the definition of meta-analysis in Molecular Ecology.

    PubMed

    ArchMiller, Althea A; Bauer, Eric F; Koch, Rebecca E; Wijayawardena, Bhagya K; Anil, Ammu; Kottwitz, Jack J; Munsterman, Amelia S; Wilson, Alan E

    2015-08-01

    Meta-analysis, the statistical synthesis of pertinent literature to develop evidence-based conclusions, is relatively new to the field of molecular ecology, with the first meta-analysis published in the journal Molecular Ecology in 2003 (Slate & Phua 2003). The goal of this article is to formalize the definition of meta-analysis for the authors, editors, reviewers and readers of Molecular Ecology by completing a review of the meta-analyses previously published in this journal. We also provide a brief overview of the many components required for meta-analysis with a more specific discussion of the issues related to the field of molecular ecology, including the use and statistical considerations of Wright's FST and its related analogues as effect sizes in meta-analysis. We performed a literature review to identify articles published as 'meta-analyses' in Molecular Ecology, which were then evaluated by at least two reviewers. We specifically targeted Molecular Ecology publications because as a flagship journal in this field, meta-analyses published in Molecular Ecology have the potential to set the standard for meta-analyses in other journals. We found that while many of these reviewed articles were strong meta-analyses, others failed to follow standard meta-analytical techniques. One of these unsatisfactory meta-analyses was in fact a secondary analysis. Other studies attempted meta-analyses but lacked the fundamental statistics that are considered necessary for an effective and powerful meta-analysis. By drawing attention to the inconsistency of studies labelled as meta-analyses, we emphasize the importance of understanding the components of traditional meta-analyses to fully embrace the strengths of quantitative data synthesis in the field of molecular ecology. © 2015 John Wiley & Sons Ltd.
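
    Among the fundamental statistics such a review looks for, the most basic is the inverse-variance-weighted summary effect of a fixed-effect meta-analysis. With per-study effect sizes T_i (for example, transformed F_ST values) and sampling variances v_i,

    $$\bar{T} = \frac{\sum_i w_i T_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i}, \qquad \operatorname{SE}(\bar{T}) = \frac{1}{\sqrt{\sum_i w_i}},$$

    and a study that reports no effect sizes or weights cannot support this calculation, which is one way the unsatisfactory "meta-analyses" mentioned above fall short.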

  2. Regional regression equations for the estimation of selected monthly low-flow duration and frequency statistics at ungaged sites on streams in New Jersey

    USGS Publications Warehouse

    Watson, Kara M.; McHugh, Amy R.

    2014-01-01

    Regional regression equations were developed for estimating monthly flow-duration and monthly low-flow frequency statistics for ungaged streams in Coastal Plain and non-coastal regions of New Jersey for baseline and current land- and water-use conditions. The equations were developed to estimate 87 different streamflow statistics, which include the monthly 99-, 90-, 85-, 75-, 50-, and 25-percentile flow-durations of the minimum 1-day daily flow; the August–September 99-, 90-, and 75-percentile minimum 1-day daily flow; and the monthly 7-day, 10-year (M7D10Y) low-flow frequency. These 87 streamflow statistics were computed for 41 continuous-record streamflow-gaging stations (streamgages) with 20 or more years of record and 167 low-flow partial-record stations in New Jersey with 10 or more streamflow measurements. The regression analyses used to develop equations to estimate selected streamflow statistics were performed by testing the relation between flow-duration statistics and low-flow frequency statistics for 32 basin characteristics (physical characteristics, land use, surficial geology, and climate) at the 41 streamgages and 167 low-flow partial-record stations. The regression analyses determined drainage area, soil permeability, average April precipitation, average June precipitation, and percent storage (water bodies and wetlands) were the significant explanatory variables for estimating the selected flow-duration and low-flow frequency statistics. Streamflow estimates were computed for two land- and water-use conditions in New Jersey—land- and water-use during the baseline period of record (defined as the years a streamgage had little to no change in development and water use) and current land- and water-use conditions (1989–2008)—for each selected station using data collected through water year 2008. The baseline period of record is representative of a period when the basin was unaffected by change in development. The current period is representative of the increased development of the last 20 years (1989–2008). The two different land- and water-use conditions were used as surrogates for development to determine whether there have been changes in low-flow statistics as a result of changes in development over time. The State was divided into two low-flow regression regions, the Coastal Plain and the non-coastal region, in order to improve the accuracy of the regression equations. The left-censored parametric survival regression method was used for the analyses to account for streamgages and partial-record stations that had zero flow values for some of the statistics. The average standard error of estimate for the 348 regression equations ranged from 16 to 340 percent. These regression equations and basin characteristics are presented in the U.S. Geological Survey (USGS) StreamStats Web-based geographic information system application. This tool allows users to click on an ungaged site on a stream in New Jersey and get the estimated flow-duration and low-flow frequency statistics. Additionally, the user can click on a streamgage or partial-record station and get the “at-site” streamflow statistics. The low-flow characteristics of a stream ultimately affect the use of the stream by humans. Specific information on the low-flow characteristics of streams is essential to water managers who deal with problems related to municipal and industrial water supply, fish and wildlife conservation, and dilution of wastewater.
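
    Equations of this kind are typically fit by regressing a log-transformed flow statistic on log drainage area and other basin characteristics. The sketch below uses ordinary least squares purely to show the shape of such an equation; the published equations were fit with left-censored survival regression to handle zero flows, and all variables and coefficients here are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 40
drainage_area = rng.uniform(5, 200, n)     # mi^2
soil_perm = rng.uniform(0.5, 6.0, n)       # in/hr
storage_pct = rng.uniform(0, 20, n)        # % water bodies and wetlands
log_q = (0.9 * np.log10(drainage_area) + 0.3 * np.log10(soil_perm)
         - 0.01 * storage_pct - 1.0 + rng.normal(0, 0.1, n))  # invented low-flow statistic

X = sm.add_constant(np.column_stack(
    [np.log10(drainage_area), np.log10(soil_perm), storage_pct]))
fit = sm.OLS(log_q, X).fit()
print(fit.params)                          # intercept and the three coefficients
```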

  3. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
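
    As a cross-check on what such a program computes, the approximate power of a two-sided test of a bivariate correlation can be obtained from Fisher's z transformation. A small sketch (a normal approximation, not G*Power's exact routine):

```python
import numpy as np
from scipy import stats

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 via Fisher's z."""
    z_r = np.arctanh(r) * np.sqrt(n - 3)          # noncentrality on the z scale
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(z_r - z_crit) + stats.norm.cdf(-z_r - z_crit)

print(f"power: {correlation_power(r=0.3, n=84):.2f}")  # about 0.80
```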

  4. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
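
    As a toy illustration of the bounding idea (not the PUB methodology itself), propagating defensible interval bounds on a physical property through a simple model yields guaranteed bounds on a simulated quantity, in contrast to a statistical spread over sampled parameters; all names and numbers below are invented:

```python
def temperature_rise_bounds(energy_j, mass_kg, cp_bounds):
    """Bound the simulated temperature rise dT = E / (m * cp) given bounds on cp."""
    cp_lo, cp_hi = cp_bounds
    # The largest rise comes from the smallest specific heat, and vice versa.
    return energy_j / (mass_kg * cp_hi), energy_j / (mass_kg * cp_lo)

lo, hi = temperature_rise_bounds(5.0e4, 2.5, (380.0, 450.0))
print(f"temperature rise bounded to [{lo:.1f}, {hi:.1f}] K")
```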

  5. Spatial and Alignment Analyses for a field of Small Volcanic Vents South of Pavonis Mons Mars

    NASA Technical Reports Server (NTRS)

    Bleacher, J. E.; Glaze, L. S.; Greeley, R.; Hauber, E.; Baloga, S. M.; Sakimoto, S. E. H.; Williams, D. A.; Glotch, T. D.

    2008-01-01

    The Tharsis province of Mars displays a variety of small volcanic vent (10s km in diameter) morphologies. These features were identified in Mariner and Viking images [1-4], and Mars Orbiter Laser Altimeter (MOLA) data show them to be more abundant than originally observed [5,6]. Recent studies are classifying their diverse morphologies [7-9]. Building on this work, we are mapping the location of small volcanic vents (small-vents) in the Tharsis province using MOLA, Thermal Emission Imaging System, and High Resolution Stereo Camera data [10]. Here we report on a preliminary study of the spatial and alignment relationships between small-vents south of Pavonis Mons, as determined by nearest neighbor and two-point azimuth statistical analyses. Terrestrial monogenetic volcanic fields display four fundamental characteristics: 1) recurrence rates of eruptions, 2) vent abundance, 3) vent distribution, and 4) tectonic relationships [11]. While understanding recurrence rates typically requires field measurements, insight into vent abundance, distribution, and tectonic relationships can be established by mapping of remotely sensed data and subsequent application of spatial statistical studies [11,12], the goal of which is to link the distribution of vents to causal processes.
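
    Of the two analyses named here, the nearest-neighbor test is the simpler to sketch: the Clark-Evans statistic compares the observed mean nearest-neighbor distance with the value 0.5/sqrt(density) expected under complete spatial randomness. Invented vent coordinates stand in for the mapped catalog, and edge effects are ignored:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
vents = rng.uniform(0, 100, size=(60, 2))          # invented vent coordinates (km)
area = 100.0 * 100.0                               # study-region area (km^2)

tree = cKDTree(vents)
d, _ = tree.query(vents, k=2)                      # k=2: nearest neighbor besides self
mean_nn = d[:, 1].mean()
expected = 0.5 / np.sqrt(len(vents) / area)        # expectation under randomness
R = mean_nn / expected
print(f"Clark-Evans R = {R:.2f}  (<1 clustered, ~1 random, >1 dispersed)")
```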

  6. Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models

    NASA Astrophysics Data System (ADS)

    Giordano, Stefano

    2017-12-01

    Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the equivalence of the ensembles in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.
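
    The ensemble equivalence invoked here rests on the standard relation between the two partition functions: the Gibbs (isotensional) partition function is the discrete Laplace transform of the Helmholtz (isometric) one. For a chain with end-to-end extension r under applied force f,

    $$Z_G(f, T) = \sum_{r} Z_H(r, T)\, e^{\beta f r}, \qquad \beta = \frac{1}{k_B T},$$

    and equivalence in the thermodynamic limit amounts to the corresponding free energies per monomer becoming Legendre transforms of each other as the chain length grows.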

  7. Prognostic value of inflammation-based scores in patients with osteosarcoma

    PubMed Central

    Liu, Bangjian; Huang, Yujing; Sun, Yuanjue; Zhang, Jianjun; Yao, Yang; Shen, Zan; Xiang, Dongxi; He, Aina

    2016-01-01

    Systemic inflammation responses have been associated with cancer development and progression. C-reactive protein (CRP), Glasgow prognostic score (GPS), neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), and neutrophil-platelet score (NPS) have been shown to be independent risk factors in various types of malignant tumors. This retrospective analysis of 162 osteosarcoma cases was performed to estimate the predictive value of these scores for survival in osteosarcoma. All statistical analyses were performed with SPSS statistical software. Receiver operating characteristic (ROC) analysis was performed to set optimal thresholds; the area under the curve (AUC) was used to show the discriminatory abilities of the inflammation-based scores; Kaplan-Meier analysis was performed to plot survival curves; Cox regression models were employed to determine the independent prognostic factors. The optimal cut-off points of NLR, PLR, and LMR were 2.57, 123.5 and 4.73, respectively. GPS and NLR had a markedly larger AUC than CRP, PLR and LMR. High levels of CRP, GPS, NLR, and PLR, and a low level of LMR, were significantly associated with adverse prognosis (P < 0.05). Multivariate Cox regression analyses revealed that GPS, NLR, and occurrence of metastasis were the top risk factors associated with death of osteosarcoma patients. PMID:28008988
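
    The cut-off selection described here is commonly done by maximizing the Youden index (sensitivity + specificity - 1) along the ROC curve; a short scikit-learn sketch on invented NLR values:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
# Invented neutrophil-lymphocyte ratios: higher in patients with adverse outcome.
nlr = np.concatenate([rng.normal(2.2, 0.6, 100), rng.normal(3.2, 0.9, 60)])
died = np.concatenate([np.zeros(100), np.ones(60)])

fpr, tpr, thresholds = roc_curve(died, nlr)
best = np.argmax(tpr - fpr)                       # maximize the Youden index
print(f"AUC = {roc_auc_score(died, nlr):.2f}, optimal cut-off = {thresholds[best]:.2f}")
```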

  8. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    NASA Astrophysics Data System (ADS)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics, expressed by time series of daily level fluctuations at three wells in the lower Mississippi valley, after removing the seasonal cycle. First, using time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying fractal-scaling behavior that changes with time and location. Hence, we can distinguish a potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increments of groundwater level fluctuations exhibit a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict the transient dynamics (i.e., the fractal, non-Gaussian property) of groundwater levels well, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
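
    One way to see what a Hurst exponent measures is through increment-variance scaling: for fractional Brownian motion, the variance of increments over lag tau grows as tau^(2H). A minimal global-H estimator on a synthetic series (the study's TS-LHE is a time-resolved refinement of this idea):

```python
import numpy as np

def hurst_variance(x, lags=range(2, 20)):
    """Estimate H from the scaling Var[x(t+tau) - x(t)] ~ tau^(2H)."""
    tau = np.array(list(lags))
    v = np.array([np.var(x[l:] - x[:-l]) for l in tau])
    slope, _ = np.polyfit(np.log(tau), np.log(v), 1)
    return slope / 2.0

rng = np.random.default_rng(6)
bm = np.cumsum(rng.normal(size=5000))      # ordinary Brownian motion: H ~ 0.5
print(f"estimated H = {hurst_variance(bm):.2f}")
```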

  9. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    This study aimed to develop and internally validate a surgical site infection (SSI) prediction model for Japan, using a retrospective observational cohort design. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
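
    The modeling loop described here, logistic regression followed by a discrimination check, is compact to sketch. The risk factors below are invented stand-ins for the surveillance variables, and the apparent C-index computed on the training data would, as in the study, need bootstrap correction for optimism:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 2000
duration = rng.normal(120, 40, n)                    # invented operative minutes
asa = rng.integers(1, 5, n)                          # invented ASA class
logit = -4.0 + 0.01 * duration + 0.5 * asa
ssi = rng.random(n) < 1 / (1 + np.exp(-logit))       # simulated SSI outcomes

X = np.column_stack([duration, asa])
model = LogisticRegression().fit(X, ssi)
print(f"apparent C-index = {roc_auc_score(ssi, model.predict_proba(X)[:, 1]):.2f}")
```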

  10. Adaptive Communication: Languages with More Non-Native Speakers Tend to Have Fewer Word Forms

    PubMed Central

    Bentz, Christian; Verkerk, Annemarie; Kiela, Douwe; Hill, Felix; Buttery, Paula

    2015-01-01

    Explaining the diversity of languages across the world is one of the central aims of typological, historical, and evolutionary linguistics. We consider the effect of language contact (the number of non-native speakers a language has) on the way languages change and evolve. By analysing hundreds of languages within and across language families, regions, and text types, we show that languages with greater levels of contact typically employ fewer word forms to encode the same information content (a property we refer to as lexical diversity). Based on three types of statistical analyses, we demonstrate that this variance can in part be explained by the impact of non-native speakers on information encoding strategies. Finally, we argue that languages are information encoding systems shaped by the varying needs of their speakers. Language evolution and change should be modeled as the co-evolution of multiple intertwined adaptive systems: on one hand, the structure of human societies and human learning capabilities, and on the other, the structure of language. PMID:26083380

  11. Enhanced Component Performance Study. Emergency Diesel Generators 1998–2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2014-11-01

    This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2013 and maintenance unavailability (UA) performance data using Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2013. The objective is to present an analysis of factors that could influence the system and component trends in addition to annual performance trends of failure rates and probabilities. The factors analyzed for the EDG component are the differences in failures between all demands and actual unplanned engineered safety feature (ESF) demands, differences among manufacturers, and differences among EDG ratings. Statistical analyses of these differences are performed, and the results show whether pooling is acceptable across these factors. In addition, engineering analyses were performed with respect to time period and failure mode. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating.

  12. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    NASA Astrophysics Data System (ADS)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step toward solving the problem. Water scarcity in a region is gauged by the water scarcity index (WSI), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and a System Dynamic Model (SDM) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of WSI within that region. To predict future WSI, a logistic model, Grey Prediction, and statistics are applied in predicting variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the result of the modified model, after intervention Sudan's water situation will improve.

  13. Facilitating the Transition from Bright to Dim Environments

    DTIC Science & Technology

    2016-03-04

    For the parametric data, a multivariate ANOVA was used in determining the systematic presence of any statistically significant performance differences...performed. All significance levels were p < 0.05, and statistical analyses were performed with the Statistical Package for Social Sciences (SPSS ...1950. Age changes in rate and level of visual dark adaptation. Journal of Applied Physiology, 2, 407–411. Field, A. 2009. Discovering statistics

  14. RADSS: an integration of GIS, spatial statistics, and network service for regional data mining

    NASA Astrophysics Data System (ADS)

    Hu, Haitang; Bao, Shuming; Lin, Hui; Zhu, Qing

    2005-10-01

    Regional data mining, which aims at the discovery of knowledge about spatial patterns, clusters or associations between regions, has wide applications nowadays in the social sciences, such as sociology, economics, epidemiology, and criminology. Many applications in the regional or other social sciences are more concerned with spatial relationships than with precise geographical locations. Based on the spatial continuity rule derived from Tobler's first law of geography (observations at two sites tend to be more similar to each other if the sites are close together than if far apart), spatial statistics, as an important means for spatial data mining, allow users to extract interesting and useful information, such as spatial patterns, structure, association, outliers and interaction, from vast amounts of spatial and non-spatial data. Therefore, by integrating spatial statistical methods, geographical information systems become more powerful for gaining insight into the nature of the spatial structure of regional systems, and help researchers to be more careful when selecting appropriate models. However, the lack of such tools holds back the application of spatial data analysis techniques and the development of new methods and models (e.g., spatio-temporal models). Herein, we make an attempt to develop such integrated software and apply it to complex system analysis for the Poyang Lake Basin. This paper presents a framework for integrating GIS, spatial statistics and network services in regional data mining, as well as its implementation. After discussing the spatial statistics methods involved in regional complex system analysis, we introduce RADSS (Regional Analysis and Decision Support System), our new regional data mining tool that integrates GIS, spatial statistics and network services. RADSS includes functions for spatial data visualization, exploratory spatial data analysis, and spatial statistics. The tool also includes fundamental spatial and non-spatial databases on regional population and environment, which can be updated from external databases via CD or network. Utilizing this data mining and exploratory analytical tool, users can easily and quickly analyse huge amounts of interrelated regional data and better understand the spatial patterns and trends of regional development, so as to make credible, scientific decisions. Moreover, it can be used as an educational tool for spatial data analysis and environmental studies. In this paper, we also present a case study on the Poyang Lake Basin as an application of the tool and spatial data mining in complex environmental studies. Finally, several concluding remarks are discussed.
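
    Among the spatial statistics such a system exposes, Moran's I is the canonical measure of spatial association. A small self-contained sketch on an invented four-region example with binary contiguity weights:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I for values x and spatial weight matrix w (zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n, s0 = len(x), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Invented 2x2 grid of regions with rook contiguity.
w = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
income = [10.0, 12.0, 9.0, 13.0]
print(f"Moran's I = {morans_i(income, w):+.2f}")
```

    Values near +1 indicate clustering of similar values, values near 0 spatial randomness, and negative values a checkerboard-like pattern, as in this example.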

  15. Imaging Depression in Adults with ASD

    DTIC Science & Technology

    2017-10-01

    collected temporally close enough to imaging data in Phase 2 to be confidently incorporated in the planned statistical analyses, and (b) not unduly risk attrition between Phase 1 and 2, we chose to hold...supervision is ongoing (since 9/2014). • Co-I Dr. Lerner's 2nd year Clinical Psychology PhD students have participated in ADOS-2 Introductory Clinical

  16. The chemiluminescence based Ziplex automated workstation focus array reproduces ovarian cancer Affymetrix GeneChip expression profiles.

    PubMed

    Quinn, Michael C J; Wilson, Daniel J; Young, Fiona; Dempsey, Adam A; Arcand, Suzanna L; Birch, Ashley H; Wojnarowicz, Paulina M; Provencher, Diane; Mes-Masson, Anne-Marie; Englert, David; Tonin, Patricia N

    2009-07-06

    As gene expression signatures may serve as biomarkers, there is a need to develop technologies based on mRNA expression patterns that are adaptable for translational research. Xceed Molecular has recently developed the Ziplex technology, which can assay gene expression of a discrete number of genes as a focused array. The present study evaluated the reproducibility of the Ziplex system, as applied to ovarian cancer research, for genes shown to exhibit distinct expression profiles initially assessed by Affymetrix GeneChip analyses. The new chemiluminescence-based Ziplex gene expression array technology was evaluated for the expression of 93 genes selected based on their Affymetrix GeneChip profiles. Probe design was based on the Affymetrix target sequence, favoring the 3' UTR of transcripts in order to maximize reproducibility across platforms. Gene expression analysis was performed using the Ziplex Automated Workstation. Statistical analyses were performed to evaluate reproducibility of both the magnitude of expression and differences between normal and tumor samples by correlation analyses, fold change differences and statistical significance testing. Expression of 82 of 93 (88.2%) genes was highly correlated (p < 0.01) between the two platforms. Overall, 75 of 93 (80.6%) genes exhibited consistent results in normal versus tumor tissue comparisons for both platforms (p < 0.001). The fold change differences were concordant for 87 of 93 (94%) genes, and there was agreement between the platforms regarding statistical significance for 71 (76%) of these 87 genes. There was strong agreement between the two platforms as shown by comparisons of log2 fold differences of gene expression between tumor and normal samples (R = 0.93) and by Bland-Altman analysis, where greater than 90% of expression values fell within the 95% limits of agreement. Overall concordance of gene expression patterns based on correlations, statistical significance between tumor and normal ovary data, and fold changes was consistent between the Ziplex and Affymetrix platforms. The reproducibility and ease-of-use of the technology suggest that the Ziplex array is a suitable platform for translational research.
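
    The Bland-Altman check used here is straightforward to reproduce: compute per-gene differences between platforms and count how many fall within the mean difference plus or minus 1.96 standard deviations. A sketch on invented log2 values:

```python
import numpy as np

rng = np.random.default_rng(8)
affy = rng.normal(0, 1.5, 93)              # invented log2 fold changes, platform 1
ziplex = affy + rng.normal(0, 0.3, 93)     # platform 2, close to platform 1

diff = ziplex - affy
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)              # 95% limits of agreement around the bias
within = np.mean(np.abs(diff - bias) <= loa)
print(f"bias = {bias:+.2f}, limits = +/-{loa:.2f}, {within:.0%} within limits")
```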

  17. Ubiquity of Polynucleobacter necessarius subsp. asymbioticus in lentic freshwater habitats of a heterogenous 2000 km2 area

    PubMed Central

    Jezberová, Jitka; Jezbera, Jan; Brandt, Ulrike; Lindström, Eva S.; Langenheder, Silke; Hahn, Martin W.

    2010-01-01

    We present a survey on the distribution and habitat range of P. necessarius subspecies asymbioticus (PnecC), an important taxon in the water column of freshwater systems. We systematically sampled stagnant freshwater habitats in a heterogeneous 2000 km2 area, together with ecologically different habitats outside this area. In total, 137 lakes, ponds and puddles were investigated, representing an enormous diversity of habitats differing, e.g., in depth (<10 cm – 171 m) and pH (3.9 – 8.5). PnecC was detected by cultivation-independent methods in all investigated habitats, and its presence was confirmed by cultivation of strains from selected habitats, including the most extreme ones. The determined relative abundance of the subspecies ranged from slightly above 0% to 67% (average 14.5% ± 14.3%), and the highest observed absolute abundance was 5.3×10⁶ cells mL⁻¹. Statistical analyses revealed that the abundance of PnecC is partially controlled by low conductivity and pH and by factors linked to concentrations of humic substances, which might support the hypothesis that these bacteria utilize photodegradation products of humic substances. Based on the revealed statistical relationships, an average relative abundance of this subspecies of 20% in global freshwater habitats was extrapolated. Our study provides important implications for the current debate on ubiquity and biogeography in microorganisms. PMID:20041938

  18. Spatial variation in the bacterial and denitrifying bacterial community in a biofilter treating subsurface agricultural drainage.

    PubMed

    Andrus, J Malia; Porter, Matthew D; Rodríguez, Luis F; Kuehlhorn, Timothy; Cooke, Richard A C; Zhang, Yuanhui; Kent, Angela D; Zilles, Julie L

    2014-02-01

    Denitrifying biofilters can remove agricultural nitrates from subsurface drainage, reducing nitrate pollution that contributes to coastal hypoxic zones. The performance and reliability of natural and engineered systems dependent upon microbially mediated processes, such as denitrifying biofilters, can be affected by the spatial structure of their microbial communities. Furthermore, our understanding of the relationship between microbial community composition and function is influenced by the spatial distribution of samples. In this study we characterized the spatial structure of bacterial communities in a denitrifying biofilter in central Illinois. Bacterial communities were assessed using automated ribosomal intergenic spacer analysis for bacteria and terminal restriction fragment length polymorphism of nosZ for denitrifying bacteria. Non-metric multidimensional scaling and analysis of similarity (ANOSIM) analyses indicated that bacteria showed statistically significant spatial structure by depth and transect, while denitrifying bacteria did not exhibit significant spatial structure. For determination of spatial patterns, we developed a package of automated functions for the R statistical environment that allows directional analysis of microbial community composition data using either ANOSIM or Mantel statistics. Applying this package to the biofilter data, the flow path correlation range for the bacterial community was 6.4 m at the shallower, periodically inundated depth and 10.7 m at the deeper, continually submerged depth. These spatial structures suggest a strong influence of hydrology on the microbial community composition in these denitrifying biofilters. Understanding such spatial structure can also guide optimal sample collection strategies for microbial community analyses.
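
    The Mantel statistic used in these directional analyses correlates two distance matrices and assesses significance by permuting sample labels. A compact generic sketch (not the authors' R package), with invented coordinates and community profiles:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(d1, d2, n_perm=999, rng=np.random.default_rng(9)):
    """Mantel correlation between two condensed distance matrices, permutation p-value."""
    r_obs = np.corrcoef(d1, d2)[0, 1]
    m2 = squareform(d2)                                # back to a square matrix
    n = m2.shape[0]
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(n)                       # relabel samples
        r = np.corrcoef(d1, squareform(m2[np.ix_(idx, idx)]))[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

coords = np.random.default_rng(10).uniform(0, 10, (15, 2))    # invented sample locations
community = np.random.default_rng(11).random((15, 8))         # invented community profiles
r, p = mantel(pdist(coords), pdist(community, metric="braycurtis"))
print(f"Mantel r = {r:+.2f}, p = {p:.3f}")
```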

  19. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  20. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  1. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances. We use the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, wider tolerances for each component can be utilized while achieving the same target performance.
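
    For reference, the generalized lambda distribution (in the Ramberg-Schmeiser parameterization) is defined through its quantile function, which is what makes percentile-based fitting natural:

    $$Q(u) = \lambda_1 + \frac{u^{\lambda_3} - (1-u)^{\lambda_4}}{\lambda_2}, \qquad 0 \le u \le 1,$$

    where lambda_1 sets location, lambda_2 scale, and lambda_3 and lambda_4 the two tail shapes; the percentile method estimates the four parameters by matching sample percentiles to Q(u) at chosen values of u.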

  2. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    USGS Publications Warehouse

    Southard, Rodney E.

    2013-01-01

    The weather and precipitation patterns in Missouri vary considerably from year to year. In 2008, the statewide average rainfall was 57.34 inches and in 2012, the statewide average rainfall was 30.64 inches. This variability in precipitation and resulting streamflow in Missouri underlies the necessity for water managers and users to have reliable streamflow statistics and a means to compute select statistics at ungaged locations for a better understanding of water availability. Knowledge of surface-water availability is dependent on the streamflow data that have been collected and analyzed by the U.S. Geological Survey for more than 100 years at approximately 350 streamgages throughout Missouri. The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, computed streamflow statistics at streamgages through the 2010 water year, defined periods of drought and defined methods to estimate streamflow statistics at ungaged locations, and developed regional regression equations to compute selected streamflow statistics at ungaged locations. Streamflow statistics and flow durations were computed for 532 streamgages in Missouri and in neighboring States of Missouri. For streamgages with more than 10 years of record, Kendall’s tau was computed to evaluate for trends in streamflow data. If trends were detected, the variable length method was used to define the period of no trend. Water years were removed from the dataset from the beginning of the record for a streamgage until no trend was detected. Low-flow frequency statistics were then computed for the entire period of record and for the period of no trend if 10 or more years of record were available for each analysis. Three methods are presented for computing selected streamflow statistics at ungaged locations. The first method uses power curve equations developed for 28 selected streams in Missouri and neighboring States that have multiple streamgages on the same streams. Statistical estimates on one of these streams can be calculated at an ungaged location that has a drainage area that is between 40 percent of the drainage area of the farthest upstream streamgage and within 150 percent of the drainage area of the farthest downstream streamgage along the stream of interest. The second method may be used on any stream with a streamgage that has operated for 10 years or longer and for which anthropogenic effects have not changed the low-flow characteristics at the ungaged location since collection of the streamflow data. A ratio of drainage area of the stream at the ungaged location to the drainage area of the stream at the streamgage was computed to estimate the statistic at the ungaged location. The range of applicability is between 40- and 150-percent of the drainage area of the streamgage, and the ungaged location must be located on the same stream as the streamgage. The third method uses regional regression equations to estimate selected low-flow frequency statistics for unregulated streams in Missouri. This report presents regression equations to estimate frequency statistics for the 10-year recurrence interval and for the N-day durations of 1, 2, 3, 7, 10, 30, and 60 days. Basin and climatic characteristics were computed using geographic information system software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses based on existing digital geospatial data and previous studies. 
Spatial analyses for geographical bias in the predictive accuracy of the regional regression equations defined three low-flow regions within the State, representing the three major physiographic provinces in Missouri. Region 1 includes the Central Lowlands, Region 2 includes the Ozark Plateaus, and Region 3 includes the Mississippi Alluvial Plain. A total of 207 streamgages were used in the regression analyses for the regional equations. Of the 207 U.S. Geological Survey streamgages, 77 were located in Region 1, 120 were located in Region 2, and 10 were located in Region 3. Streamgages located outside of Missouri were selected to extend the range of data used for the independent variables in the regression analyses. Streamgages included in the regression analyses had 10 or more years of record and were considered to be affected minimally by anthropogenic activities or trends. Regional regression analyses identified three characteristics as statistically significant for the development of regional equations. For Region 1, drainage area, longest flow path, and streamflow-variability index were statistically significant. The range in the standard error of estimate for Region 1 is 79.6 to 94.2 percent. For Region 2, drainage area and streamflow-variability index were statistically significant, and the range in the standard error of estimate is 48.2 to 72.1 percent. For Region 3, drainage area and streamflow-variability index also were statistically significant, with a range in the standard error of estimate of 48.1 to 96.2 percent. Limitations on estimating low-flow frequency statistics at ungaged locations are dependent on the method used. The first method, power curve equations, was developed to estimate the selected statistics for ungaged locations on 28 selected streams with multiple streamgages located on the same stream. A second method uses a drainage-area ratio to compute statistics at an ungaged location using data from a single streamgage on the same stream with 10 or more years of record. Ungaged locations on these streams may use the ratio of the drainage area at an ungaged location to the drainage area at a streamgage location to scale the selected statistic value from the streamgage location to the ungaged location. This method can be used if the drainage area of the ungaged location is within 40 to 150 percent of the streamgage drainage area. The third method is the use of the regional regression equations. The limits for the use of these equations are based on the ranges of the characteristics used as independent variables and on the requirement that streams be affected minimally by anthropogenic activities.
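
    The second method, the drainage-area ratio, is simple enough to state as code. The sketch below assumes direct proportional scaling and encodes the report's 40- to 150-percent applicability limits; the function and variable names are illustrative.

        def dar_estimate(stat_gaged, da_gaged, da_ungaged):
            """Drainage-area-ratio estimate of a low-flow statistic at an
            ungaged site on the same stream as a streamgage.

            Applicable only when the ungaged drainage area is within 40 to
            150 percent of the gaged drainage area, per the report's limits.
            """
            ratio = da_ungaged / da_gaged
            if not 0.4 <= ratio <= 1.5:
                raise ValueError(f"area ratio {ratio:.2f} outside 40-150% limits")
            return stat_gaged * ratio

        # Example: scale a 7-day, 10-year low flow (7Q10, in cubic feet per
        # second) from a gage draining 250 mi^2 to an ungaged site at 300 mi^2.
        print(dar_estimate(stat_gaged=12.0, da_gaged=250.0, da_ungaged=300.0))  # 14.4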

  3. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  4. Quantitative Methods for Analysing Joint Questionnaire Data: Exploring the Role of Joint in Force Design

    DTIC Science & Technology

    2015-08-01

    the nine questions. The Statistical Package for the Social Sciences (SPSS) [11] was used to conduct statistical analysis on the sample. Two types...constructs. SPSS was again used to conduct statistical analysis on the sample. This time factor analysis was conducted. Factor analysis attempts to...Business Research Methods and Statistics using SPSS. P432. 11 IBM SPSS Statistics. (2012) 12 Burns, R.B., Burns, R.A. (2008) 'Business Research

  5. Implementation and validation of fully relativistic GW calculations: Spin–orbit coupling in molecules, nanocrystals, and solids

    DOE PAGES

    Scherpelz, Peter; Govoni, Marco; Hamada, Ikutaro; ...

    2016-06-22

    We present an implementation of G0W0 calculations including spin–orbit coupling (SOC) enabling investigations of large systems, with thousands of electrons, and we discuss results for molecules, solids, and nanocrystals. Using a newly developed set of molecules with heavy elements (called GW-SOC81), we find that, when based upon hybrid density functional calculations, fully relativistic (FR) and scalar-relativistic (SR) G0W0 calculations of vertical ionization potentials (VIPs) both yield excellent performance compared to experiment, with errors below 1.9%. We demonstrate that while SR calculations have higher random errors, FR calculations systematically underestimate the VIP by 0.1 to 0.2 eV. We further verify that SOC effects may be well approximated at the FR density functional level and then added to SR G0W0 results for a broad class of systems. We also address the use of different root-finding algorithms for the G0W0 quasiparticle equation and the significant influence of including d electrons in the valence partition of the pseudopotential for G0W0 calculations. Lastly, we present statistical analyses of our data, highlighting the importance of separating definitive improvements from those that may occur by chance due to a limited number of samples. We suggest that the statistical analyses used here will be useful in the assessment of the accuracy of a large variety of electronic structure methods.

  6. Implementation and validation of fully relativistic GW calculations: Spin–orbit coupling in molecules, nanocrystals, and solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherpelz, Peter; Govoni, Marco; Hamada, Ikutaro

    We present an implementation of G0W0 calculations including spin–orbit coupling (SOC) enabling investigations of large systems, with thousands of electrons, and we discuss results for molecules, solids, and nanocrystals. Using a newly developed set of molecules with heavy elements (called GW-SOC81), we find that, when based upon hybrid density functional calculations, fully relativistic (FR) and scalar-relativistic (SR) G0W0 calculations of vertical ionization potentials (VIPs) both yield excellent performance compared to experiment, with errors below 1.9%. We demonstrate that while SR calculations have higher random errors, FR calculations systematically underestimate the VIP by 0.1 to 0.2 eV. We further verify that SOC effects may be well approximated at the FR density functional level and then added to SR G0W0 results for a broad class of systems. We also address the use of different root-finding algorithms for the G0W0 quasiparticle equation and the significant influence of including d electrons in the valence partition of the pseudopotential for G0W0 calculations. Lastly, we present statistical analyses of our data, highlighting the importance of separating definitive improvements from those that may occur by chance due to a limited number of samples. We suggest that the statistical analyses used here will be useful in the assessment of the accuracy of a large variety of electronic structure methods.

  7. Quantum behaviour of open pumped and damped Bose-Hubbard trimers

    NASA Astrophysics Data System (ADS)

    Chianca, C. V.; Olsen, M. K.

    2018-01-01

    We propose and analyse analogs of optical cavities for atoms using three-well inline Bose-Hubbard models with pumping and losses. With one well pumped and one damped, we find that both the mean-field dynamics and the quantum statistics show a qualitative dependence on the choice of damped well. The systems we analyse remain far from equilibrium, although most do enter a steady-state regime. We find quadrature squeezing, bipartite and tripartite inseparability and entanglement, and states exhibiting the EPR paradox, depending on the parameter regimes. We also discover situations where the mean-field solutions of our models are noticeably different from the quantum solutions for the mean fields. Due to recent experimental advances, it should be possible to demonstrate the effects we predict and investigate in this article.

  8. Data-driven fuel consumption estimation: A multivariate adaptive regression spline approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuche; Zhu, Lei; Gonder, Jeffrey

    Providing guidance and information to drivers to help them make fuel-efficient route choices remains an important and effective strategy in the near term to reduce fuel consumption from the transportation sector. One key component in implementing this strategy is a fuel-consumption estimation model. In this paper, we developed a mesoscopic fuel consumption estimation model that can be implemented into an eco-routing system. Our proposed model presents a framework that utilizes large-scale, real-world driving data, clusters road links by free-flow speed, and fits one statistical model for each cluster. The model includes predictor variables that were rarely or never considered before, such as free-flow speed and number of lanes. We applied the model to a real-world driving data set based on a global positioning system travel survey in the Philadelphia-Camden-Trenton metropolitan area. Results from the statistical analyses indicate that the independent variables we chose influence the fuel consumption rates of vehicles, but the magnitude and direction of the influences depend on the type of road link, specifically its free-flow speed. A statistical diagnostic is conducted to ensure the validity of the models and results. Although the real-world driving data we used to develop statistical relationships are specific to one region, the framework we developed can be easily adjusted and used to explore the fuel consumption relationship in other regions.

  9. Data-driven fuel consumption estimation: A multivariate adaptive regression spline approach

    DOE PAGES

    Chen, Yuche; Zhu, Lei; Gonder, Jeffrey; ...

    2017-08-12

    Providing guidance and information to drivers to help them make fuel-efficient route choices remains an important and effective strategy in the near term to reduce fuel consumption from the transportation sector. One key component in implementing this strategy is a fuel-consumption estimation model. In this paper, we developed a mesoscopic fuel consumption estimation model that can be implemented into an eco-routing system. Our proposed model presents a framework that utilizes large-scale, real-world driving data, clusters road links by free-flow speed, and fits one statistical model for each cluster. The model includes predictor variables that were rarely or never considered before, such as free-flow speed and number of lanes. We applied the model to a real-world driving data set based on a global positioning system travel survey in the Philadelphia-Camden-Trenton metropolitan area. Results from the statistical analyses indicate that the independent variables we chose influence the fuel consumption rates of vehicles, but the magnitude and direction of the influences depend on the type of road link, specifically its free-flow speed. A statistical diagnostic is conducted to ensure the validity of the models and results. Although the real-world driving data we used to develop statistical relationships are specific to one region, the framework we developed can be easily adjusted and used to explore the fuel consumption relationship in other regions.
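
    The overall structure of the model, clustering road links by free-flow speed and then fitting one statistical model per cluster, can be sketched as follows. Synthetic data and ordinary least squares stand in here for the real driving data and the multivariate adaptive regression splines of the title.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)

        # Synthetic road links: free-flow speed (mph), number of lanes, and a
        # fuel-consumption rate that responds differently on slow vs. fast links.
        n = 1_000
        ffs = rng.uniform(15, 75, n)
        lanes = rng.integers(1, 5, n).astype(float)
        fuel = np.where(ffs < 45, 0.12 - 0.001 * ffs, 0.05 + 0.0008 * ffs)
        fuel = fuel + 0.005 * lanes + rng.normal(0, 0.005, n)

        # Step 1: cluster links by free-flow speed.
        clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
            ffs.reshape(-1, 1))

        # Step 2: fit one model per cluster; the sign of the speed effect can
        # differ between clusters, as the paper reports.
        X = np.column_stack([ffs, lanes])
        for c in range(3):
            m = clusters == c
            coef = LinearRegression().fit(X[m], fuel[m]).coef_
            print(f"cluster {c}: speed {coef[0]:+.5f}, lanes {coef[1]:+.5f}")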

  10. Research Design and Statistical Methods in Indian Medical Journals: A Retrospective Survey

    PubMed Central

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S.; Mayya, Shreemathi S.

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were – study design types and their frequencies, error/defects proportion in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious ones were still present. Indian medical research seems to have made no major progress in the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are published quite rarely and have a high proportion of methodological problems. PMID:25856194

  11. Research design and statistical methods in Indian medical journals: a retrospective survey.

    PubMed

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were - study design types and their frequencies, error/defects proportion in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious ones were still present. Indian medical research seems to have made no major progress in the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are published quite rarely and have a high proportion of methodological problems.
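
    The two-proportion comparisons reported above are standard chi-square tests on 2x2 tables. As a quick check, the reported χ2=26.96 for the proportion of papers using statistical tests can be reproduced from the published counts (assuming no continuity correction):

        from scipy.stats import chi2_contingency

        # Papers using statistical tests: 250 of 588 (2003) vs. 439 of 774 (2013).
        table = [[250, 588 - 250],
                 [439, 774 - 439]]
        chi2, p, dof, _ = chi2_contingency(table, correction=False)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")  # chi2 = 26.96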

  12. Statistical Literacy in the Data Science Workplace

    ERIC Educational Resources Information Center

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  13. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported on data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
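
    One gap noted above, effect sizes reported without confidence intervals, is straightforward to close. The sketch below computes Cohen's d with a common normal-approximation interval; this is one of several available approximations, shown on simulated data.

        import numpy as np
        from scipy.stats import norm

        def cohens_d_ci(x1, x2, alpha=0.05):
            """Cohen's d with an approximate normal-theory confidence interval."""
            n1, n2 = len(x1), len(x2)
            sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) +
                          (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
            d = (np.mean(x1) - np.mean(x2)) / sp
            se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
            z = norm.ppf(1 - alpha / 2)
            return d, (d - z * se, d + z * se)

        rng = np.random.default_rng(3)
        d, ci = cohens_d_ci(rng.normal(0.4, 1, 60), rng.normal(0.0, 1, 60))
        print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")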

  14. The SPARC Intercomparison of Middle Atmosphere Climatologies

    NASA Technical Reports Server (NTRS)

    Randel, William; Fleming, Eric; Geller, Marvin; Gelman, Mel; Hamilton, Kevin; Karoly, David; Ortland, Dave; Pawson, Steve; Swinbank, Richard; Udelhofen, Petra

    2003-01-01

    Our current confidence in 'observed' climatological winds and temperatures in the middle atmosphere (over altitudes approx. 10-80 km) is assessed by detailed intercomparisons of contemporary and historic data sets. These data sets include global meteorological analyses and assimilations, climatologies derived from research satellite measurements, and historical reference atmosphere circulation statistics. We also include comparisons with historical rocketsonde wind and temperature data, and with more recent lidar temperature measurements. The comparisons focus on a few basic circulation statistics, such as temperature, zonal wind, and eddy flux statistics. Special attention is focused on tropical winds and temperatures, where large differences exist among separate analyses. Assimilated data sets provide the most realistic tropical variability, but substantial differences exist among current schemes.

  15. An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-11-01

    In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operations of the statistics engines that require explicit communication. We then evaluate this proposed scheme in a shared memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.

  16. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  17. IMPACT: a generic tool for modelling and simulating public health policy.

    PubMed

    Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E

    2011-01-01

    Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.

  18. Correlates of genetic monogamy in socially monogamous mammals: insights from Azara's owl monkeys

    PubMed Central

    Huck, Maren; Fernandez-Duque, Eduardo; Babb, Paul; Schurr, Theodore

    2014-01-01

    Understanding the evolution of mating systems, a central topic in evolutionary biology for more than 50 years, requires examining the genetic consequences of mating and the relationships between social systems and mating systems. Among pair-living mammals, where genetic monogamy is extremely rare, the extent of extra-group paternity rates has been associated with male participation in infant care, strength of the pair bond and length of the breeding season. This study evaluated the relationship between two of those factors and the genetic mating system of socially monogamous mammals, testing predictions that male care and strength of pair bond would be negatively correlated with rates of extra-pair paternity (EPP). Autosomal microsatellite analyses provide evidence for genetic monogamy in a pair-living primate with bi-parental care, the Azara's owl monkey (Aotus azarae). A phylogenetically corrected generalized least square analysis was used to relate male care and strength of the pair bond to their genetic mating system (i.e. proportions of EPP) in 15 socially monogamous mammalian species. The intensity of male care was correlated with EPP rates in mammals, while strength of pair bond failed to reach statistical significance. Our analyses show that, once social monogamy has evolved, paternal care, and potentially also close bonds, may facilitate the evolution of genetic monogamy. PMID:24648230

  19. Correlates of genetic monogamy in socially monogamous mammals: insights from Azara's owl monkeys.

    PubMed

    Huck, Maren; Fernandez-Duque, Eduardo; Babb, Paul; Schurr, Theodore

    2014-05-07

    Understanding the evolution of mating systems, a central topic in evolutionary biology for more than 50 years, requires examining the genetic consequences of mating and the relationships between social systems and mating systems. Among pair-living mammals, where genetic monogamy is extremely rare, the extent of extra-group paternity rates has been associated with male participation in infant care, strength of the pair bond and length of the breeding season. This study evaluated the relationship between two of those factors and the genetic mating system of socially monogamous mammals, testing predictions that male care and strength of pair bond would be negatively correlated with rates of extra-pair paternity (EPP). Autosomal microsatellite analyses provide evidence for genetic monogamy in a pair-living primate with bi-parental care, the Azara's owl monkey (Aotus azarae). A phylogenetically corrected generalized least square analysis was used to relate male care and strength of the pair bond to their genetic mating system (i.e. proportions of EPP) in 15 socially monogamous mammalian species. The intensity of male care was correlated with EPP rates in mammals, while strength of pair bond failed to reach statistical significance. Our analyses show that, once social monogamy has evolved, paternal care, and potentially also close bonds, may facilitate the evolution of genetic monogamy.

  20. Statistical analysis of the time and space characteristic scales for large precipitating systems in the equatorial, tropical, sahelian and mid-latitude regions.

    NASA Astrophysics Data System (ADS)

    Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya

    2017-04-01

    Precipitation plays a key role in the management of sustainable water resources and in flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and morphological spectra (i.e., probability distribution functions of the duration and horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This makes it possible to define two characteristic scales (duration and spatial extension) for the precipitating region embedded in the large mesoscale convective system (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Levy-stable distribution (power-law decrease), which does not permit the definition of a characteristic scale (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCSs can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Levy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Levy-stable behaviours of precipitating MCSs is compared with the results of the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test different parameterizations of deep convection in forecast models.
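
    The exponential-versus-power-law distinction drawn here can be screened for with a simple likelihood comparison. The sketch below uses synthetic durations and textbook maximum-likelihood fits, not the authors' spectral methodology.

        import numpy as np

        rng = np.random.default_rng(4)
        # Synthetic event durations (hours); drawn exponential, so the
        # exponential model should win the comparison.
        durations = rng.exponential(scale=6.0, size=2000)

        # Exponential model: the MLE scale is the sample mean.
        mean = durations.mean()
        ll_exp = (-np.log(mean) - durations / mean).sum()

        # Pareto (power-law) model on x >= xmin: standard Hill/MLE estimator.
        xmin = durations.min()
        alpha = len(durations) / np.log(durations / xmin).sum()
        ll_par = (np.log(alpha / xmin)
                  - (alpha + 1) * np.log(durations / xmin)).sum()

        # A positive difference favours the exponential (mid-latitude/Sahelian)
        # regime; a negative one favours the power law (tropical/equatorial).
        print(f"log-likelihood, exponential minus power law: {ll_exp - ll_par:+.1f}")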

  1. Empirically derived personality subtyping for predicting clinical symptoms and treatment response in bulimia nervosa.

    PubMed

    Haynos, Ann F; Pearson, Carolyn M; Utzinger, Linsey M; Wonderlich, Stephen A; Crosby, Ross D; Mitchell, James E; Crow, Scott J; Peterson, Carol B

    2017-05-01

    Evidence suggests that eating disorder subtypes reflecting under-controlled, over-controlled, and low psychopathology personality traits constitute reliable phenotypes that differentiate treatment response. This study is the first to use statistical analyses to identify these subtypes within treatment-seeking individuals with bulimia nervosa (BN) and to use these statistically derived clusters to predict clinical outcomes. Using variables from the Dimensional Assessment of Personality Pathology-Basic Questionnaire, K-means cluster analyses identified under-controlled, over-controlled, and low psychopathology subtypes within BN patients (n = 80) enrolled in a treatment trial. Generalized linear models examined the impact of personality subtypes on Eating Disorder Examination global score, binge eating frequency, and purging frequency cross-sectionally at baseline and longitudinally at end of treatment (EOT) and follow-up. In the longitudinal models, secondary analyses were conducted to examine personality subtype as a potential moderator of response to Cognitive Behavioral Therapy-Enhanced (CBT-E) or Integrative Cognitive-Affective Therapy for BN (ICAT-BN). There were no baseline clinical differences between groups. In the longitudinal models, personality subtype predicted binge eating (p = 0.03) and purging (p = 0.01) frequency at EOT and binge eating frequency at follow-up (p = 0.045). The over-controlled group demonstrated the best outcomes on these variables. In secondary analyses, there was a treatment by subtype interaction for purging at follow-up (p = 0.04), which indicated a superiority of CBT-E over ICAT-BN for reducing purging among the over-controlled group. Empirically derived personality subtyping appears to be a valid classification system with potential to guide eating disorder treatment decisions. (Int J Eat Disord 2017; 50:506-514. © 2016 Wiley Periodicals, Inc.)
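
    The subtyping step is a standard K-means clustering of personality-pathology scores. Below is a minimal sketch on simulated stand-ins for the DAPP-BQ scales; the number of scales and all values are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)

        # Stand-in for DAPP-BQ scale scores (rows = 80 patients, columns =
        # personality-pathology scales).
        scores = rng.normal(size=(80, 4))

        # Standardize, then partition into the three hypothesized subtypes:
        # under-controlled, over-controlled, and low psychopathology.
        z = StandardScaler().fit_transform(scores)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)

        print(np.bincount(labels))  # subtype sizes, to be profiled against scales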

  2. Pediatric patient safety events during hospitalization: approaches to accounting for institution-level effects.

    PubMed

    Slonim, Anthony D; Marcin, James P; Turenne, Wendy; Hall, Matt; Joseph, Jill G

    2007-12-01

    To determine the rates, patient, and institutional characteristics associated with the occurrence of patient safety indicators (PSIs) in hospitalized children, and the degree of statistical difference derived from using three approaches to controlling for institution-level effects. Pediatric Health Information System Dataset consisting of all pediatric discharges (<21 years of age) from 34 academic, freestanding children's hospitals for calendar year 2003. The rates of PSIs were computed for all discharges. The patient and institutional characteristics associated with these PSIs were calculated. The analyses sequentially applied three increasingly conservative methods to control for institution-level effects: robust standard error estimation, a fixed effects model, and a random effects model. The degree of difference from a "base state," which excluded institution-level variables, and between the models was calculated. The effects of these analyses on the interpretation of the PSIs are presented. PSIs are relatively infrequent events in hospitalized children, ranging from 0 per 10,000 (postoperative hip fracture) to 87 per 10,000 (postoperative respiratory failure). Significant variables associated with PSIs included age (neonates), race (Caucasians), payor status (public insurance), severity of illness (extreme), and hospital size (>300 beds), which all had higher rates of PSIs than their reference groups in the bivariable logistic regression results. The three different approaches to adjusting for institution-level effects demonstrated that there were similarities in both the clinical and statistical significance across each of the models. Institution-level effects can be appropriately controlled for by using a variety of methods in the analyses of administrative data. Whenever possible, resource-conservative methods should be used in the analyses, especially if clinical implications are minimal.
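
    The three approaches map onto familiar model specifications. The sketch below uses simulated discharge data and statsmodels; the variables, the event rate, and the linear-probability shortcut for the random-effects model are illustrative assumptions.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n = 5_000
        df = pd.DataFrame({
            "psi": rng.binomial(1, 0.05, n),       # hypothetical event rate
            "neonate": rng.binomial(1, 0.15, n),
            "public_ins": rng.binomial(1, 0.4, n),
            "hospital": rng.integers(0, 34, n),    # 34 children's hospitals
        })

        # 1) Pooled logit with standard errors clustered on hospital.
        m1 = smf.logit("psi ~ neonate + public_ins", data=df).fit(
            disp=0, cov_type="cluster", cov_kwds={"groups": df["hospital"]})

        # 2) Fixed effects: hospital indicators absorb institution effects.
        m2 = smf.logit("psi ~ neonate + public_ins + C(hospital)",
                       data=df).fit(disp=0)

        # 3) Random effects: hospital-level random intercept, here via a
        #    linear-probability mixed model for illustration only.
        m3 = smf.mixedlm("psi ~ neonate + public_ins", data=df,
                         groups=df["hospital"]).fit()

        print(m1.params["neonate"], m2.params["neonate"], m3.params["neonate"])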

  3. Comparison of the effects of Mitomycin-C and sodium hyaluronate/carboxymethylcellulose [NH/CMC] (Seprafilm) on abdominal adhesions.

    PubMed

    Ozerhan, Ismail Hakkı; Urkan, Murat; Meral, Ulvi Mehmet; Unlu, Aytekin; Ersöz, Nail; Demirag, Funda; Yagci, Gokhan

    2016-01-01

    Intra-abdominal adhesions (IAs) may occur after abdominal surgery and may lead to complications such as infertility, intestinal obstruction, and chronic pain. The aim of this study was to compare the effects of Mitomycin-C (MM-C) and sodium hyaluronate/carboxymethylcellulose [NH/CMC] (Seprafilm) on abdominal adhesions in a cecal abrasion model and to investigate the toxicity of MM-C through complete blood count (CBC) and bone marrow analyses. The study comprised forty rats in four groups (Control, Sham, Cecal abrasion + MM-C, and Cecal abrasion + NH/CMC). On postoperative day 21, all rats except for the control (CBC + femur resection) group were sacrificed. Macroscopic and histopathological evaluations of abdominal adhesions were performed. To elucidate the side effects of MM-C, CBC analyses and femur resections were performed to examine bone marrow cellularity. CBC analyses and bone marrow cellularity assessment revealed no statistically significant differences between the MM-C, NH/CMC, and control groups. No significant differences in inflammation scores were observed between the groups. The MM-C group had significantly lower fibrosis scores compared to the NH/CMC and sham groups. Although the adhesion scores were lower in the MM-C group, the differences were not statistically significant. Despite its potential for systemic toxicity, MM-C may show some anti-fibrotic and anti-adhesive effects. MM-C is a promising agent for the prevention of IAs, and further trials are warranted to study its efficacy.

  4. Manual vs. computer-assisted sperm analysis: can CASA replace manual assessment of human semen in clinical practice?

    PubMed

    Talarczyk-Desole, Joanna; Berger, Anna; Taszarek-Hauke, Grażyna; Hauke, Jan; Pawelczyk, Leszek; Jedrzejczak, Piotr

    2017-01-01

    The aim of the study was to check the quality of a computer-assisted sperm analysis (CASA) system against the reference manual method, as well as the standardization of computer-assisted semen assessment. The study was conducted between January and June 2015 at the Andrology Laboratory of the Division of Infertility and Reproductive Endocrinology, Poznań University of Medical Sciences, Poland. The study group consisted of 230 men who gave sperm samples for the first time in our center as part of an infertility investigation. The samples underwent manual and computer-assisted assessment of concentration, motility, and morphology. A total of 184 samples were examined twice: manually, according to the 2010 WHO recommendations, and with CASA, using the program settings provided by the manufacturer. Additionally, 46 samples underwent two manual analyses and two computer-assisted analyses. A p-value of less than 0.05 was considered statistically significant. Statistically significant differences between CASA and manual measurements were found for all of the investigated sperm parameters except non-progressive motility. In the group of patients for whom each method was performed twice on the same sample, we found no significant differences between the two assessments, whether the samples were analyzed manually or with CASA, although the standard deviation was higher in the CASA group. Our results suggest that computer-assisted sperm analysis requires further improvement for a wider application in clinical practice.

  5. Exploring longitudinal course and treatment-baseline severity interactions in secondary outcomes of smoking cessation treatment in individuals with attention-deficit hyperactivity disorder.

    PubMed

    Luo, Sean X; Wall, Melanie; Covey, Lirio; Hu, Mei-Chen; Scodes, Jennifer M; Levin, Frances R; Nunes, Edward V; Winhusen, Theresa

    2018-01-25

    A double-blind, placebo-controlled randomized trial (NCT00253747) evaluating osmotic-release oral system methylphenidate (OROS-MPH) for smoking cessation revealed a significant interaction effect in which participants with higher baseline ADHD severity had better abstinence outcomes with OROS-MPH, while participants with lower baseline ADHD severity had worse outcomes. This report examines secondary outcomes that might bear on the mechanism for this differential treatment effect. Longitudinal analyses were conducted to evaluate the effect of OROS-MPH on three secondary outcomes (ADHD symptom severity, nicotine craving, and withdrawal) in the total sample (N = 255, 56% male) and in the high (N = 134) and low (N = 121) baseline ADHD severity groups. OROS-MPH significantly improved ADHD symptoms and nicotine withdrawal symptoms in the total sample, and exploratory analyses showed that in both the higher and lower baseline severity groups, OROS-MPH statistically significantly improved these two outcomes. No effect on craving overall was detected, though exploratory analyses showed statistically significantly decreased craving in the high ADHD severity participants on OROS-MPH. No treatment by ADHD baseline severity interaction was detected for these outcomes. Methylphenidate improved secondary outcomes during smoking cessation independent of baseline ADHD severity, with no evident treatment-baseline severity interaction. Our results suggest that the divergent responses to smoking cessation treatment in the higher and lower severity groups cannot be explained by a concordant divergence in craving, withdrawal, and ADHD symptom severity, and alternative hypotheses may need to be identified.

  6. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the July 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, Tau; Holley, D.

    2010-01-01

    The following report gives the statistical findings of the July 2010 TMSL Bar results. Procedures: Data is pre-existing and was given to the Evaluator by email from the Registrar and Dean. Statistical analyses were run using SPSS 17 to address the following research questions: 1. What are the statistical descriptors of the July 2010 overall TMSL…

  7. Traffic accidents on expressways: new threat to China.

    PubMed

    Zhao, Jinbao; Deng, Wei

    2012-01-01

    As China is building one of the largest expressway systems in the world, expressway safety problems have become a serious concern. This article analyzed the trends in expressway accidents in China from 1995 to 2010 and examined the characteristics of these accidents. Expressway accident data were obtained from the Annual Report for Road Traffic Accidents published by the Ministry of Public Security of China. Expressway mileage data were obtained from the National Statistics Yearbook published by the National Bureau of Statistics of China. Descriptive statistical analyses were conducted based on these data. Expressway deaths increased 10.2-fold, from 616 persons in 1995 to 6300 persons in 2010, an average annual increase of 17.9 percent over the 15-year period, compared with an average annual change of -0.33 percent for other road traffic deaths. China's expressway mileage accounted for only 1.85 percent of highway mileage in 2010, but expressway deaths made up 13.54 percent of highway traffic deaths. The average annual accident lethality rate [accident deaths/(accident deaths + accident injuries)] on China's expressways was 27.76 percent during the period 1995 to 2010, 1.33 times the lethality rate of highway traffic accidents overall. China's government should pay attention to expressway construction and safety interventions during this period of rapid expressway development. Related causes, such as geographic patterns, speeding, weather conditions, and traffic flow composition, need to be studied in the near future. An effective and scientific expressway safety management services system, composed of a speed monitoring system, a warning system, and an emergency rescue system, should be established in developed and underdeveloped provinces in China to improve expressway safety.

  8. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  9. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
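
    The five pattern-detection procedures correspond to standard tests. Below is a compact sketch for one sampled input x against one model output y; Levene's test stands in for the variance-trend step, which the report defines via variances and interquartile ranges.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        x = rng.uniform(0, 1, 300)
        y = np.sin(3 * x) + rng.normal(0, 0.2, 300)    # stand-in model output

        r, p_lin = stats.pearsonr(x, y)      # (1) linear relationship
        rho, p_mono = stats.spearmanr(x, y)  # (2) monotonic relationship

        # Bin x into five classes and test across bins.
        edges = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
        groups = [y[np.digitize(x, edges) == k] for k in range(5)]
        h, p_loc = stats.kruskal(*groups)    # (3) trend in central tendency
        w, p_var = stats.levene(*groups)     # (4) trend in variability

        # (5) deviation from randomness: chi-square on a 5x5 grid of (x, y) cells.
        yedges = np.quantile(y, [0.2, 0.4, 0.6, 0.8])
        grid = np.zeros((5, 5))
        for i, j in zip(np.digitize(x, edges), np.digitize(y, yedges)):
            grid[i, j] += 1
        chi2, p_rand, _, _ = stats.chi2_contingency(grid)

        print(p_lin, p_mono, p_loc, p_var, p_rand)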

  10. A canonical neural mechanism for behavioral variability

    PubMed Central

    Darshan, Ran; Wood, William E.; Peters, Susan; Leblois, Arthur; Hansel, David

    2017-01-01

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5–6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these ‘universal' statistics. PMID:28530225

  11. A psychometric evaluation of the Rorschach comprehensive system's perceptual thinking index.

    PubMed

    Dao, Tam K; Prevatt, Frances

    2006-04-01

    In this study, we investigated evidence for reliability and validity of the Perceptual Thinking Index (PTI; Exner, 2000a, 2000b) among an adult inpatient population. We conducted reliability and validity analyses on 107 patients who met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; American Psychiatric Association, 2000) criteria for a schizophrenia-spectrum disorder (SSD) or mood disorder with no psychotic features (MD). Results provided support for interrater reliability as well as internal consistency of the PTI. Furthermore, the PTI was an effective index in differentiating SSD patients from patients diagnosed with an MD. Finally, the PTI demonstrated adequate diagnostic statistics that can be useful in the classification of patients diagnosed with SSD and MD. We discuss methodological issues, implications for assessment practice, and directions for future research.

  12. Neutralising antibody titration in 25,000 sera of dogs and cats vaccinated against rabies in France, in the framework of the new regulations that offer an alternative to quarantine.

    PubMed

    Cliquet, F; Verdier, Y; Sagné, L; Aubert, M; Schereffer, J L; Selve, M; Wasniewski, M; Servat, A

    2003-12-01

    Regulations governing international movements of domestic carnivores from rabies-infected to rabies-free countries have recently been loosened, with the adoption of a system that combines vaccination against rabies and serological surveillance (neutralising antibody titration test with a threshold of 0.5 IU/ml). Since 1993, the Research Laboratory for Rabies and Wild Animal Pathology in Nancy, France, has analysed over 25,000 sera from dogs and cats using a viral seroneutralisation technique. The statistical analyses performed during this time show that cats respond better than dogs. Although no significant difference in titres was observed between primovaccinated and repeat-vaccinated cats, repeat-vaccinated dogs had titres above 0.5 IU/ml more frequently. In primovaccinated dogs, monovalent vaccines offered a better serological conversion rate than multivalent ones. Finally, the results of these analyses showed a strong correlation between antibody titres and the time that elapsed between the last vaccination and the blood sampling.

  13. Peripheral vascular damage in systemic lupus erythematosus: data from LUMINA, a large multi-ethnic U.S. cohort (LXIX).

    PubMed

    Burgos, P I; Vilá, L M; Reveille, J D; Alarcón, G S

    2009-12-01

    To determine the factors associated with peripheral vascular damage in systemic lupus erythematosus patients and its impact on survival, from Lupus in Minorities, Nature versus Nurture (LUMINA), a longitudinal US multi-ethnic cohort. Peripheral vascular damage was defined by the Systemic Lupus International Collaborating Clinics Damage Index (SDI). Factors associated with peripheral vascular damage were examined by univariable and multi-variable logistic regression models, and its impact on survival by a Cox multi-variable regression. Thirty-four (5.3%) of 637 patients (90% women; mean [SD] age 36.5 [12.6] years; range 16-87 years) developed peripheral vascular damage. Age and the SDI (without peripheral vascular damage) were statistically significant (odds ratio [OR] = 1.05, 95% confidence interval [CI] 1.01-1.08; P = 0.0107 and OR = 1.30, 95% CI 1.09-1.56; P = 0.0043, respectively) in multi-variable analyses. Azathioprine, warfarin, and statins were also statistically significant, and glucocorticoid use was borderline statistically significant (OR = 1.03, 95% CI 1.00-1.06; P = 0.0975). In the survival analysis, peripheral vascular damage was independently associated with diminished survival (hazard ratio = 2.36; 95% CI 1.07-5.19; P = 0.0334). In short, age was independently associated with peripheral vascular damage, but so were the presence of damage in other organs (ocular, neuropsychiatric, renal, cardiovascular, pulmonary, musculoskeletal, and integument) and some medications (probably reflecting more severe disease). Peripheral vascular damage also negatively affected survival.

  14. HyphArea--automated analysis of spatiotemporal fungal patterns.

    PubMed

    Baum, Tobias; Navarro-Quezada, Aura; Knogge, Wolfgang; Douchkov, Dimitar; Schweizer, Patrick; Seiffert, Udo

    2011-01-01

    In phytopathology, quantitative measurements are rarely used to assess crop plant disease symptoms. Instead, qualitative valuation by eye is often the method of choice. In order to close the gap between subjective human inspection and objective quantitative results, an automated analysis system was developed that is capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images. This system should enable the efficient screening of different host-pathogen combinations (e.g., barley-Blumeria graminis, barley-Rhynchosporium secalis) using different microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data that achieved good results with several microscope imaging protocols. Furthermore, adaptability towards different host-pathogen systems was obtained by using a classification that is based on a genetic algorithm. The developed software system was named HyphArea, since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. A typical use case demonstrates the utilization and basic properties of HyphArea. It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Copyright © 2010 Elsevier GmbH. All rights reserved.
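
    The basic quantification task, measuring the image area covered by a colony, reduces to segmentation plus pixel counting. The sketch below uses generic Otsu thresholding on a synthetic image as a stand-in for HyphArea's own segmentation and classification algorithms.

        import numpy as np
        from skimage.filters import gaussian, threshold_otsu

        rng = np.random.default_rng(8)

        # Synthetic micrograph: a bright circular "colony" on a darker,
        # noisy background.
        img = rng.normal(0.2, 0.05, (256, 256))
        yy, xx = np.mgrid[:256, :256]
        img[(yy - 128) ** 2 + (xx - 128) ** 2 < 60 ** 2] += 0.5
        img = gaussian(img, sigma=2)

        # Segment and report the area covered by the colony.
        mask = img > threshold_otsu(img)
        print(f"colony area: {mask.sum()} px ({100 * mask.mean():.1f}% of image)")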

  15. Coordinate based random effect size meta-analysis of neuroimaging studies.

    PubMed

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and coordinate-based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here, a method of performing coordinate-based random-effect-size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both the coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analyses of reported effects performed cluster-wise using standard statistical methods, taking account of the censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimuli in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
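
    The full ClusterZ algorithm handles censoring and cluster formation, which is beyond a short sketch; the fragment below only illustrates the random-effects pooling step that such a cluster-wise meta-analysis rests on, using a standard DerSimonian-Laird estimator with invented effect sizes and variances.

```python
# Minimal DerSimonian-Laird random-effects pooling, of the kind applied
# cluster-wise in CBMA tools such as ClusterZ (illustrative, not the
# published algorithm; all numbers are invented).
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes under a random-effects model."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)             # fixed-effect estimate
    q = np.sum(w * (y - fixed) ** 2)              # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

# e.g. effects standardised from reported Z scores and subject counts
pooled, se, tau2 = dersimonian_laird([0.45, 0.60, 0.30, 0.52],
                                     [0.02, 0.05, 0.03, 0.04])
print(f"pooled={pooled:.3f}, SE={se:.3f}, tau^2={tau2:.3f}")
```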

  16. Implementing a Web-Based Decision Support System to Spatially and Statistically Analyze Ecological Conditions of the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Nguyen, A.; Mueller, C.; Brooks, A. N.; Kislik, E. A.; Baney, O. N.; Ramirez, C.; Schmidt, C.; Torres-Perez, J. L.

    2014-12-01

    The Sierra Nevada is experiencing changes in hydrologic regimes, such as decreases in snowmelt and peak runoff, which affect forest health and the availability of water resources. Currently, the USDA Forest Service Region 5 is undergoing Forest Plan revisions to include climate change impacts into mitigation and adaptation strategies. However, there are few processes in place to conduct quantitative assessments of forest conditions in relation to mountain hydrology, while easily and effectively delivering that information to forest managers. To assist the USDA Forest Service, this study is the final phase of a three-term project to create a Decision Support System (DSS) to allow ease of access to historical and forecasted hydrologic, climatic, and terrestrial conditions for the entire Sierra Nevada. This data is featured within three components of the DSS: the Mapping Viewer, Statistical Analysis Portal, and Geospatial Data Gateway. Utilizing ArcGIS Online, the Sierra DSS Mapping Viewer enables users to visually analyze and locate areas of interest. Once the areas of interest are targeted, the Statistical Analysis Portal provides subbasin level statistics for each variable over time by utilizing a recently developed web-based data analysis and visualization tool called Plotly. This tool allows users to generate graphs and conduct statistical analyses for the Sierra Nevada without the need to download the dataset of interest. For more comprehensive analysis, users are also able to download datasets via the Geospatial Data Gateway. The third phase of this project focused on Python-based data processing, the adaptation of the multiple capabilities of ArcGIS Online and Plotly, and the integration of the three Sierra DSS components within a website designed specifically for the USDA Forest Service.
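
    As a rough illustration of the portal's Plotly layer (not the project's actual code), the sketch below plots a made-up subbasin time series and writes it to a self-contained HTML page of the kind a web viewer can embed.

```python
# Illustrative only: a minimal Plotly chart of the kind the Statistical
# Analysis Portal might expose for a subbasin statistic. The variable,
# trend and file name are invented for the example.
import plotly.graph_objects as go

years = list(range(1980, 2011))
snowmelt = [120 - 0.8 * (y - 1980) for y in years]  # synthetic declining trend

fig = go.Figure(go.Scatter(x=years, y=snowmelt, mode="lines+markers",
                           name="April snowmelt (mm)"))
fig.update_layout(title="Example subbasin statistic over time",
                  xaxis_title="Year", yaxis_title="Snowmelt (mm)")
fig.write_html("subbasin_stats.html")  # viewable without downloading the dataset
```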

  17. Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials

    PubMed Central

    Dwan, Kerry; Altman, Douglas G.; Clarke, Mike; Gamble, Carrol; Higgins, Julian P. T.; Sterne, Jonathan A. C.; Williamson, Paula R.; Kirkham, Jamie J.

    2014-01-01

    Background Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs). Methods and Findings A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively. Conclusions Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies. Please see later in the article for the Editors' Summary PMID:24959719

  18. CERN experience and strategy for the maintenance of cryogenic plants and distribution systems

    NASA Astrophysics Data System (ADS)

    Serio, L.; Bremer, J.; Claudet, S.; Delikaris, D.; Ferlin, G.; Pezzetti, M.; Pirotte, O.; Tavian, L.; Wagner, U.

    2015-12-01

    CERN operates and maintains the world's largest cryogenic infrastructure, ranging from ageing installations feeding detectors, test facilities and general services, to the state-of-the-art cryogenic system serving the flagship LHC machine complex. After several years of exploitation of a wide range of cryogenic installations, and in particular following the major shutdown of the last two years to maintain and consolidate the LHC machine, we have analysed and reviewed the maintenance activities in order to implement efficient and reliable exploitation of the installations. We report the results, statistics and lessons learned from the maintenance activities performed, and in particular the required consolidations and major overhauls, the organization, management and methodologies implemented.

  19. Impact of satellite-based data on FGGE general circulation statistics

    NASA Technical Reports Server (NTRS)

    Salstein, David A.; Rosen, Richard D.; Baker, Wayman E.; Kalnay, Eugenia

    1987-01-01

    The NASA Goddard Laboratory for Atmospheres (GLA) analysis/forecast system was run in two different parallel modes in order to evaluate the influence that data from satellites and other FGGE observation platforms can have on analyses of the large-scale circulation; in the first mode, data from all observation systems were used, while in the second only conventional upper-air and surface reports were used. The GLA model was also integrated for the same period without insertion of any data, and an independent objective analysis based only on rawinsonde and pilot balloon data was also performed. A small decrease in the vigor of the general circulation is noted to follow from the inclusion of satellite observations.

  20. Contrail Tracking and ARM Data Product Development

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Russell, James, III

    2005-01-01

    A contrail tracking system was developed to help in the assessment of the effect of commercial jet contrails on the Earth's radiative budget. The tracking system was built by combining meteorological data from the Rapid Update Cycle (RUC) numerical weather prediction model with commercial air traffic flight track data and satellite imagery. A statistical contrail-forecasting model was created from a combination of surface-based contrail observations and numerical weather analyses and forecasts. This model allows predictions of widespread contrail occurrences for contrail research on either a real-time basis or over long-term time scales. Satellite-derived cirrus cloud properties in polluted and unpolluted regions were compared to determine the impact of air traffic on cirrus.

  1. Distinguishing advective and powered motion in self-propelled colloids

    NASA Astrophysics Data System (ADS)

    Byun, Young-Moo; Lammert, Paul E.; Hong, Yiying; Sen, Ayusman; Crespi, Vincent H.

    2017-11-01

    Self-powered motion in catalytic colloidal particles provides a compelling example of active matter, i.e. systems that engage in single-particle and collective behavior far from equilibrium. The long-time, long-distance behavior of such systems is of particular interest, since it connects their individual micro-scale behavior to macro-scale phenomena. In such analyses, it is important to distinguish motion due to subtle advective effects—which also has long time scales and length scales—from long-timescale phenomena that derive from intrinsically powered motion. Here, we develop a methodology to analyze the statistical properties of the translational and rotational motions of powered colloids to distinguish, for example, active chemotaxis from passive advection by bulk flow.
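
    A common ingredient of such analyses, though not necessarily the authors' exact procedure, is the translational mean-squared displacement: in two dimensions an actively driven particle follows MSD(t) ≈ 4Dt + v²t² at short lag times, and the quadratic term captures directed motion, whether powered or advective, which is exactly why the two are hard to separate. A synthetic-data sketch:

```python
# Hedged illustration: compute a 2-D mean-squared displacement and fit
# MSD(t) = 4*D*t + v^2*t^2. The trajectory is synthetic, not real colloid data.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.1, 2000
diff_coef, v_drift = 0.5, 0.0          # set v_drift > 0 to mimic advection
steps = rng.normal(0.0, np.sqrt(2 * diff_coef * dt), size=(n, 2))
steps[:, 0] += v_drift * dt            # uniform drift along x, if any
xy = np.cumsum(steps, axis=0)

def msd(traj, max_lag):
    lags = np.arange(1, max_lag)
    values = [np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1)) for k in lags]
    return lags * dt, np.array(values)

t, m = msd(xy, 50)
c2, c1, c0 = np.polyfit(t, m, 2)       # fit m(t) ~ c2*t^2 + c1*t + c0
print("diffusive part 4D ~", round(c1, 3),
      "| directed part v^2 ~", round(max(c2, 0.0), 3))
```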

  2. Decoding the Nature of Emotion in the Brain.

    PubMed

    Kragel, Philip A; LaBar, Kevin S

    2016-06-01

    A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This 'new look' into emotion representation promises to improve and reformulate neurobiological models of affect. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Decoding the Nature of Emotion in the Brain

    PubMed Central

    Kragel, Philip A.; LaBar, Kevin S.

    2016-01-01

    A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This ‘new look’ into emotion representation promises to improve and reformulate neurobiological models of affect. PMID:27133227

  4. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.

    2017-01-01

    Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…

  5. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  6. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group.

    PubMed

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T; Pereira, Carol; Rosenkranz, Susan L; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu Jeanne; Wang, Rui; Lok, Judith; Evans, Scott R

    2017-03-15

    The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  7. P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.

    PubMed

    Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D

    2017-11-01

    P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access and the capability to analyze multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 American Association for Cancer Research.

  8. "TNOs are Cool": A survey of the trans-Neptunian region. XIII. Statistical analysis of multiple trans-Neptunian objects observed with Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.

    2017-11-01

    Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, whereas this characteristic is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse parameters of multiple trans-Neptunian systems observed with the Herschel and Spitzer space telescopes. In particular, statistical analysis is done for radiometric size and geometric albedo, obtained from photometric observations, and for estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with available values of the absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the 2-sample Anderson-Darling non-parametric statistical method for testing whether two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of estimated parameters together with lack of data are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs strongly correlates with heliocentric orbital inclination and with the magnitude difference between the components of a binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. Furthermore, the statistical test indicates, although not significantly given the sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
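
    Both named tests are available in SciPy; a hedged sketch with invented placeholder values (not the Herschel/Spitzer data) might look like:

```python
# Sketch of the two non-parametric tests named in the abstract, using SciPy.
# All diameter and inclination values below are invented placeholders.
import numpy as np
from scipy import stats

binary_diam = np.array([110., 250., 340., 520., 700., 950.])    # km, illustrative
single_diam = np.array([90., 130., 180., 240., 410., 620., 880.])

# 2-sample Anderson-Darling: do the two samples share a parent distribution?
ad = stats.anderson_ksamp([binary_diam, single_diam])
print("A-D statistic:", ad.statistic,
      "approx. p:", ad.significance_level)   # SciPy caps p to [0.001, 0.25]

# Spearman rank correlation, e.g. diameter vs. orbital inclination
inclination = np.array([3.1, 7.8, 12.0, 15.5, 19.2, 24.0])      # deg, illustrative
rho, p = stats.spearmanr(binary_diam, inclination)
print("Spearman rho:", rho, "p:", p)
```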

  9. Ratio index variables or ANCOVA? Fisher's cats revisited.

    PubMed

    Tu, Yu-Kang; Law, Graham R; Ellison, George T H; Gilthorpe, Mark S

    2010-01-01

    Over 60 years ago Ronald Fisher demonstrated a number of potential pitfalls with statistical analyses using ratio variables. Nonetheless, these pitfalls are largely overlooked in contemporary clinical and epidemiological research, which routinely uses ratio variables in statistical analyses. This article aims to demonstrate how very different findings can be generated as a result of less than perfect correlations among the data used to generate ratio variables. These imperfect correlations result from measurement error and random biological variation. While the former can often be reduced by improvements in measurement, random biological variation is difficult to estimate and eliminate in observational studies. Moreover, wherever the underlying biological relationships among epidemiological variables are unclear, and hence the choice of statistical model is also unclear, the different findings generated by different analytical strategies can lead to contradictory conclusions. Caution is therefore required when interpreting analyses of ratio variables whenever the underlying biological relationships among the variables involved are unspecified or unclear. (c) 2009 John Wiley & Sons, Ltd.
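
    The core pitfall is easy to reproduce numerically: two independent quantities become correlated once each is divided by a shared, noisy denominator. A minimal simulation, with arbitrary distributions:

```python
# Fisher's ratio-variable pitfall in miniature: x and y are independent,
# yet x/z and y/z correlate because they share the noisy denominator z.
# Distributions are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(100, 10, n)            # e.g. one organ measurement
y = rng.normal(50, 5, n)              # independent of x
z = rng.uniform(10, 30, n)            # shared denominator, e.g. body mass

print("corr(x, y)     =", round(np.corrcoef(x, y)[0, 1], 3))          # ~ 0
print("corr(x/z, y/z) =", round(np.corrcoef(x / z, y / z)[0, 1], 3))  # clearly positive
```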

  10. Crop Species Diversity Changes in the United States: 1978–2012

    PubMed Central

    Aguilar, Jonathan; Gramig, Greta G.; Hendrickson, John R.; Archer, David W.; Forcella, Frank; Liebig, Mark A.

    2015-01-01

    Anecdotal accounts regarding reduced US cropping system diversity have raised concerns about negative impacts of increasingly homogeneous cropping systems. However, formal analyses to document such changes are lacking. Using US Agriculture Census data, which are collected every five years, we quantified crop species diversity from 1978 to 2012 for the contiguous US on a county-level basis. We used Shannon diversity indices expressed as effective number of crop species (ENCS) to quantify crop diversity. We then evaluated changes in county-level crop diversity both nationally and for each of the eight Farm Resource Regions developed by the National Agriculture Statistics Service. During the 34 years we considered in our analyses, both national and regional ENCS changed. Nationally, crop diversity was lower in 2012 than in 1978. However, our analyses also revealed interesting trends between and within different Resource Regions. Overall, the Heartland Resource Region had the lowest crop diversity whereas the Fruitful Rim and Northern Crescent had the highest. In contrast to the other Resource Regions, the Mississippi Portal had significantly higher crop diversity in 2012 than in 1978. Also, within regions there were differences between counties in crop diversity. Spatial autocorrelation revealed clustering of low and high ENCS, and this trend became stronger over time. These results show that, nationally, counties have been clustering into areas of either low diversity or high diversity. Moreover, a significant trend of more counties shifting to lower rather than to higher crop diversity was detected. The clustering and shifting demonstrate a trend toward crop diversity loss and attendant homogenization of agricultural production systems, which could have far-reaching consequences for the provision of ecosystem services associated with agricultural systems as well as for food system sustainability. PMID:26308552
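
    The metric itself is compact: ENCS is the exponential of the Shannon entropy of crop area shares. A sketch with invented county acreages:

```python
# Effective number of crop species, ENCS = exp(H), where H is the Shannon
# entropy of crop area shares. Acreage figures are invented for illustration.
import numpy as np

def effective_number_of_species(areas):
    p = np.asarray(list(areas), dtype=float)
    p = p[p > 0] / p.sum()                 # area shares, zeros dropped
    shannon_h = -np.sum(p * np.log(p))     # Shannon entropy
    return np.exp(shannon_h)

county_acres = {"corn": 52000, "soybean": 48000, "wheat": 9000, "alfalfa": 4000}
print("ENCS:", round(effective_number_of_species(county_acres.values()), 2))
# A county planting k crops in equal shares would score exactly k.
```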

  11. Satellite disintegration dynamics

    NASA Technical Reports Server (NTRS)

    Dasenbrock, R. R.; Kaufman, B.; Heard, W. B.

    1975-01-01

    The subject of satellite disintegration is examined in detail. Elements of the orbits of individual fragments, determined by DOD space surveillance systems, are used to accurately predict the time and place of fragmentation. Both time-independent and time-dependent analyses are performed for simulated and real breakups. Methods of statistical mechanics are used to study the evolution of the fragment clouds. The fragments are treated as an ensemble of non-interacting particles. A solution of Liouville's equation is obtained which enables the spatial density to be calculated as a function of position, time and initial velocity distribution.

  12. Dissortativity and duplications in oral cancer

    NASA Astrophysics Data System (ADS)

    Shinde, Pramod; Yadav, Alok; Rai, Aparna; Jalan, Sarika

    2015-08-01

    More than 300,000 new cases of oral cancer are diagnosed worldwide annually. The complexity of oral cancer renders designing drug targets very difficult. We analyse the protein-protein interaction networks for normal and oral cancer tissue and detect crucial changes in the structural properties of the networks in terms of the interactions of the hub proteins and the degree-degree correlations. Further analysis of the spectra of both networks, while exhibiting universal statistical behaviour, manifests a distinction in terms of zero degeneracy, providing insight into the complexity of the underlying system.
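
    Degree-degree correlation (assortativity) of a network is a one-line computation in NetworkX; the sketch below uses a toy scale-free graph as a stand-in for the protein-protein interaction data:

```python
# Hedged sketch of the degree-degree correlation check described in the
# abstract, on a toy graph rather than the actual PPI networks.
import networkx as nx

g = nx.barabasi_albert_graph(300, 2, seed=42)    # stand-in for a PPI network
r = nx.degree_assortativity_coefficient(g)
print("degree assortativity r =", round(r, 3))   # r < 0 indicates dissortativity

# Hub inspection: the highest-degree nodes whose interactions the study tracks
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:5]
print("top hubs (node, degree):", hubs)
```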

  13. Excellence through Special Education? Lessons from the Finnish School Reform

    NASA Astrophysics Data System (ADS)

    Kivirauma, Joel; Ruoho, Kari

    2007-05-01

    The present article focuses on connections between part-time special education and the good results of Finnish students in PISA studies. After a brief summary of the comprehensive school system and special education in Finland, PISA results are analysed. The analysis shows that the relative amount of special education targeted at language problems is highest in Finland among those countries from which comparative statistics are available. The writers argue that this preventive language-oriented part-time special education is an important factor behind the good PISA results.

  14. SPS market analysis

    NASA Astrophysics Data System (ADS)

    Goff, H. C.

    1980-05-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  15. A software platform for statistical evaluation of patient respiratory patterns in radiation therapy.

    PubMed

    Dunn, Leon; Kenny, John

    2017-10-01

    The aim of this work was to design and evaluate a software tool for analysis of a patient's respiration, with the goal of optimizing the effectiveness of motion management techniques during radiotherapy imaging and treatment. A software tool was developed to analyse patient respiratory data files (.vxp files) created by the Varian Real-Time Position Management (RPM) system. The software, called RespAnalysis, was created in MATLAB and provides four modules, one each for determining respiration characteristics, providing breathing coaching (biofeedback training), comparing pre- and post-training characteristics and performing a fraction-by-fraction assessment. The modules analyse respiratory traces to determine signal characteristics and specifically use a Sample Entropy algorithm as the key means to quantify breathing irregularity. Simulated respiratory signals, as well as 91 patient RPM traces, were analysed with RespAnalysis to test the viability of using Sample Entropy for predicting breathing regularity. Retrospective assessment of patient data demonstrated that the Sample Entropy metric was a predictor of periodic irregularity in respiration data; however, it was found to be insensitive to amplitude variation. Additional waveform statistics assessing the distribution of signal amplitudes over time, coupled with the Sample Entropy method, were found to be useful in assessing breathing regularity. The RespAnalysis software tool presented in this work uses the Sample Entropy method to analyse patient respiratory data recorded for motion management purposes in radiation therapy. This is applicable during treatment simulation and during subsequent treatment fractions, providing a way to quantify breathing irregularity as well as assess the need for breathing coaching. It was demonstrated that the Sample Entropy metric was correlated with the irregularity of the patient's respiratory motion in terms of periodicity, whilst other metrics, such as percentage deviation of inhale/exhale peak positions, provided insight into respiratory amplitude regularity. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
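
    Sample Entropy itself (Richman and Moorman, 2000) is short enough to sketch; the reference implementation below is generic Python and may differ in details (tolerance choice, template handling) from the authors' MATLAB code:

```python
# Rough reference implementation of Sample Entropy: the negative log of the
# conditional probability that sequences matching for m points also match
# at m+1 points, within tolerance r. Higher values = more irregular signal.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)                 # tolerance as a fraction of SD
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates) - 1):  # pairs i < j, no self-matches
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

t = np.arange(0, 60, 0.1)
regular = np.sin(2 * np.pi * 0.25 * t)       # steady 15 breaths/min, synthetic
irregular = regular + np.random.default_rng(3).normal(0, 0.4, t.size)
print("SampEn regular:  ", round(sample_entropy(regular), 3))
print("SampEn irregular:", round(sample_entropy(irregular), 3))
```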

  16. S2O - A software tool for integrating research data from general purpose statistic software into electronic data capture systems.

    PubMed

    Bruland, Philipp; Dugas, Martin

    2017-01-07

    Data capture for clinical registries or pilot studies is often performed in spreadsheet-based applications like Microsoft Excel or IBM SPSS. Usually, data are transferred into statistics software, such as SAS, R or IBM SPSS Statistics, for analysis afterwards. Spreadsheet-based solutions suffer from several drawbacks: it is generally not possible to ensure sufficient rights and role management, and it is not traced who changed which data, when, and why. Therefore, such systems are not able to comply with regulatory requirements for electronic data capture in clinical trials. In contrast, Electronic Data Capture (EDC) software enables a reliable, secure and auditable collection of data. In this regard, most EDC vendors support the CDISC ODM standard to define, communicate and archive clinical trial meta- and patient data. Advantages of EDC systems are support for multi-user and multicenter clinical trials as well as auditable data. Migration from spreadsheet-based data collection to EDC systems is labor-intensive and time-consuming at present. Hence, the objectives of this research work are to develop a mapping model and implement a converter between the IBM SPSS and CDISC ODM standards and to evaluate this approach regarding syntactic and semantic correctness. A mapping model between IBM SPSS and CDISC ODM data structures was developed. SPSS variables and patient values can be mapped and converted into ODM. Statistical and display attributes from SPSS do not correspond to any ODM elements; study-related ODM elements are not available in SPSS. The S2O converting tool was implemented as a command-line tool using the SPSS-internal Java plugin. Syntactic and semantic correctness was validated with different ODM tools and by reverse transformation from ODM into the SPSS format. Clinical data values were also successfully transformed into the ODM structure. Transformation between the IBM SPSS spreadsheet format and the ODM standard for definition and exchange of trial data is feasible. S2O facilitates migration from Excel- or SPSS-based data collections towards reliable EDC systems, thereby achieving the advantages of EDC systems, such as a reliable software architecture for secure and traceable data collection and, in particular, compliance with regulatory requirements.
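
    As an illustration of the kind of mapping S2O performs (not its actual Java code), the sketch below emits a minimal ODM-style ClinicalData tree for one SPSS case; the element names follow CDISC ODM 1.3 conventions, while the OIDs and values are invented:

```python
# Illustrative-only conversion of one SPSS-style case into CDISC ODM
# ClinicalData XML. Element names follow ODM 1.3; OIDs/data are invented.
import xml.etree.ElementTree as ET

spss_case = {"SUBJ": "001", "AGE": "54", "SEX": "F"}   # one case, simplified

odm = ET.Element("ODM", FileOID="S2O.EXAMPLE", FileType="Snapshot")
clinical = ET.SubElement(odm, "ClinicalData",
                         StudyOID="ST.1", MetaDataVersionOID="MDV.1")
subject = ET.SubElement(clinical, "SubjectData", SubjectKey=spss_case["SUBJ"])
event = ET.SubElement(subject, "StudyEventData", StudyEventOID="SE.1")
form = ET.SubElement(event, "FormData", FormOID="F.1")
igroup = ET.SubElement(form, "ItemGroupData", ItemGroupOID="IG.1")
for var in ("AGE", "SEX"):                 # one ItemData per SPSS variable
    ET.SubElement(igroup, "ItemData", ItemOID="IT." + var, Value=spss_case[var])

print(ET.tostring(odm, encoding="unicode"))
```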

  17. The Thurgood Marshall School of Law Empirical Findings: A Report of the Statistical Analysis of the February 2010 TMSL Texas Bar Results

    ERIC Educational Resources Information Center

    Kadhi, T.; Holley, D.; Rudley, D.; Garrison, P.; Green, T.

    2010-01-01

    The following report gives the statistical findings of the 2010 Thurgood Marshall School of Law (TMSL) Texas Bar results. This data was pre-existing and was given to the Evaluator by email from the Dean. Then, in-depth statistical analyses were run using SPSS 17 to address the following questions: 1. What are the statistical descriptors of the…

  18. A statistical package for computing time and frequency domain analysis

    NASA Technical Reports Server (NTRS)

    Brownlow, J.

    1978-01-01

    The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
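
    SPA itself is a NASA-era program, but its pipeline (trend removal, digital filtering, then time- and frequency-domain characterization) can be sketched with modern SciPy on a synthetic record:

```python
# Modern sketch of the SPA-style pipeline (not the original program):
# linear trend removal, low-pass filtering, then time- and frequency-domain
# characterization of a synthetic record. Sample rate and signal are invented.
import numpy as np
from scipy import signal

fs = 100.0                                        # Hz, assumed sample rate
t = np.arange(0, 20, 1 / fs)
x = (0.05 * t                                     # slow linear drift
     + np.sin(2 * np.pi * 5 * t)                  # 5 Hz tone of interest
     + np.random.default_rng(7).normal(0, 0.3, t.size))

x_detrended = signal.detrend(x, type="linear")    # linear trend removal
b, a = signal.butter(4, 20, btype="low", fs=fs)   # digital low-pass filter
x_filtered = signal.filtfilt(b, a, x_detrended)

freqs, psd = signal.welch(x_filtered, fs=fs, nperseg=1024)
print("time domain: mean=%.3f, std=%.3f" % (x_filtered.mean(), x_filtered.std()))
print("dominant frequency: %.2f Hz" % freqs[np.argmax(psd)])
```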

  19. Multivariate analysis of meat production traits in Murciano-Granadina goat kids.

    PubMed

    Zurita-Herrera, P; Delgado, J V; Argüello, A; Camacho, M E

    2011-07-01

    Growth, carcass quality, and meat quality data from Murciano-Granadina kids (n=61) raised under three different systems were collected. Canonical discriminatory analysis and cluster analysis of the entire meat production process and its stages were performed using the rearing systems as grouping criteria. All comparisons resulted in significant differences and indicated the existence of three products with different quality characteristics as a result of the three rearing systems. Differences among groups were greater when comparing carcass and meat qualities than when comparing growth. The paired analyses of canonical correlations among the groups of variables covering growth, carcass quality and meat quality were all statistically significant, most notably the canonical correlation between carcass quality and meat quality. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. A state-of-the-art review of transportation systems evaluation techniques relevant to air transportation, volume 1. [urban planning and urban transportation using decision theory

    NASA Technical Reports Server (NTRS)

    Haefner, L. E.

    1975-01-01

    Mathematical and philosophical approaches are presented for the evaluation and implementation of ground and air transportation systems. Basic decision processes used for cost analyses and planning (e.g., statistical decision theory, linear and dynamic programming, optimization, game theory) are examined. The effects on the environment and the community that a transportation system may have are discussed and modelled. Algorithmic structures are examined and selected bibliographic annotations are included. Transportation dynamic models were developed. Citizen participation in transportation projects (e.g., in Maryland and Massachusetts) is discussed. The relevance of the modelling and evaluation approaches to air transportation (e.g., airport planning) is examined in a case study in St. Louis, Missouri.

  1. Completeness and overlap in open access systems: Search engines, aggregate institutional repositories and physics-related open sources.

    PubMed

    Tsay, Ming-Yueh; Wu, Tai-Luan; Tseng, Ling-Li

    2017-01-01

    This study examines the completeness and overlap of coverage in physics of six open access scholarly communication systems, including two search engines (Google Scholar and Microsoft Academic), two aggregate institutional repositories (OAIster and OpenDOAR), and two physics-related open sources (arXiv.org and Astrophysics Data System). The 2001-2013 Nobel Laureates in Physics served as the sample. Bibliographic records of their publications were retrieved and downloaded from each system, and a computer program was developed to perform the analytical tasks of sorting, comparison, elimination, aggregation and statistical calculations. Quantitative analyses and cross-referencing were performed to determine the completeness and overlap of the system coverage of the six open access systems. The results may enable scholars to select an appropriate open access system as an efficient scholarly communication channel, and academic institutions may build institutional repositories or independently create citation index systems in the future. Suggestions on indicators and tools for academic assessment are presented based on the comprehensiveness assessment of each system.
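
    The analytical core, once records are reduced to comparable keys, is set arithmetic; a toy sketch with invented DOI lists standing in for the retrieved records:

```python
# Toy sketch of the completeness/overlap computation: each system's records
# are reduced to a set of keys (here invented DOI-like strings), then compared.
systems = {
    "Google Scholar": {"10.1/a", "10.1/b", "10.1/c", "10.1/d"},
    "arXiv.org":      {"10.1/b", "10.1/c", "10.1/e"},
    "OAIster":        {"10.1/a", "10.1/c"},
}

union_all = set().union(*systems.values())       # everything found anywhere
for name, recs in systems.items():
    completeness = len(recs) / len(union_all)    # share of the union covered
    print(f"{name}: {len(recs)} records, completeness {completeness:.0%}")

names = list(systems)
for i, m in enumerate(names):                    # pairwise overlap counts
    for n in names[i + 1:]:
        print(m, "overlap", n, "=", len(systems[m] & systems[n]))
```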

  2. Universal Inverse Power-Law Distribution for Fractal Fluctuations in Dynamical Systems: Applications for Predictability of Inter-Annual Variability of Indian and USA Region Rainfall

    NASA Astrophysics Data System (ADS)

    Selvam, A. M.

    2017-01-01

    Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales, indicating long-range correlations; therefore, the statistical normal distribution, with its implicit assumptions of independence and a fixed mean and standard deviation, cannot be used for description and quantification of fractal data sets. The author has developed a general systems theory based on classical statistical physics for fractal fluctuations which predicts the following. (1) The fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow the universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos, since the additive amplitudes of eddies when squared represent probability densities, analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model-predicted distribution is very close to the statistical normal distribution for moderate events within two standard deviations from the mean but exhibits a fat long tail associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow the model-predicted universal inverse power-law form, signifying an eddy continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse-gas-related atmospheric warming would result in intensification of natural climate variability, seen immediately in high-frequency fluctuations such as the QBO and ENSO and on even shorter timescales. Model concepts and results of analyses are discussed with reference to possible prediction of climate change. Model concepts, if correct, unambiguously rule out linear trends in climate; climate change will only be manifested as an increase or decrease in the natural variability. However, more stringent tests of model concepts and predictions are required before application to such an important issue as climate change. Observations and simulations with climate models show that precipitation extremes intensify in response to a warming climate (O'Gorman in Curr Clim Change Rep 1:49-59, 2015).

  3. Entropy for Mechanically Vibrating Systems

    NASA Astrophysics Data System (ADS)

    Tufano, Dante

    The research contained within this thesis deals with the subject of entropy as defined for and applied to mechanically vibrating systems. This work begins with an overview of entropy as it is understood in the fields of classical thermodynamics, information theory, statistical mechanics, and statistical vibroacoustics. Khinchin's definition of entropy, which is the primary definition used for the work contained in this thesis, is introduced in the context of vibroacoustic systems. The main goal of this research is to to establish a mathematical framework for the application of Khinchin's entropy in the field of statistical vibroacoustics by examining the entropy context of mechanically vibrating systems. The introduction of this thesis provides an overview of statistical energy analysis (SEA), a modeling approach to vibroacoustics that motivates this work on entropy. The objective of this thesis is given, and followed by a discussion of the intellectual merit of this work as well as a literature review of relevant material. Following the introduction, an entropy analysis of systems of coupled oscillators is performed utilizing Khinchin's definition of entropy. This analysis develops upon the mathematical theory relating to mixing entropy, which is generated by the coupling of vibroacoustic systems. The mixing entropy is shown to provide insight into the qualitative behavior of such systems. Additionally, it is shown that the entropy inequality property of Khinchin's entropy can be reduced to an equality using the mixing entropy concept. This equality can be interpreted as a facet of the second law of thermodynamics for vibroacoustic systems. Following this analysis, an investigation of continuous systems is performed using Khinchin's entropy. It is shown that entropy analyses using Khinchin's entropy are valid for continuous systems that can be decomposed into a finite number of modes. The results are shown to be analogous to those obtained for simple oscillators, which demonstrates the applicability of entropy-based approaches to real-world systems. Three systems are considered to demonstrate these findings: 1) a rod end-coupled to a simple oscillator, 2) two end-coupled rods, and 3) two end-coupled beams. The aforementioned work utilizes the weak coupling assumption to determine the entropy of composite systems. Following this discussion, a direct method of finding entropy is developed which does not rely on this limiting assumption. The resulting entropy provides a useful benchmark for evaluating the accuracy of the weak coupling approach, and is validated using systems of coupled oscillators. The later chapters of this work discuss Khinchin's entropy as applied to nonlinear and nonconservative systems, respectively. The discussion of entropy for nonlinear systems is motivated by the desire to expand the applicability of SEA techniques beyond the linear regime. The discussion of nonconservative systems is also crucial, since real-world systems interact with their environment, and it is necessary to confirm the validity of an entropy approach for systems that are relevant in the context of SEA. Having developed a mathematical framework for determining entropy under a number of previously unexplored cases, the relationship between thermodynamics and statistical vibroacoustics can be better understood. Specifically, vibroacoustic temperatures can be obtained for systems that are not necessarily linear or weakly coupled. 
In this way, entropy provides insight into how the power flow proportionality of statistical energy analysis (SEA) can be applied to a broader class of vibroacoustic systems. As such, entropy is a useful tool for both justifying and expanding the foundational results of SEA.

  4. Statistical strategies to quantify respiratory sinus arrhythmia: Are commonly used metrics equivalent?

    PubMed Central

    Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.

    2011-01-01

    Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367

  5. Confidence crisis of results in biomechanics research.

    PubMed

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

  6. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
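
    The family-level binomial test the authors describe asks whether a family's share of medicinal species departs from the flora-wide rate; a sketch with invented counts (not the Shuar data):

```python
# Binomial test for over-/under-representation of medicinal plants in one
# family, against the flora-wide rate as the null. Counts are invented.
from scipy import stats

flora_species, medicinal_species = 3500, 700      # whole flora (placeholder)
family_species, family_medicinal = 120, 45        # one candidate family

p_overall = medicinal_species / flora_species     # null: family matches flora
res = stats.binomtest(family_medicinal, family_species, p_overall,
                      alternative="two-sided")
print("observed rate %.2f vs null %.2f, p = %.4g"
      % (family_medicinal / family_species, p_overall, res.pvalue))
```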

  7. Validating Future Force Performance Measures (Army Class): Concluding Analyses

    DTIC Science & Technology

    2016-06-01

    Front-matter excerpt (tables of descriptive statistics): Table 3.10, Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores; Table 4.7, Descriptive Statistics for Analysis Criteria. The criterion constructs for Soldier attrition and performance include Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness.

  8. FHWA statistical program : a customer's guide to using highway statistics

    DOT National Transportation Integrated Search

    1995-08-01

    The appropriate level of spatial and temporal data aggregation for highway vehicle emissions analyses is one of several important analytical questions that has received considerable interest following passage of the Clean Air Act Amendments (CAAA) of...

  9. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    PubMed

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system 'Analyse It Doc' (A.I.D.) as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with A.I.D. system was nearly 10-fold faster than that with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as a significantly higher accuracy and reproducibility of angular measurements in submental view. No statistically significant inter-method differences were found for other measurements. The hereby presented novel computer system is suitable for simple, time-efficient and reliable multicenter photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. Nonparametric statistical modeling of binary star separations

    NASA Technical Reports Server (NTRS)

    Heacox, William D.; Gathright, John

    1994-01-01

    We develop a comprehensive statistical model for the distribution of observed separations in binary star systems, in terms of distributions of orbital elements, projection effects, and distances to systems. We use this model to derive several diagnostics for estimating the completeness of imaging searches for stellar companions, and the underlying stellar multiplicities. In application to recent imaging searches for low-luminosity companions to nearby M dwarf stars, and for companions to young stars in nearby star-forming regions, our analyses reveal substantial uncertainty in estimates of stellar multiplicity. For binary stars with late-type dwarf companions, semimajor axes appear to be distributed approximately as a^(-1) for values ranging from about one to several thousand astronomical units. About one-quarter of the companions to field F and G dwarf stars have semimajor axes less than 1 AU, and about 15% lie beyond 1000 AU. The geometric efficiency (fraction of companions imaged onto the detector) of imaging searches is nearly independent of distances to program stars and orbital eccentricities, and varies only slowly with detector spatial limitations.

  11. ParallABEL: an R library for generalized parallelization of genome-wide association studies

    PubMed Central

    2010-01-01

    Background Genome-Wide Association (GWA) analysis is a powerful method for identifying loci associated with complex traits and drug response. Parts of GWA analyses, especially those involving thousands of individuals and consuming hours to months, will benefit from parallel computation. It is arduous acquiring the necessary programming skills to correctly partition and distribute data, control and monitor tasks on clustered computers, and merge output files. Results Most components of GWA analysis can be divided into four groups based on the types of input data and statistical outputs. The first group contains statistics computed for a particular Single Nucleotide Polymorphism (SNP), or trait, such as SNP characterization statistics or association test statistics; the input data of this group comprises the SNPs/traits. The second group concerns statistics characterizing an individual in a study, for example the summary statistics of genotype quality for each sample; the input data of this group comprises the individuals. The third group consists of pair-wise statistics derived from analyses between each pair of individuals in the study, for example genome-wide identity-by-state or genomic kinship analyses; the input data of this group comprises pairs of individuals. The final group concerns pair-wise statistics derived for pairs of SNPs, such as linkage disequilibrium characterisation; the input data of this group comprises pairs of SNPs. We developed the ParallABEL library, which utilizes the Rmpi library, to parallelize these four types of computations. The ParallABEL library is not only aimed at GenABEL, but may also be employed to parallelize various GWA packages in R. The data set from the North American Rheumatoid Arthritis Consortium (NARAC), which includes 2,062 individuals genotyped at 545,080 SNPs, was used to measure ParallABEL performance. Almost perfect speed-up was achieved for many types of analyses. For example, the computing time for the identity-by-state matrix was linearly reduced from approximately eight hours to one hour when ParallABEL employed eight processors. Conclusions Executing genome-wide association analysis using the ParallABEL library on a computer cluster is an effective way to boost performance and simplify the parallelization of GWA studies. ParallABEL is a user-friendly parallelization of GenABEL. PMID:20429914
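
    ParallABEL itself is an R library built on Rmpi; purely as a language-neutral illustration of its 'first group' pattern (embarrassingly parallel per-SNP statistics partitioned across workers), a Python sketch:

```python
# Language-neutral illustration (Python, not the R/Rmpi implementation) of
# "first group" GWA parallelism: an independent statistic per SNP, with the
# SNPs partitioned in chunks across worker processes. Data are synthetic.
from multiprocessing import Pool
import random

def snp_statistic(genotypes):
    """Toy per-SNP statistic: minor-allele frequency from 0/1/2 genotypes."""
    return sum(genotypes) / (2.0 * len(genotypes))

if __name__ == "__main__":
    random.seed(0)
    snps = [[random.randint(0, 2) for _ in range(500)] for _ in range(10_000)]
    with Pool(processes=8) as pool:              # one chunk of SNPs per worker
        mafs = pool.map(snp_statistic, snps, chunksize=1000)
    print("computed", len(mafs), "per-SNP statistics; first:", round(mafs[0], 3))
```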

  12. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
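
    Two of the simpler approximations surveyed can be stated in a few lines; the exact formulas vary across the cited literature (e.g. Hozo et al. 2005; Wan et al. 2014), so treat these as indicative:

```python
# Indicative versions of two reviewed approximations for missing summary
# statistics; the literature offers refinements that depend on sample size.
def sd_from_range(minimum, maximum):
    """Crude SD approximation when only the range is reported (range/4)."""
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1, median, q3):
    """Mean estimate from median and quartiles, useful for skewed outcomes."""
    return (q1 + median + q3) / 3.0

print("SD   ~", sd_from_range(2.0, 18.0))          # -> 4.0
print("mean ~", round(mean_from_quartiles(3.1, 5.0, 9.4), 2))
```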

  13. The evolution of the health system outcomes in Central and Eastern Europe and their association with social, economic and political factors: an analysis of 25 years of transition.

    PubMed

    Romaniuk, Piotr; Szromek, Adam R

    2016-03-17

    After the fall of communism, the countries of Central and Eastern Europe started the process of political, economic, and social transformation. In health systems, reform directions were often similar, despite differences in transition dynamics and in the degree of government determination to implement reforms. Nonetheless, for most post-communist countries, there is a gap in evidence regarding the effectiveness of implemented reforms and their impact on health system performance. The present study attempts to analyse and evaluate the results of health reforms in CEE countries with regard to their influence on health system outcomes. We also analysed the external and internal health system environments during the transition period to determine the factors affecting the effectiveness of health reforms. We compared indicators of population health status, lifestyle, occupational safety issues and health system resources in 21 post-communist countries between sub-periods across the entire transition period at the aggregate level. The dynamics of change in health system outcomes in individual countries, as well as between countries, was also compared. Finally, we analysed the correlations between health system outcomes, gathered into one synthetic measure, and factors considered potential determinants of the effectiveness of health reforms. The analyses were performed using one-dimensional, two-dimensional and multidimensional statistical methods. The data were retrieved from international databases, such as those of the WHO, the World Bank, the International Labour Organization, the World Value Survey and the European Social Survey. Among the factors positively stimulating improvements in health system outcomes were total expenditure on health and a lower financial burden on patients, but outcomes were determined primarily by the broader economic context of the country. Another finding was that a better initial position positively determined health system outcomes at later stages, but did not affect the degree of improvement. Countries that embarked on comprehensive reforms early on tended to achieve the greatest improvements in health system outcomes. Poorer countries may have only limited ability to improve health system outcomes by committing more financial resources to the health system. Progress can still be made in terms of health behaviours, since policies to address these have so far been insufficient or ineffective.

  14. Mathematical background and attitudes toward statistics in a sample of Spanish college students.

    PubMed

    Carmona, José; Martínez, Rafael J; Sánchez, Manuel

    2005-08-01

    To examine the relation between mathematical background and initial attitudes toward statistics among Spanish college students in the social sciences, the Survey of Attitudes Toward Statistics was given to 827 students. Multivariate analyses tested the effects of two indicators of mathematical background (amount of exposure and achievement in previous courses) on the four subscales. Analysis suggested that grades in previous courses are more related to initial attitudes toward statistics than the number of mathematics courses taken. Mathematical background was related to students' affective responses to statistics but not to their valuing of statistics. Implications for further research are discussed.

  15. Data harmonization and federated analysis of population-based studies: the BioSHaRE project

    PubMed Central

    2013-01-01

    Abstract Background Individual-level data pooling of large population-based studies across research centres in international research projects faces many hurdles. The BioSHaRE (Biobank Standardisation and Harmonisation for Research Excellence in the European Union) project aims to address these issues by building a collaborative group of investigators and developing tools for data harmonization, database integration and federated data analyses. Methods Eight population-based studies in six European countries were recruited to participate in the BioSHaRE project. Through workshops, teleconferences and electronic communications, participating investigators identified a set of 96 variables targeted for harmonization to answer research questions of interest. Using each study’s questionnaires, standard operating procedures, and data dictionaries, harmonization potential was assessed. Whenever harmonization was deemed possible, processing algorithms were developed and implemented in an open-source software infrastructure to transform study-specific data into the target (i.e. harmonized) format. Harmonized datasets located on servers in each research centre across Europe were interconnected through a federated database system to perform statistical analyses. Results Retrospective harmonization led to the generation of common-format variables for 73% of matches considered (96 targeted variables across 8 studies). Authenticated investigators can now perform complex statistical analyses of harmonized datasets stored on distributed servers, without actually sharing individual-level data, using the DataSHIELD method. Conclusion New Internet-based networking technologies and database management systems are providing the means to support collaborative, multi-center research in an efficient and secure manner. The results from this pilot project show that, given a strong collaborative relationship between participating studies, it is possible to seamlessly co-analyse internationally harmonized research databases while allowing each study to retain full control over individual-level data. We encourage additional collaborative research networks in epidemiology, public health, and the social sciences to make use of the open source tools presented herein. PMID:24257327
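
    To illustrate the federated principle, without showing the actual DataSHIELD API, here is a minimal sketch in which each site releases only aggregate sufficient statistics (count, sum, sum of squares) and the coordinating analysis pools them; all names and values are hypothetical.

```python
# Minimal sketch of the federated idea: each site shares only aggregate
# sufficient statistics, never individual records. Illustrative only;
# this is not the BioSHaRE/DataSHIELD implementation.

def site_summary(values):
    """What a site would release: n, sum, and sum of squares."""
    n = len(values)
    s = sum(values)
    ss = sum(v * v for v in values)
    return n, s, ss

def pooled_mean_var(summaries):
    """Central pooling of the site-level summaries."""
    n = sum(t[0] for t in summaries)
    s = sum(t[1] for t in summaries)
    ss = sum(t[2] for t in summaries)
    mean = s / n
    var = (ss - n * mean ** 2) / (n - 1)  # pooled sample variance
    return mean, var

site_a = site_summary([5.1, 6.0, 5.7])
site_b = site_summary([4.9, 5.5, 6.2, 5.8])
print(pooled_mean_var([site_a, site_b]))
```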

  16. [Quality assessment in anesthesia].

    PubMed

    Kupperwasser, B

    1996-01-01

    Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and to the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions should be based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, establishing an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, which, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, checks all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, are the third step of the evaluation cycle. The Hawthorne effect is an outcome improvement occurring before the implementation of any corrective actions. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.
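
    As a sketch of the quantitative methods mentioned above (control charts over indicator data), the following computes 3-sigma limits for a proportion (p) chart of a sentinel-indicator rate. The monthly counts are invented, and the p-chart is one plausible choice, not the article's prescription.

```python
import math

def p_chart_limits(event_counts, sample_sizes):
    """3-sigma control limits for a proportion (p) chart, e.g. a monthly
    rate of a sentinel indicator per anaesthesia case."""
    p_bar = sum(event_counts) / sum(sample_sizes)  # overall mean rate
    limits = []
    for n in sample_sizes:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        lcl = max(0.0, p_bar - 3 * sigma)
        ucl = min(1.0, p_bar + 3 * sigma)
        limits.append((lcl, ucl))
    return p_bar, limits

# Hypothetical monthly incident counts and caseloads
p_bar, limits = p_chart_limits([4, 7, 3, 9], [420, 410, 450, 430])
print(p_bar, limits)  # points outside their limits flag a quality problem
```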

  17. Physician-patient relationship and medical accident victim compensation: some insights into the French regulatory system.

    PubMed

    Ancelot, Lydie; Oros, Cornel

    2015-06-01

    Given the growing amount of medical litigation heard by courts, the 2002 Kouchner law in France created the Office National d'Indemnisation des Accidents Médicaux (ONIAM), whose main aim is to encourage out-of-court settlements when a conflict between a physician and the victim of a medical accident occurs. More than 10 years after the implementation of this law, the statistics analysing its effectiveness are contradictory, which raises the question of the potential negative effects of the ONIAM on the compensation system. In order to address this question, the article analyses the impact of the ONIAM on the nature of settlement negotiations between the physician and the victim. Using a dynamic game with incomplete information, we develop a comparative analysis of two types of compensation systems in case of medical accidents: socialised financing granted by the ONIAM and private financing provided by the physician. We show that the ONIAM could encourage out-of-court settlements provided that the hypothesis of judicial error is relevant. On the contrary, in the case of a low probability of judicial errors, the ONIAM could be effective only for severe medical accidents.

  18. Tiger beetles pursue prey using a proportional control law with a delay of one half-stride.

    PubMed

    Haselsteiner, Andreas F; Gilbert, Cole; Wang, Z Jane

    2014-06-06

    Tiger beetles are fast diurnal predators capable of chasing prey under closed-loop visual guidance. We investigated this control system using statistical analyses of high-speed digital recordings of beetles chasing a moving prey dummy in a laboratory arena. Correlation analyses reveal that the beetle uses a proportional control law in which the angular position of the prey relative to the beetle's body axis drives the beetle's angular velocity with a delay of about 28 ms. The proportionality coefficient or system gain, 12 s⁻¹, is just below critical damping. Pursuit simulations using the derived control law predict angular orientation during pursuits with a residual error of about 7°. This is of the same order of magnitude as the oscillation imposed by the beetle's alternating tripod gait, which was not factored into the control law. The system delay of 28 ms equals a half-stride period, i.e. the time between the touch down of alternating tripods. Based on these results, we propose a physical interpretation of the observed control law: to turn towards its prey, the beetle on average exerts a sideways force proportional to the angular position of the prey measured a half-stride earlier.
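
    The derived control law is concrete enough to simulate. The sketch below integrates heading under omega(t) = K * error(t - delay), with the reported gain of 12 s⁻¹ and 28 ms delay, for a stationary prey; the time step and scenario are assumptions for illustration, not the authors' simulation code.

```python
import math

K = 12.0       # proportional gain (1/s), from the paper
DELAY = 0.028  # s, about one half-stride
DT = 0.001     # integration step (s); an arbitrary choice

def simulate(prey_bearing, t_end=1.0):
    """Euler-integrate the heading under the delayed proportional law
    omega(t) = K * error(t - DELAY), for a prey at fixed bearing (rad)."""
    steps = int(t_end / DT)
    lag = int(DELAY / DT)
    heading = 0.0
    errors = [prey_bearing - heading]  # error history acts as a delay line
    for _ in range(steps):
        delayed_error = errors[max(0, len(errors) - 1 - lag)]
        heading += K * delayed_error * DT
        errors.append(prey_bearing - heading)
    return heading

print(simulate(math.radians(30)))  # heading converges toward 30 degrees
```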

  19. Maintaining disorder: the micropolitics of drugs policy in Iran

    PubMed Central

    2018-01-01

    Abstract This article analyses the ways in which the state ‘treats’ addiction among precarious drug (ab)users in Iran. While most Muslim-majority as well as some Western states have been reluctant to adopt harm reduction measures, the Islamic Republic of Iran has done so on a nationwide scale and through a sophisticated system of welfare intervention. Additionally, it has introduced devices of management of ‘addiction’ (the ‘camps’) that defy statist modes of punishment and private violence. What legal and ethical framework has this new situation engendered? And what does this new situation tell us about the governmentality of the state? Through a combination of historical analysis and ethnographic fieldwork, the article analyses the paradigm of government of the Iranian state with regard to disorder as embodied by the lives of poor drug (ab)users. PMID:29456274

  20. Investigation of trace elements in ancient pottery from Jenini, Brong Ahafo region, Ghana by INAA and Compton suppression spectrometry

    NASA Astrophysics Data System (ADS)

    Nyarko, B. J. B.; Bredwa-Mensah, Y.; Serfor-Armah, Y.; Dampare, S. B.; Akaho, E. H. K.; Osae, S.; Perbi, A.; Chatt, A.

    2007-10-01

    Concentrations of trace elements in ancient pottery excavated from Jenini in the Brong Ahafo region of Ghana were determined using instrumental neutron activation analysis (INAA) in conjunction with both conventional and Compton suppression counting. Jenini was a slave camp of Samory Toure during the era of indigenous slavery and the Trans-Atlantic slave trade. Pottery fragments found during the excavation of the grave tombs of the slaves who died in the camp were analysed. In all, 26 trace elements were determined in 40 pottery fragments. These elemental concentrations were processed using multivariate statistical methods (cluster, factor and discriminant analyses) in order to determine similarities and correlations between the various samples. The suitability of the two counting systems for the determination of trace elements in pottery objects was also evaluated.
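
    As an illustration of the multivariate grouping step described above (not the authors' exact procedure), the following clusters standardized elemental concentrations hierarchically; the element values are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical concentrations (ppm) in pottery sherds;
# rows = sherds, columns = elements (e.g. Sc, Cr, Fe, Co)
X = np.array([
    [12.1,  95.0, 4.2e4, 18.3],
    [11.8,  92.5, 4.1e4, 17.9],
    [25.4, 140.2, 6.0e4, 30.1],
    [24.9, 138.7, 5.9e4, 29.6],
])

# Standardize each element, then cluster with Ward's method
Z = (X - X.mean(axis=0)) / X.std(axis=0)
tree = linkage(Z, method="ward")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)  # e.g. [1 1 2 2]: two compositional groups
```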

  1. A randomized evaluation of a computer-based physician's workstation: design considerations and baseline results.

    PubMed Central

    Rotman, B. L.; Sullivan, A. N.; McDonald, T.; DeSmedt, P.; Goodnature, D.; Higgins, M.; Suermondt, H. J.; Young, C. Y.; Owens, D. K.

    1995-01-01

    We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system, developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. As PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention as well as to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 cents +/- 13.4 cents, with a range from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% +/- 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias. PMID:8563376

  2. The International Neuroblastoma Risk Group (INRG) Classification System: An INRG Task Force Report

    PubMed Central

    Cohn, Susan L.; Pearson, Andrew D.J.; London, Wendy B.; Monclair, Tom; Ambros, Peter F.; Brodeur, Garrett M.; Faldum, Andreas; Hero, Barbara; Iehara, Tomoko; Machin, David; Mosseri, Veronique; Simon, Thorsten; Garaventa, Alberto; Castel, Victoria; Matthay, Katherine K.

    2009-01-01

    Purpose Because current approaches to risk classification and treatment stratification for children with neuroblastoma (NB) vary greatly throughout the world, it is difficult to directly compare risk-based clinical trials. The International Neuroblastoma Risk Group (INRG) classification system was developed to establish a consensus approach for pretreatment risk stratification. Patients and Methods The statistical and clinical significance of 13 potential prognostic factors were analyzed in a cohort of 8,800 children diagnosed with NB between 1990 and 2002 from North America and Australia (Children's Oncology Group), Europe (International Society of Pediatric Oncology Europe Neuroblastoma Group and German Pediatric Oncology and Hematology Group), and Japan. Survival tree regression analyses using event-free survival (EFS) as the primary end point were performed to test the prognostic significance of the 13 factors. Results Stage, age, histologic category, grade of tumor differentiation, the status of the MYCN oncogene, chromosome 11q status, and DNA ploidy were the most highly statistically significant and clinically relevant factors. A new staging system (INRG Staging System) based on clinical criteria and tumor imaging was developed for the INRG Classification System. The optimal age cutoff was determined to be between 15 and 19 months, and 18 months was selected for the classification system. Sixteen pretreatment groups were defined on the basis of clinical criteria and statistically significantly different EFS of the cohort stratified by the INRG criteria. Patients with 5-year EFS > 85%, > 75% to ≤ 85%, ≥ 50% to ≤ 75%, or < 50% were classified as very low risk, low risk, intermediate risk, or high risk, respectively. Conclusion By defining homogenous pretreatment patient cohorts, the INRG classification system will greatly facilitate the comparison of risk-based clinical trials conducted in different regions of the world and the development of international collaborative studies. PMID:19047291
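
    The published EFS cutoffs translate directly into a lookup rule. The sketch below simply encodes the four bands from the abstract; it is a paraphrase of the thresholds, not INRG software.

```python
def inrg_risk_band(efs_5yr):
    """Map a 5-year EFS (%) onto the four risk bands in the abstract."""
    if efs_5yr > 85:
        return "very low risk"
    if efs_5yr > 75:       # > 75% to <= 85%
        return "low risk"
    if efs_5yr >= 50:      # >= 50% to <= 75%
        return "intermediate risk"
    return "high risk"

print(inrg_risk_band(90))  # very low risk
print(inrg_risk_band(60))  # intermediate risk
```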

  3. Prognostic Value of NME1 (NM23-H1) in Patients with Digestive System Neoplasms: A Systematic Review and Meta-Analysis.

    PubMed

    Han, Wei; Shi, Chun-Tao; Cao, Fei-Yun; Cao, Fang; Chen, Min-Bin; Lu, Rong-Zhu; Wang, Hua-Bing; Yu, Min; He, Da-Wei; Wang, Qing-Hua; Wang, Jie-Feng; Xu, Xuan-Xuan; Ding, Hou-Zhong

    2016-01-01

    There is a heated debate on whether the prognostic value of NME1 is favorable or unfavorable. Thus, we carried out a meta-analysis to evaluate the relationship between NME1 expression and the prognosis of patients with digestive system neoplasms. We searched PubMed, EMBASE and Web of Science for relevant articles. The pooled odds ratios (ORs) and corresponding 95%CIs were calculated to evaluate the prognostic value of NME1 expression in patients with digestive system neoplasms, and the association between NME1 expression and clinicopathological factors. We also performed subgroup analyses to identify the source of heterogeneity. In total, 2904 patients were pooled from 28 available studies. Neither the pooled OR combined from 17 studies with overall survival (OR = 0.65, 95%CI:0.41-1.03, P = 0.07) nor the pooled OR for disease-free survival (OR = 0.75, 95%CI:0.17-3.36, P = 0.71) reached statistical significance. Although we could not find any significance for TNM stage (OR = 0.78, 95%CI:0.44-1.36, P = 0.38), elevated NME1 expression was related to well-differentiated tumors (OR = 0.59, 95%CI:0.47-0.73, P<0.00001), negative N status (OR = 0.54, 95%CI:0.36-0.82, P = 0.003) and earlier Dukes' stage (OR = 0.43, 95%CI:0.24-0.77, P = 0.004). In the subgroup analyses, only the study years emerged as a possible source of heterogeneity for overall survival in gastric cancer. The results showed a statistically significant association between NME1 expression and the tumor differentiation, N status and Dukes' stage of patients with digestive system cancers, while no significance was found for overall survival, disease-free survival and TNM stage. Further research should be conducted to reveal the prognostic value of NME1.
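
    For readers unfamiliar with how such pooled ORs arise, here is a minimal inverse-variance fixed-effect pooling sketch on the log-OR scale. The input ORs and CIs are illustrative, not the study's data, and the actual meta-analysis may have used a random-effects model.

```python
import math

def pooled_or(study_ors, cis):
    """Inverse-variance fixed-effect pooling on the log-OR scale.
    `cis` are (lower, upper) 95% confidence limits per study."""
    weights, weighted_logs = [], []
    for or_, (lo, hi) in zip(study_ors, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from 95% CI
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_logs.append(w * math.log(or_))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

# Illustrative inputs only
print(pooled_or([0.6, 0.9], [(0.4, 0.9), (0.6, 1.35)]))
```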

  4. PULPAL BLOOD FLOW CHANGES IN ABUTMENT TEETH OF REMOVABLE PARTIAL DENTURES

    PubMed Central

    Kunt, Göknil Ergün; Kökçü, Deniz; Ceylan, Gözlem; Yılmaz, Nergiz; Güler, Ahmet Umut

    2009-01-01

    The purpose of this study was to investigate the effect of tooth-supported (TSD) and tooth-tissue-supported (TTSD) removable partial denture wearing on the pulpal blood flow (PBF) of abutment teeth by using a laser Doppler flowmeter (LDF). Measurements were carried out on 60 teeth of 28 patients (28 teeth of 12 patients in the TTSD group, 32 teeth of 16 patients in the TSD group) who had not worn any type of removable partial denture before, had no systemic problems and were non-smokers. PBF values were recorded by LDF before insertion (day 0) and after insertion of the dentures at day 1, day 7 and day 30. Statistical analysis was performed by Student's t test and covariance analyses of repeated measurements. In the TTSD group, the mean PBF values decreased statistically significantly at day 1 after insertion compared with the PBF values before insertion (p<0.01); there was no statistically significant difference among the PBF mean values on the 1st, 7th and 30th days. In the TSD group, however, there was no statistically significant difference among the PBF mean values before insertion and on the 1st, 7th and 30th days; in other words, PBF mean values in the TSD group continued without statistically significant change. TTSD wearing may have a negative effect on the abutment teeth due to decreased basal PBF. PMID:20001995
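
    A minimal sketch of the paired comparison underlying the day 0 versus day 1 result (Student's t test on repeated measurements of the same teeth); the PBF values are invented, and the full analysis also involved covariance analyses not shown here.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical PBF readings (perfusion units) for the same abutment
# teeth before insertion (day 0) and at day 1 after insertion
pbf_day0 = np.array([11.2, 9.8, 10.5, 12.1, 10.9])
pbf_day1 = np.array([9.6, 8.9, 9.4, 10.8, 9.9])

t_stat, p_value = ttest_rel(pbf_day0, pbf_day1)  # paired t test
print(t_stat, p_value)
```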

  5. Evaluation of Solid Rocket Motor Component Data Using a Commercially Available Statistical Software Package

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.

    2015-01-01

    Commercially available software packages today allow users to quickly perform the routine evaluations of (1) descriptive statistics to numerically and graphically summarize both sample and population data, (2) inferential statistics that draws conclusions about a given population from samples taken of it, (3) probability determinations that can be used to generate estimates of reliability allowables, and finally (4) the setup of designed experiments and analysis of their data to identify significant material and process characteristics for application in both product manufacturing and performance enhancement. This paper presents examples of analysis and experimental design work that has been conducted using Statgraphics® statistical software to obtain useful information with regard to solid rocket motor propellants and internal insulation material. Data were obtained from a number of programs (Shuttle, Constellation, and Space Launch System) and sources that include solid propellant burn rate strands, tensile specimens, sub-scale test motors, full-scale operational motors, rubber insulation specimens, and sub-scale rubber insulation analog samples. Besides facilitating the experimental design process to yield meaningful results, statistical software has demonstrated its ability to quickly perform complex data analyses and yield significant findings that might otherwise have gone unnoticed. One caveat to these successes is that useful results not only derive from the inherent power of the software package, but also from the skill and understanding of the data analyst.

  6. Mass univariate analysis of event-related brain potentials/fields I: a critical tutorial review.

    PubMed

    Groppe, David M; Urbach, Thomas P; Kutas, Marta

    2011-12-01

    Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative, mass univariate analyses consisting of thousands of statistical tests and powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the familywise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation. Copyright © 2011 Society for Psychophysiological Research.
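
    Of the four corrections reviewed, false discovery rate control is the simplest to sketch. The following implements the Benjamini-Hochberg procedure over a vector of p values such as a mass univariate analysis produces; the uniform random p values are placeholders for real test results.

```python
import numpy as np

def fdr_bh(p_values, q=0.05):
    """Benjamini-Hochberg procedure: boolean mask of hypotheses
    rejected at false discovery rate q."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= q*i/m
        rejected[order[: k + 1]] = True
    return rejected

# e.g. thousands of time point x electrode tests in a mass univariate analysis
pvals = np.random.uniform(size=10000)
print(fdr_bh(pvals).sum())
```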

  7. Impact of ontology evolution on functional analyses.

    PubMed

    Groß, Anika; Hartung, Michael; Prüfer, Kay; Kelso, Janet; Rahm, Erhard

    2012-10-15

    Ontologies are used in the annotation and analysis of biological data. As knowledge accumulates, ontologies and annotation undergo constant modifications to reflect this new knowledge. These modifications may influence the results of statistical applications such as functional enrichment analyses that describe experimental data in terms of ontological groupings. Here, we investigate to what degree modifications of the Gene Ontology (GO) impact these statistical analyses for both experimental and simulated data. The analysis is based on new measures for the stability of result sets and considers different ontology and annotation changes. Our results show that past changes in the GO are non-uniformly distributed over different branches of the ontology. Considering the semantic relatedness of significant categories in analysis results allows a more realistic stability assessment for functional enrichment studies. We observe that the results of term-enrichment analyses tend to be surprisingly stable despite changes in ontology and annotation.
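
    Term-enrichment analyses of the kind whose stability is studied here typically rest on a one-sided hypergeometric test; a minimal sketch, with invented gene counts, follows.

```python
from scipy.stats import hypergeom

def enrichment_p(study_hits, study_size, population_hits, population_size):
    """One-sided hypergeometric test for over-representation of a GO
    term: P(X >= study_hits) given the annotation frequencies."""
    return hypergeom.sf(study_hits - 1, population_size,
                        population_hits, study_size)

# Hypothetical: 40 of 300 study genes annotated to a term that covers
# 500 of 18000 background genes
print(enrichment_p(40, 300, 500, 18000))
```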

  8. Predicting Subsequent Myopia in Initially Pilot-Qualified USAFA Cadets.

    DTIC Science & Technology

    1985-12-27

    [OCR-garbled fragment] Recoverable section headings: Refraction Measurement; 4.0 Results; 4.1 Descriptive Statistics; 4.2 Predictive Statistics. Recoverable text: "... and three were missing a status. The data of the subject who was commissionable were dropped from the statistical analyses. Of the 91 ... relatively equal numbers of participants from all classes will become obvious within the results."

  9. Statistical reporting of clinical pharmacology research.

    PubMed

    Ring, Arne; Schall, Robert; Loke, Yoon K; Day, Simon

    2017-06-01

    Research in clinical pharmacology covers a wide range of experiments, trials and investigations: clinical trials, systematic reviews and meta-analyses of drug usage after market approval, the investigation of pharmacokinetic-pharmacodynamic relationships, the search for mechanisms of action or for potential signals for efficacy and safety using biomarkers. Often these investigations are exploratory in nature, which has implications for the way the data should be analysed and presented. Here we summarize some of the statistical issues that are of particular importance in clinical pharmacology research. © 2017 The British Pharmacological Society.

  10. Stata companion.

    PubMed

    Brennan, Jennifer Sousa

    2010-01-01

    This chapter is an introductory reference guide highlighting some of the most common statistical topics, broken down into both command-line syntax and graphical interface point-and-click commands. This chapter serves to supplement more formal statistics lessons and expedite using Stata to compute basic analyses.

  11. A review of geographic variation and Geographic Information Systems (GIS) applications in prescription drug use research.

    PubMed

    Wangia, Victoria; Shireman, Theresa I

    2013-01-01

    While understanding geography's role in healthcare has been an area of research for over 40 years, the application of geography-based analyses to prescription medication use is limited. The body of literature was reviewed to assess the current state of such studies to demonstrate the scale and scope of projects in order to highlight potential research opportunities. To review systematically how researchers have applied geography-based analyses to medication use data. Empiric, English language research articles were identified through PubMed and bibliographies. Original research articles were independently reviewed as to the medications or classes studied, data sources, measures of medication exposure, geographic units of analysis, geospatial measures, and statistical approaches. From 145 publications matching key search terms, forty publications met the inclusion criteria. Cardiovascular and psychotropic classes accounted for the largest proportion of studies. Prescription drug claims were the primary source, and medication exposure was frequently captured as period prevalence. Medication exposure was documented across a variety of geopolitical units such as countries, provinces, regions, states, and postal codes. Most results were descriptive and formal statistical modeling capitalizing on geospatial techniques was rare. Despite the extensive research on small area variation analysis in healthcare, there are a limited number of studies that have examined geographic variation in medication use. Clearly, there is opportunity to collaborate with geographers and GIS professionals to harness the power of GIS technologies and to strengthen future medication studies by applying more robust geospatial statistical methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    NASA Astrophysics Data System (ADS)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, the small-scale random heterogeneity is generally not taken into account in the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation: it is not necessary to assume a uniform background velocity, purely body or surface waves, or the scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range, including the intensity around the corner wavenumber, as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
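
    The estimated spectrum can be evaluated directly. The sketch below codes P(m) = 8πε²a³/(1 + a²m²)² with the reported ε = 0.05 and a = 3.1 km; the wavenumber grid is an arbitrary choice for exploring the region around the corner at 1/a.

```python
import numpy as np

def power_spectrum(m, epsilon=0.05, a=3.1):
    """Power spectral density of the random inhomogeneity,
    P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2, with the paper's
    estimates eps = 0.05 and a = 3.1 km (m in 1/km)."""
    return 8.0 * np.pi * epsilon ** 2 * a ** 3 / (1.0 + (a * m) ** 2) ** 2

m = np.logspace(-2, 1, 200)        # wavenumbers spanning the corner
print(power_spectrum(1.0 / 3.1))   # value at the corner wavenumber
```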

  13. Proliferative changes in the bronchial epithelium of former smokers treated with retinoids.

    PubMed

    Hittelman, Walter N; Liu, Diane D; Kurie, Jonathan M; Lotan, Reuben; Lee, Jin Soo; Khuri, Fadlo; Ibarguen, Heladio; Morice, Rodolfo C; Walsh, Garrett; Roth, Jack A; Minna, John; Ro, Jae Y; Broxson, Anita; Hong, Waun Ki; Lee, J Jack

    2007-11-07

    Retinoids have shown antiproliferative and chemopreventive activity. We analyzed data from a randomized, placebo-controlled chemoprevention trial to determine whether a 3-month treatment with either 9-cis-retinoic acid (RA) or 13-cis-RA and alpha-tocopherol reduced Ki-67, a proliferation biomarker, in the bronchial epithelium. Former smokers (n = 225) were randomly assigned to receive 3 months of daily oral 9-cis-RA (100 mg), 13-cis-RA (1 mg/kg) and alpha-tocopherol (1200 IU), or placebo. Bronchoscopic biopsy specimens obtained before and after treatment were immunohistochemically assessed for changes in the Ki-67 proliferative index (i.e., percentage of cells with Ki-67-positive nuclear staining) in the basal and parabasal layers of the bronchial epithelium. Per-subject and per-biopsy site analyses were conducted. Multicovariable analyses, including a mixed-effects model and a generalized estimating equations model, were used to investigate the treatment effect (Ki-67 labeling index and percentage of bronchial epithelial biopsy sites with a Ki-67 index ≥ 5%) with adjustment for multiple covariates, such as smoking history and metaplasia. Coefficient estimates and 95% confidence intervals (CIs) were obtained from the models. All statistical tests were two-sided. In per-subject analyses, Ki-67 labeling in the basal layer was not changed by any treatment; the percentage of subjects with a high Ki-67 labeling in the parabasal layer dropped statistically significantly after treatment with 13-cis-RA and alpha-tocopherol (P = .04) compared with placebo, but the drop was not statistically significant after 9-cis-RA treatment (P = .17). A similar effect was observed in the parabasal layer in a per-site analysis; the percentage of sites with high Ki-67 labeling dropped statistically significantly after 9-cis-RA treatment (coefficient estimate = -0.72, 95% CI = -1.24 to -0.20; P = .007) compared with placebo, and after 13-cis-RA and alpha-tocopherol treatment (coefficient estimate = -0.66, 95% CI = -1.15 to -0.17; P = .008). In per-subject analyses, treatment with 13-cis-RA and alpha-tocopherol, compared with placebo, was statistically significantly associated with reduced bronchial epithelial cell proliferation; treatment with 9-cis-RA was not. In per-site analyses, statistically significant associations were obtained with both treatments.

  14. Proliferative Changes in the Bronchial Epithelium of Former Smokers Treated With Retinoids

    PubMed Central

    Hittelman, Walter N.; Liu, Diane D.; Kurie, Jonathan M.; Lotan, Reuben; Lee, Jin Soo; Khuri, Fadlo; Ibarguen, Heladio; Morice, Rodolfo C.; Walsh, Garrett; Roth, Jack A.; Minna, John; Ro, Jae Y.; Broxson, Anita; Hong, Waun Ki; Lee, J. Jack

    2012-01-01

    Background Retinoids have shown antiproliferative and chemopreventive activity. We analyzed data from a randomized, placebo-controlled chemoprevention trial to determine whether a 3-month treatment with either 9-cis-retinoic acid (RA) or 13-cis-RA and α-tocopherol reduced Ki-67, a proliferation biomarker, in the bronchial epithelium. Methods Former smokers (n = 225) were randomly assigned to receive 3 months of daily oral 9-cis-RA (100 mg), 13-cis-RA (1 mg/kg) and α-tocopherol (1200 IU), or placebo. Bronchoscopic biopsy specimens obtained before and after treatment were immunohistochemically assessed for changes in the Ki-67 proliferative index (i.e., percentage of cells with Ki-67–positive nuclear staining) in the basal and parabasal layers of the bronchial epithelium. Per-subject and per–biopsy site analyses were conducted. Multicovariable analyses, including a mixed-effects model and a generalized estimating equations model, were used to investigate the treatment effect (Ki-67 labeling index and percentage of bronchial epithelial biopsy sites with a Ki-67 index ≥ 5%) with adjustment for multiple covariates, such as smoking history and metaplasia. Coefficient estimates and 95% confidence intervals (CIs) were obtained from the models. All statistical tests were two-sided. Results In per-subject analyses, Ki-67 labeling in the basal layer was not changed by any treatment; the percentage of subjects with a high Ki-67 labeling in the parabasal layer dropped statistically significantly after treatment with 13-cis-RA and α-tocopherol (P = .04) compared with placebo, but the drop was not statistically significant after 9-cis-RA treatment (P = .17). A similar effect was observed in the parabasal layer in a per-site analysis; the percentage of sites with high Ki-67 labeling dropped statistically significantly after 9-cis-RA treatment (coefficient estimate = −0.72, 95% CI = −1.24 to −0.20; P = .007) compared with placebo, and after 13-cis-RA and α-tocopherol treatment (coefficient estimate = −0.66, 95% CI = −1.15 to −0.17; P = .008). Conclusions In per-subject analyses, treatment with 13-cis-RA and α-tocopherol, compared with placebo, was statistically significantly associated with reduced bronchial epithelial cell proliferation; treatment with 9-cis-RA was not. In per-site analyses, statistically significant associations were obtained with both treatments. PMID:17971525

  15. CMG-Biotools, a Free Workbench for Basic Comparative Microbial Genomics

    PubMed Central

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Background Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics that can reveal strain specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. Results The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source codes for all programs are provided under GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. Conclusion This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training. PMID:23577086
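
    As a flavour of the "basic DNA statistics" such a workbench computes, here is a minimal, self-contained sketch (base counts and GC content) rather than CMG-biotools' own code; the sequence is a toy example.

```python
from collections import Counter

def basic_dna_stats(seq):
    """Base counts, GC content and length for one sequence."""
    seq = seq.upper()
    counts = Counter(seq)
    gc = (counts["G"] + counts["C"]) / len(seq)
    return {"length": len(seq), "GC": gc, "counts": dict(counts)}

print(basic_dna_stats("ATGGCGTACGCTAGGATCC"))
```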

  16. A critical evaluation of ecological indices for the comparative analysis of microbial communities based on molecular datasets.

    PubMed

    Lucas, Rico; Groeneveld, Jürgen; Harms, Hauke; Johst, Karin; Frank, Karin; Kleinsteuber, Sabine

    2017-01-01

    In times of global change and intensified resource exploitation, advanced knowledge of ecophysiological processes in natural and engineered systems driven by complex microbial communities is crucial for both safeguarding environmental processes and optimising rational control of biotechnological processes. To gain such knowledge, high-throughput molecular techniques are routinely employed to investigate microbial community composition and dynamics within a wide range of natural or engineered environments. However, for molecular dataset analyses no consensus about a generally applicable alpha diversity concept and no appropriate benchmarking of corresponding statistical indices exist yet. To overcome this, we listed criteria for the appropriateness of an index for such analyses and systematically scrutinised commonly employed ecological indices describing diversity, evenness and richness based on artificial and real molecular datasets. We identified appropriate indices warranting interstudy comparability and intuitive interpretability. The unified diversity concept based on 'effective numbers of types' provides the mathematical framework for describing community composition. Additionally, the Bray-Curtis dissimilarity as a beta-diversity index was found to reflect compositional changes. The employed statistical procedure is presented comprising commented R-scripts and example datasets for user-friendly trial application. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
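
    The two index families endorsed above are compact enough to sketch: Hill numbers ("effective numbers of types") for alpha diversity and the Bray-Curtis dissimilarity for compositional change. The OTU counts below are invented; the paper's own example scripts are in R, whereas this sketch is in Python.

```python
import numpy as np

def hill_number(abundances, q):
    """Effective number of types of order q: q=0 gives richness,
    q->1 gives exp(Shannon entropy), q=2 gives inverse Simpson."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two community profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

otu_a = [120, 30, 5, 0, 2]
otu_b = [80, 60, 0, 10, 1]
print(hill_number(otu_a, 1), bray_curtis(otu_a, otu_b))
```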

  17. Spatial-temporal clustering of companion animal enteric syndrome: detection and investigation through the use of electronic medical records from participating private practices.

    PubMed

    Anholt, R M; Berezowski, J; Robertson, C; Stephen, C

    2015-09-01

    There is interest in the potential of companion animal surveillance to provide data to improve pet health and to provide early warning of environmental hazards to people. We implemented a companion animal surveillance system in Calgary, Alberta and the surrounding communities. Informatics technologies automatically extracted electronic medical records from participating veterinary practices and identified cases of enteric syndrome in the warehoused records. The data were analysed using time-series analyses and a retrospective space-time permutation scan statistic. We identified a seasonal pattern of reports of occurrences of enteric syndromes in companion animals and four statistically significant clusters of enteric syndrome cases. The cases within each cluster were examined and information about the animals involved (species, age, sex), their vaccination history, possible exposure or risk behaviour history, information about disease severity, and the aetiological diagnosis was collected. We then assessed whether the cases within the cluster were unusual and if they represented an animal or public health threat. There was often insufficient information recorded in the medical record to characterize the clusters by aetiology or exposures. Space-time analysis of companion animal enteric syndrome cases found evidence of clustering. Collection of more epidemiologically relevant data would enhance the utility of practice-based companion animal surveillance.

  18. Experimental design of an interlaboratory study for trace metal analysis of liquid fluids. [for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. A.

    1983-01-01

    The accurate determination of trace metals in fuels is an important requirement in much of the research into and development of alternative fuels for aerospace applications. Recognizing the detrimental effects of certain metals on fuel performance and fuel systems at the part-per-million and, in some cases, part-per-billion levels requires improved accuracy in determining these low-concentration elements. Accurate analyses are also required to ensure interchangeability of analysis results between vendor, researcher, and end user for purposes of quality control. Previous interlaboratory studies have demonstrated the inability of different laboratories to agree on the results of metal analysis, particularly at low concentration levels, yet good precision is typically reported within a laboratory. An interlaboratory study was designed to gain statistical information about the sources of variation in the reported concentrations. Five participant laboratories were used on a fee basis and were not informed of the purpose of the analyses. The effects of laboratory, analytical technique, concentration level, and ashing additive were studied in four fuel types for 20 elements of interest. The prescribed sample preparation schemes (variations of dry ashing) were used by all of the laboratories. The analytical data were statistically evaluated using a computer program implementing the analysis of variance technique.
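
    The core of such a design is an analysis of variance over the controlled factors. As a minimal illustration of a single factor (laboratory), with invented concentrations, a one-way ANOVA might look like this; the actual study crossed laboratory with technique, concentration level and ashing additive.

```python
from scipy.stats import f_oneway

# Hypothetical reported concentrations (ppm) of one element in the same
# fuel sample, as measured by three participant laboratories
lab1 = [2.1, 2.3, 2.0, 2.2]
lab2 = [2.8, 2.9, 3.1, 2.7]
lab3 = [2.2, 2.4, 2.3, 2.1]

f_stat, p_value = f_oneway(lab1, lab2, lab3)
print(f_stat, p_value)  # a significant F flags a laboratory effect
```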

  19. Is there a genetic cause for cancer cachexia? – a clinical validation study in 1797 patients

    PubMed Central

    Solheim, T S; Fayers, P M; Fladvad, T; Tan, B; Skorpen, F; Fearon, K; Baracos, V E; Klepstad, P; Strasser, F; Kaasa, S

    2011-01-01

    Background: Cachexia has major impact on cancer patients' morbidity and mortality. Future development of cachexia treatment needs methods for early identification of patients at risk. The aim of the study was to validate nine single-nucleotide polymorphisms (SNPs) previously associated with cachexia, and to explore 182 other candidate SNPs with the potential to be involved in the pathophysiology. Method: A total of 1797 cancer patients, classified as either having severe cachexia, mild cachexia or no cachexia, were genotyped. Results: After allowing for multiple testing, there was no statistically significant association between any of the SNPs analysed and the cachexia groups. However, consistent with prior reports, two SNPs from the acylpeptide hydrolase (APEH) gene showed suggestive statistical significance (P=0.02; OR, 0.78). Conclusion: This study failed to detect any significant association between any of the SNPs analysed and cachexia; although two SNPs from the APEH gene had a trend towards significance. The APEH gene encodes the enzyme APEH, postulated to be important in the endpoint of the ubiquitin system and thus the breakdown of proteins into free amino acids. In cachexia, there is an extensive breakdown of muscle proteins and an increase in the production of acute phase proteins in the liver. PMID:21934689

  20. Spatio-temporal surveillance of water based infectious disease (malaria) in Rawalpindi, Pakistan using geostatistical modeling techniques.

    PubMed

    Ahmad, Sheikh Saeed; Aziz, Neelam; Butt, Amna; Shabbir, Rabia; Erum, Summra

    2015-09-01

    One of the features of medical geography that has made it so useful in health research is statistical spatial analysis, which enables the quantification and qualification of health events. The main objective of this research was to study the spatial distribution patterns of malaria in Rawalpindi district using spatial statistical techniques to identify hot spots and possible risk factors. Spatial statistical analyses were done in ArcGIS, and satellite images for land use classification were processed in ERDAS Imagine. Four hundred and fifty water samples were also collected from the study area to identify the presence or absence of any microbial contamination. The results of this study indicated that malaria incidence varied according to geographical location and eco-climatic conditions, showing significant positive spatial autocorrelation. Hot spots, or locations of clusters, were identified using the Getis-Ord Gi* statistic. Significant clustering of malaria incidence occurred in the rural central part of the study area, including Gujar Khan, Kaller Syedan, and parts of Kahuta and Rawalpindi Tehsil. Ordinary least squares (OLS) regression analysis was conducted to analyse the relationship of risk factors with the disease cases. The relationship of different land cover types with the disease cases indicated that malaria was most related to the agriculture, low vegetation, and water classes. Temporal variation of malaria cases showed a significant positive association with meteorological variables, including average monthly rainfall and temperature. The results of the study further suggest that the water supply and sewage system and the solid waste collection system need serious attention to prevent any outbreak in the study area.
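
    A bare-bones version of the hot spot step: the Getis-Ord Gi* statistic computed from a spatial weights matrix, implemented directly rather than via ArcGIS; the four-area weights and case counts are toy values.

```python
import numpy as np

def getis_ord_gi_star(x, W):
    """Getis-Ord Gi* for each location. `x` is the incidence vector,
    `W` an n x n spatial weights matrix with 1s on the diagonal
    (Gi* includes the focal location in its neighbourhood)."""
    x = np.asarray(x, float)
    n = x.size
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    wx = W @ x                     # weighted local sums
    w_sum = W.sum(axis=1)
    w_sq = (W ** 2).sum(axis=1)
    denom = s * np.sqrt((n * w_sq - w_sum ** 2) / (n - 1))
    return (wx - x_bar * w_sum) / denom

# Toy example: 4 areas on a line, binary contiguity weights (incl. self)
W = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
cases = [3, 5, 40, 38]
print(getis_ord_gi_star(cases, W))  # large z-like values flag hot spots
```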

  1. Examining the Reproducibility of 6 Published Studies in Public Health Services and Systems Research.

    PubMed

    Harris, Jenine K; B Wondmeneh, Sarah; Zhao, Yiqiang; Leider, Jonathon P

    2018-02-23

    Research replication, or repeating a study de novo, is the scientific standard for building evidence and identifying spurious results. While replication is ideal, it is often expensive and time consuming. Reproducibility, or reanalysis of data to verify published findings, is one proposed minimum alternative standard. While a lack of research reproducibility has been identified as a serious and prevalent problem in biomedical research and a few other fields, little work has been done to examine the reproducibility of public health research. We examined reproducibility in 6 studies from the public health services and systems research subfield of public health research. Following the methods described in each of the 6 papers, we computed the descriptive and inferential statistics for each study. We compared our results with the original study results and examined the percentage differences in descriptive statistics and differences in effect size, significance, and precision of inferential statistics. All project work was completed in 2017. We found consistency between original and reproduced results for each paper in at least 1 of the 4 areas examined. However, we also found some inconsistency. We identified incorrect transcription of results and omitting detail about data management and analyses as the primary contributors to the inconsistencies. Increasing reproducibility, or reanalysis of data to verify published results, can improve the quality of science. Researchers, journals, employers, and funders can all play a role in improving the reproducibility of science through several strategies including publishing data and statistical code, using guidelines to write clear and complete methods sections, conducting reproducibility reviews, and incentivizing reproducible science.

  2. Cocaine profiling for strategic intelligence, a cross-border project between France and Switzerland: part II. Validation of the statistical methodology for the profiling of cocaine.

    PubMed

    Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P

    2008-05-20

    Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FID) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralisation of the analyses in one single laboratory is no longer a required condition for comparing samples seized in different countries. This allows collaboration, but also jurisdictional control over data.
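
    The winning combination is easy to state in code: the "N+S" pre-treatment (each peak area divided by that peak's standard deviation over the whole data set) followed by a cosine similarity. The sketch below uses invented peak areas; it illustrates the arithmetic, not the laboratories' software.

```python
import numpy as np

def n_s_pretreatment(areas):
    """'N+S' pre-treatment: divide each peak's area by that peak's
    standard deviation computed over the whole data set."""
    areas = np.asarray(areas, float)
    return areas / areas.std(axis=0)

def cosine(u, v):
    """Cosine correlation between two pre-treated profiles."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Rows = cocaine seizures, columns = chromatographic peak areas
profiles = np.array([[10.0, 2.0, 5.5],
                     [10.5, 2.1, 5.3],
                     [ 4.0, 8.0, 1.2]])
Z = n_s_pretreatment(profiles)
print(cosine(Z[0], Z[1]), cosine(Z[0], Z[2]))  # linked pair scores higher
```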

  3. The effect of berberine on insulin resistance in women with polycystic ovary syndrome: detailed statistical analysis plan (SAP) for a multicenter randomized controlled trial.

    PubMed

    Zhang, Ying; Sun, Jin; Zhang, Yun-Jiao; Chai, Qian-Yun; Zhang, Kang; Ma, Hong-Li; Wu, Xiao-Ke; Liu, Jian-Ping

    2016-10-21

    Although Traditional Chinese Medicine (TCM) has been widely used in clinical settings, a major challenge that remains in TCM is to evaluate its efficacy scientifically. This randomized controlled trial aims to evaluate the efficacy and safety of berberine in the treatment of patients with polycystic ovary syndrome. In order to improve the transparency and research quality of this clinical trial, we prepared this statistical analysis plan (SAP). The trial design, primary and secondary outcomes, and safety outcomes were declared to reduce selection biases in data analysis and result reporting. We specified detailed methods for data management and statistical analyses. Statistics in corresponding tables, listings, and graphs were outlined. The SAP provided more detailed information than trial protocol on data management and statistical analysis methods. Any post hoc analyses could be identified via referring to this SAP, and the possible selection bias and performance bias will be reduced in the trial. This study is registered at ClinicalTrials.gov, NCT01138930 , registered on 7 June 2010.

  4. Web-based documentation system with exchange of DICOM RT for multicenter clinical studies in particle therapy

    NASA Astrophysics Data System (ADS)

    Kessel, Kerstin A.; Bougatf, Nina; Bohn, Christian; Engelmann, Uwe; Oetzel, Dieter; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E.

    2012-02-01

    Conducting clinical studies is rather difficult because of the large variety of voluminous datasets, different documentation styles, and various information systems, especially in radiation oncology. In this paper, we describe our development of a web-based documentation system, with first approaches to automatic statistical analyses, for transnational and multicenter clinical studies in particle therapy. It is possible to have immediate access to all patient information and to exchange, store, process, and visualize text data, all types of DICOM images, especially DICOM RT, and any other multimedia data. Accessing the documentation system and submitting clinical data is possible for internal and external users (e.g. referring physicians from abroad, who are seeking the new technique of particle therapy for their patients). Security and privacy protection are ensured with the encrypted https protocol, client certificates, and an application gateway. Furthermore, all data can be pseudonymized. Integrated into the existing hospital environment, patient data are imported via various interfaces over HL7 messages and DICOM. Several further features replace manual input wherever possible and ensure data quality and completeness. With a form generator, studies can be individually designed to fit specific needs. By including all treated patients (also non-study patients), we gain the possibility of overall large-scale, retrospective analyses. Having recently begun documentation of our first six clinical studies, it has become apparent that the benefits lie in the simplification of research work, better quality of study analyses and, ultimately, the improvement of treatment concepts by evaluating the effectiveness of particle therapy.

  5. Differences between Subjective Balanced Occlusion and Measurements Reported With T-Scan III

    PubMed Central

    Lila-Krasniqi, Zana; Shala, Kujtim; Krasniqi, Teuta Pustina; Bicaj, Teuta; Ahmedi, Enis; Dula, Linda; Dragusha, Arlinda Tmava; Guguvcevski, Ljuben

    2017-01-01

    BACKGROUND: The aetiology of temporomandibular disorder (TMD) is multifactorial, and numerous studies have indicated that occlusion may be of great importance in its pathogenesis. AIM: The aim of this study was to determine whether any direct relationship exists between balanced occlusion and TMD and to evaluate the differences between subjectively reported balanced occlusion and measurements recorded with the T-Scan III electronic system. MATERIAL AND METHODS: A total of 54 subjects, selected based on anamnesis, were divided into three groups; all responded to the Fonseca questionnaire, and their clinical measurements were analysed with the T-Scan III electronic system. The first study group comprised participants with fixed dentures with prosthetic ceramic restorations. The second study group comprised symptomatic participants with TMD. The third, control, group comprised healthy participants with full-arch dentition who completed a subjective questionnaire documenting the absence of jaw pain, joint noise and locking, and who had no history of TMD. Occlusal balance reported subjectively through the Fonseca questionnaire was compared with occlusion analysed with the T-Scan III electronic system. RESULTS: Attributive data were expressed as structural percentages, and differences at P < 0.05 were considered significant. When the attributive data on subjectively reported occlusal balance were compared with the measurements analysed with the T-Scan III electronic system, a significant difference (P < 0.001) was found in all three groups. CONCLUSION: There were statistically significant differences in balanced occlusion in all three groups, and the subjective data did not agree with the measurements recorded with the T-Scan III electronic device. PMID:28932311

  6. Biomechanical analysis using Kinovea for sports application

    NASA Astrophysics Data System (ADS)

    Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin

    2018-04-01

    This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer subject was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded in the sagittal plane using an established infrared motion capture system (Hawk–Cortex) and a HD VideoCam. The capture was repeated 5 times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (an open-source software) and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results obtained (drop jump pattern) using the HD VideoCam–Kinovea are close to the results obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, supporting the repeatability of the protocol and the reliability of the results. It can be concluded that the integration of HD VideoCam–Kinovea has the potential to become a reliable motion capture and analysis system; moreover, it is low cost, portable and easy to use. The current study and its findings contribute significant knowledge pertaining to motion capture and analysis, drop jump movement and HD VideoCam–Kinovea integration.
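
    The quantity being validated, a knee relative angle from digitized 2-D marker positions, reduces to a dot-product computation. A minimal sketch with hypothetical pixel coordinates follows (Kinovea reports such angles internally; this is not its code).

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Knee relative angle (degrees) from three 2-D marker positions
    in the sagittal plane, via the dot product of the two segments."""
    hip, knee, ankle = (np.asarray(p, float) for p in (hip, knee, ankle))
    thigh = hip - knee
    shank = ankle - knee
    cos_a = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical marker coordinates (pixels) digitized from one video frame
print(joint_angle((412, 220), (430, 340), (402, 470)))
```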

  7. Further characterisation of the functional neuroanatomy associated with prosodic emotion decoding.

    PubMed

    Mitchell, Rachel L C

    2013-06-01

    Current models of prosodic emotion comprehension propose a three-stage process mediated by temporal lobe auditory regions through to inferior and orbitofrontal regions. Cumulative evidence suggests that its mediation may be more flexible, however, with a facility to respond in a graded manner based on the need for executive control. The location of this fine-tuning system is unclear, as is its similarity to the cognitive control system. In the current study, the need for executive control was manipulated in a block-design functional MRI study by systematically altering the proportion of incongruent trials across time, i.e., trials for which participants identified prosodic emotions in the face of conflicting lexico-semantic emotion cues. Resultant Blood Oxygenation Level Dependent contrast data were analysed according to standard procedures using Statistical Parametric Mapping v8 (Ashburner et al., 2009). In the parametric analyses, superior (medial) frontal gyrus activity increased linearly with increased need for executive control. In the separate analyses of each level of incongruity, results suggested that the baseline prosodic emotion comprehension system was sufficient to deal with low proportions of incongruent trials, whereas a more widespread frontal lobe network was required for higher proportions. These results suggest that an executive control system for prosodic emotion comprehension exists which has the capability to recruit the superior (medial) frontal gyrus in a graded manner and other frontal regions once demand exceeds a certain threshold. The need to revise current models of prosodic emotion comprehension and add a fourth processing stage is discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Quasi-experimental study designs series-paper 10: synthesizing evidence for effects collected from quasi-experimental studies presents surmountable challenges.

    PubMed

    Becker, Betsy Jane; Aloe, Ariel M; Duvendack, Maren; Stanley, T D; Valentine, Jeffrey C; Fretheim, Atle; Tugwell, Peter

    2017-09-01

    To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences. Copyright © 2017 Elsevier Inc. All rights reserved.
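
    A meta-regression of QE effect sizes on design features can be sketched with inverse-variance weighted least squares; the effect sizes, variances and moderators below are illustrative assumptions, not the authors' model or data:

      import numpy as np
      import statsmodels.api as sm

      # Illustrative inputs: effect size d_i, its variance v_i, design moderators
      d = np.array([0.42, 0.18, 0.35, 0.10, 0.27, 0.51])
      v = np.array([0.020, 0.015, 0.030, 0.010, 0.025, 0.040])
      X = np.column_stack([
          [1, 0, 1, 0, 1, 1],   # 1 = QE used a matched comparison group
          [0, 1, 1, 0, 0, 1],   # 1 = statistical controls for confounders
      ])
      X = sm.add_constant(X)

      # Fixed-effect meta-regression: weight each study by 1/v_i
      fit = sm.WLS(d, X, weights=1.0 / v).fit()
      print(fit.summary())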

  9. Accounting for Multiple Births in Neonatal and Perinatal Trials: Systematic Review and Case Study

    PubMed Central

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-01-01

    Objectives To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births. To explore the sensitivity of an actual trial to several analytic approaches to multiples. Methods A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The NO CLD trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using non-clustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. Results In the systematic review, most studies did not describe the randomization of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (p<0.01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. Conclusions The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. PMID:19969305

  10. Accounting for multiple births in neonatal and perinatal trials: systematic review and case study.

    PubMed

    Hibbs, Anna Maria; Black, Dennis; Palermo, Lisa; Cnaan, Avital; Luan, Xianqun; Truog, William E; Walsh, Michele C; Ballard, Roberta A

    2010-02-01

    To determine the prevalence in the neonatal literature of statistical approaches accounting for the unique clustering patterns of multiple births and to explore the sensitivity of an actual trial to several analytic approaches to multiples. A systematic review of recent perinatal trials assessed the prevalence of studies accounting for clustering of multiples. The Nitric Oxide to Prevent Chronic Lung Disease (NO CLD) trial served as a case study of the sensitivity of the outcome to several statistical strategies. We calculated odds ratios using nonclustered (logistic regression) and clustered (generalized estimating equations, multiple outputation) analyses. In the systematic review, most studies did not describe the random assignment of twins and did not account for clustering. Of those studies that did, exclusion of multiples and generalized estimating equations were the most common strategies. The NO CLD study included 84 infants with a sibling enrolled in the study. Multiples were more likely than singletons to be white and were born to older mothers (P < .01). Analyses that accounted for clustering were statistically significant; analyses assuming independence were not. The statistical approach to multiples can influence the odds ratio and width of confidence intervals, thereby affecting the interpretation of a study outcome. A minority of perinatal studies address this issue. Copyright 2010 Mosby, Inc. All rights reserved.
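
    For clustered binary outcomes such as siblings from a multiple birth, a generalized estimating equation with an exchangeable working correlation is one of the clustered approaches named above (multiple outputation is not shown). A sketch with statsmodels, using hypothetical column names and data:

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical trial data: one row per infant, clustered by mother
      df = pd.DataFrame({
          "outcome":   [1, 1, 0, 0, 1, 0, 1, 0],
          "treated":   [1, 1, 0, 0, 1, 1, 0, 0],
          "mother_id": [1, 1, 2, 2, 3, 4, 5, 5],  # twins share an id
      })

      # Naive logistic regression ignores the twin clustering
      naive = smf.logit("outcome ~ treated", data=df).fit(disp=0)

      # GEE with an exchangeable working correlation accounts for it
      gee = smf.gee("outcome ~ treated", groups="mother_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable()).fit()
      print(naive.bse, gee.bse)  # compare standard errors

    As the abstract notes, the clustered standard errors (and hence confidence-interval widths) can differ enough from the naive ones to change the interpretation of the trial.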

  11. Analysing the teleconnection systems affecting the climate of the Carpathian Basin

    NASA Astrophysics Data System (ADS)

    Kristóf, Erzsébet; Bartholy, Judit; Pongrácz, Rita

    2017-04-01

    Nowadays, the increase of the global average near-surface air temperature is unequivocal. Atmospheric low-frequency variabilities have substantial impacts on climate variables such as air temperature and precipitation. Therefore, assessing their effects is essential to improve global and regional climate model simulations for the 21st century. The North Atlantic Oscillation (NAO) is one of the best-known atmospheric teleconnection patterns affecting the Carpathian Basin in Central Europe. Besides NAO, we aim to analyse other interannual-to-decadal teleconnection patterns that might have significant impacts on the Carpathian Basin, namely the East Atlantic/West Russia pattern, the Scandinavian pattern, the Mediterranean Oscillation, and the North Sea–Caspian Pattern. For this purpose, primarily the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-20C atmospheric reanalysis dataset and multivariate statistical methods are used. The indices of each teleconnection pattern and their correlations with temperature and precipitation will be calculated for the period 1961-1990. On the basis of these data, the long-range (i.e., seasonal and/or annual scale) forecast ability is evaluated first. Then we aim to calculate the same indices of the relevant teleconnection patterns for the historical and future simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) models and compare them against each other using statistical methods. Our ultimate goal is to examine all available CMIP5 models and evaluate their abilities to reproduce the selected teleconnection systems. Thus, climate predictions for the 21st century for the Carpathian Basin may be improved using the best-performing models among all CMIP5 model simulations.

  12. Ethnicity and mortality from systemic lupus erythematosus in the US.

    PubMed

    Krishnan, E; Hubert, H B

    2006-11-01

    To study ethnic differences in mortality from systemic lupus erythematosus (lupus) in two large, population-based datasets. We analysed the national death data (1979-98) from the National Center for Health Statistics (Hyattsville, Maryland, USA) and hospitalisation data (1993-2002) from the Nationwide Inpatient Sample (NIS), the largest hospitalisation database in the US. The overall, unadjusted lupus mortality in the National Center for Health Statistics data was 4.6 per million, whereas the proportion of in-hospital mortality from the NIS was 2.9%. African-Americans had a disproportionately higher mortality risk than Caucasians (all-cause mortality relative risk adjusted for age = 1.24 (women), 1.36 (men); lupus mortality relative risk = 3.91 (women), 2.40 (men)). Excess risk was found among in-hospital deaths (odds ratio adjusted for age = 1.4 (women), 1.3 (men)). Lupus death rates increased overall from 1979 to 1998 (p<0.001). The proportional increase was greatest among African-Americans. Among Caucasian men, death rates declined significantly (p<0.001), but rates did not change substantially for African-American men. The African-American:Caucasian mortality ratio rose with time among men, but there was little change among women. In analyses of the NIS data adjusted for age, the in-hospital mortality risk decreased with time among Caucasian women (p<0.001). African-Americans with lupus have a 2-3-fold higher lupus mortality risk than Caucasians. The magnitude of the risk disparity is disproportionately higher than the disparity in all-cause mortality. A lupus-specific biological factor, as opposed to socioeconomic and access-to-care factors, may be responsible for this phenomenon.

  13. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling

    PubMed Central

    Wood, John

    2017-01-01

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
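
    The mixture-modelling step can be illustrated with scikit-learn: fit Gaussian mixtures with varying numbers of components to the distribution of study-level power estimates and select by BIC. A sketch on synthetic data, not the authors' code or dataset:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Synthetic power estimates: a low-power and a high-power subgroup
      power = np.concatenate([rng.beta(2, 8, 400), rng.beta(8, 2, 330)]).reshape(-1, 1)

      fits = {k: GaussianMixture(n_components=k, random_state=0).fit(power)
              for k in range(1, 5)}
      bic = {k: m.bic(power) for k, m in fits.items()}
      best = min(bic, key=bic.get)
      print(bic, "-> best number of components:", best)
      print("component means:", fits[best].means_.ravel())

    A single summary statistic (the median) hides exactly this kind of subcomponent structure, which is the point the reanalysis makes.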

  14. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.
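
    The RFT family-wise threshold for a smooth 2-D Gaussian field can be approximated by solving for the height u at which the expected Euler characteristic of the suprathreshold set equals alpha. A sketch keeping only the 0-D and 2-D terms of the EC expansion; the resel count and pixel count are assumed inputs, not values from the paper:

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import norm

      def expected_ec_2d(u, resels):
          """Approximate expected Euler characteristic above height u (2D Gaussian field)."""
          rho0 = norm.sf(u)  # 0-D term (one connected search region)
          rho2 = (4 * np.log(2)) / (2 * np.pi) ** 1.5 * u * np.exp(-u**2 / 2)
          return rho0 + resels * rho2

      resels = 50.0  # assumed smoothness-normalised search-area size
      u_rft = brentq(lambda u: expected_ec_2d(u, resels) - 0.05, 2.0, 6.0)

      # Bonferroni over an assumed 10 000 pixels, for comparison
      u_bonf = norm.isf(0.05 / 10_000)
      print(f"RFT threshold: {u_rft:.2f}, Bonferroni: {u_bonf:.2f}")

    The RFT threshold relaxes as smoothness increases (fewer resels), which is the behaviour the paper contrasts with the fixed, overly conservative Bonferroni bound.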

  15. Training in metabolomics research. II. Processing and statistical analysis of metabolomics data, metabolite identification, pathway analysis, applications of metabolomics and its future

    PubMed Central

    Barnes, Stephen; Benton, H. Paul; Casazza, Krista; Cooper, Sara; Cui, Xiangqin; Du, Xiuxia; Engler, Jeffrey; Kabarowski, Janusz H.; Li, Shuzhao; Pathmasiri, Wimal; Prasain, Jeevan K.; Renfrow, Matthew B.; Tiwari, Hemant K.

    2017-01-01

    Metabolomics, a systems biology discipline representing analysis of known and unknown pathways of metabolism, has grown tremendously over the past 20 years. Because of its comprehensive nature, metabolomics requires careful consideration of the question(s) being asked, the scale needed to answer the question(s), collection and storage of the sample specimens, methods for extraction of the metabolites from biological matrices, the analytical method(s) to be employed and the quality control of the analyses, how collected data are correlated, the statistical methods to determine metabolites undergoing significant change, putative identification of metabolites, and the use of stable isotopes to aid in verifying metabolite identity and establishing pathway connections and fluxes. This second part of a comprehensive description of the methods of metabolomics focuses on data analysis, emerging methods in metabolomics and the future of this discipline. PMID:28239968

  16. Keywords and Co-Occurrence Patterns in the Voynich Manuscript: An Information-Theoretic Analysis

    PubMed Central

    Montemurro, Marcelo A.; Zanette, Damián H.

    2013-01-01

    The Voynich manuscript has so far remained a mystery for linguists and cryptologists. While the text, written on medieval parchment using an unknown script system, shows basic statistical patterns that bear resemblance to those from real languages, some features suggested to some researchers that the manuscript was a forgery intended as a hoax. Here we analyse the long-range structure of the manuscript using methods from information theory. We show that the Voynich manuscript presents a complex organization in the distribution of words that is compatible with that found in real language sequences. We are also able to extract some of the most significant semantic word networks in the text. These results, together with some previously known statistical features of the Voynich manuscript, give support to the presence of a genuine message inside the book. PMID:23805215
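
    Information-theoretic keyword extraction of this kind can be sketched by scoring each word on how unevenly it is distributed across sections of the text, via the entropy of its section counts. This is a simplified illustration in the spirit of the method, not the authors' exact estimator, and the corpus path is a placeholder:

      import numpy as np
      from collections import Counter

      def keyword_scores(words, n_parts=8):
          """Score words by how non-uniformly they spread over text sections."""
          parts = np.array_split(np.asarray(words), n_parts)
          counts = {w: np.zeros(n_parts) for w in set(words)}
          for i, part in enumerate(parts):
              for w, c in Counter(part).items():
                  counts[w][i] = c
          scores = {}
          for w, c in counts.items():
              if c.sum() < 10:      # ignore rare words
                  continue
              p = c / c.sum()
              h = -np.sum(p[p > 0] * np.log2(p[p > 0]))
              scores[w] = np.log2(n_parts) - h  # 0 = uniform, high = clustered
          return sorted(scores.items(), key=lambda kv: -kv[1])

      text = open("corpus.txt").read().lower().split()  # any tokenised text
      print(keyword_scores(text)[:10])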

  17. Earth system feedback statistically extracted from the Indian Ocean deep-sea sediments recording Eocene hyperthermals.

    PubMed

    Yasukawa, Kazutaka; Nakamura, Kentaro; Fujinaga, Koichiro; Ikehara, Minoru; Kato, Yasuhiro

    2017-09-12

    Multiple transient global warming events occurred during the early Palaeogene. Although these events, called hyperthermals, have been reported from around the globe, geologic records from the Indian Ocean are limited. In addition, the recovery processes from relatively modest hyperthermals are less constrained than those from the severest and best-studied hothouse event, the Palaeocene-Eocene Thermal Maximum. In this study, we constructed a new, high-resolution geochemical dataset of deep-sea sediments that clearly record multiple Eocene hyperthermals in the Indian Ocean. We then statistically analysed the high-dimensional data matrix and extracted independent components corresponding to the biogeochemical responses to the hyperthermals. The productivity feedback commonly controls and efficiently sequesters the excess carbon in the recovery phases of the hyperthermals via an enhanced biological pump, regardless of the magnitude of the events. Meanwhile, this negative feedback is independent of the nannoplankton assemblage changes generally recognised in relatively large environmental perturbations.
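
    Extracting independent components from a samples-by-elements geochemical matrix can be sketched with scikit-learn's FastICA; the matrix shape and preprocessing below are assumptions for illustration, not the study's pipeline:

      import numpy as np
      from sklearn.decomposition import FastICA

      # Hypothetical matrix: rows = sediment samples (down-core), cols = element abundances
      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 20))    # stand-in for the geochemical dataset

      # Standardise each element, then unmix into a few independent components
      X = (X - X.mean(axis=0)) / X.std(axis=0)
      ica = FastICA(n_components=4, random_state=0, max_iter=1000)
      S = ica.fit_transform(X)          # component scores down-core
      print(S.shape, ica.mixing_.shape) # (samples, 4), (elements, 4)

    Each column of the mixing matrix gives the element loadings of one component, which is how a component can be read as a biogeochemical response (e.g., a productivity signal).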

  18. Identification of the isomers using principal component analysis (PCA) method

    NASA Astrophysics Data System (ADS)

    Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur

    2016-03-01

    In this work, we have carried out a detailed statistical analysis of experimental mass spectra from xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as the energy source for the ionisation processes. The collected data were analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method in distinguishing isomers that cannot be identified using conventional mass analysis of the dissociative ionisation processes on these molecules. PCA results depending on the laser pulse energy and the background pressure in the spectrometer are presented in this work.
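
    A minimal PCA sketch for such spectra with scikit-learn; the spectra array and labels are hypothetical stand-ins for the TOF-MS data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # Hypothetical data: rows = mass spectra of xylene isomers, cols = m/z bins
      rng = np.random.default_rng(2)
      spectra = rng.random((60, 500))
      labels = np.repeat(["ortho", "meta", "para"], 20)

      scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
      for iso in np.unique(labels):
          # Inspect group centroids in PC space (real spectra would cluster by isomer)
          print(iso, scores[labels == iso].mean(axis=0))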

  19. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations, indexed to concurrent daily mean flows at continuous-record stations, during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The methodology for developing the regression equations used to predict the statistics was originally developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R²) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R²) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics as well as the limitations of the statistics and of the equations used to predict them. Caution is indicated in using the predicted statistics for small drainage areas. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and for evaluation of water-resources availability.

  20. Technical Basis Document: A Statistical Basis for Interpreting Urinary Excretion of Plutonium Based on Accelerator Mass Spectrometry (AMS) for Selected Atoll Populations in the Marshall Islands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogen, K; Hamilton, T F; Brown, T A

    2007-05-01

    We have developed refined statistical and modeling techniques to assess low-level uptake and urinary excretion of plutonium from different population groups in the northern Marshall Islands. Urinary excretion rates of plutonium from the resident population on Enewetak Atoll and from resettlement workers living on Rongelap Atoll range from <1 to 8 µBq per day and are well below action levels established under the latest Department of Energy regulation (10 CFR 835) in the United States for in vitro bioassay monitoring of ²³⁹Pu. However, our statistical analyses show that urinary excretion of plutonium-239 (²³⁹Pu) from both cohort groups is significantly positively associated with volunteer age, especially for the resident population living on Enewetak Atoll. Urinary excretion of ²³⁹Pu from the Enewetak cohort was also found to be positively associated with estimates of cumulative exposure to worldwide fallout. Consequently, the age-related trends in urinary excretion of plutonium from Marshallese populations can be described by either a long-term component from residual systemic burdens acquired from previous exposures to worldwide fallout, or a prompt (and eventual long-term) component acquired from low-level systemic intakes of plutonium associated with resettlement of the northern Marshall Islands, or some combination of both.

  1. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of the statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design and failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
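
    The core PFA idea, propagating uncertainty about analysis parameters and model accuracy through an engineering failure model to obtain a failure probability, can be sketched with a simple Monte Carlo stress-strength example; the distributions and parameter values are illustrative assumptions, not values from the report:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Uncertain analysis parameters: applied stress and material strength (MPa)
      stress = rng.normal(loc=300, scale=30, size=n)
      strength = rng.lognormal(mean=np.log(420), sigma=0.08, size=n)

      # Model-accuracy factor: multiplicative uncertainty on the stress model
      model_error = rng.lognormal(mean=0.0, sigma=0.05, size=n)

      failures = stress * model_error > strength
      p_fail = failures.mean()
      print(f"estimated failure probability: {p_fail:.2e} "
            f"(+/- {1.96 * np.sqrt(p_fail * (1 - p_fail) / n):.1e})")

    In the full methodology this prior failure-probability distribution would then be updated with any available test or flight experience.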

  2. Influence of bisphosphonates on alveolar bone loss around osseointegrated implants.

    PubMed

    Zahid, Talal M; Wang, Bing-Yan; Cohen, Robert E

    2011-06-01

    The relationship between bisphosphonates (BP) and dental implant failure has not been fully elucidated. The purpose of this retrospective radiographic study was to examine whether patients who take BP are at greater risk of implant failure than patients not using those agents. Treatment records of 362 consecutively treated patients receiving endosseous dental implants were reviewed. The patient population consisted of 227 women and 135 men with a mean age of 56 years (range: 17-87 years), treated in the University at Buffalo Postgraduate Clinic from 1997 to 2008. Demographic information collected included age, gender and smoking status, as well as systemic conditions and medication use. Implant characteristics reviewed included system, date of placement, date of follow-up radiographs, surgical complications, number of exposed threads, and implant failure. The relationship between BP and implant failure was analyzed using generalized estimating equation (GEE) analysis. Twenty-six patients using BP received a total of 51 dental implants. Three implants failed, yielding success rates of 94.11% and 88.46% for the implant-based and subject-based analyses, respectively. Using the GEE statistical method, we found a statistically significant (P = .001; OR = 3.25) association between the use of BP and implant thread exposure. None of the other variables studied was statistically associated with implant failure or thread exposure. In conclusion, patients taking BP may be at higher risk of implant thread exposure.

  3. An automated, broad-based, near real-time public health surveillance system using presentations to hospital Emergency Departments in New South Wales, Australia.

    PubMed

    Muscatello, David J; Churches, Tim; Kaldor, Jill; Zheng, Wei; Chiu, Clayton; Correll, Patricia; Jorm, Louisa

    2005-12-22

    In a climate of concern over bioterrorism threats and emergent diseases, public health authorities are trialling more timely surveillance systems. The 2003 Rugby World Cup (RWC) provided an opportunity to test the viability of a near real-time syndromic surveillance system in metropolitan Sydney, Australia. We describe the development and early results of this largely automated system that used data routinely collected in Emergency Departments (EDs). Twelve of 49 EDs in the Sydney metropolitan area automatically transmitted surveillance data from their existing information systems to a central database in near real-time. Information captured for each ED visit included patient demographic details, presenting problem and nursing assessment entered as free text at triage time, physician-assigned provisional diagnosis codes, and status at departure from the ED. Both diagnoses from the EDs and triage text were used to assign syndrome categories. The text information was automatically classified into one or more of 26 syndrome categories using automated "naïve Bayes" text categorisation techniques. Automated processes were used to analyse both diagnosis and free text-based syndrome data and to produce web-based statistical summaries for daily review. An adjusted cumulative sum (cusum) was used to assess the statistical significance of trends. During the RWC the system did not identify any major public health threats associated with the tournament, mass gatherings or the influx of visitors. This was consistent with evidence from other sources, although two known outbreaks were already in progress before the tournament. The limited baseline available early in the monitoring period prevented the system from automatically identifying these ongoing outbreaks. Data capture was invisible to clinical staff in EDs and did not add to their workload. We have demonstrated the feasibility and potential utility of syndromic surveillance using routinely collected data from ED information systems. Key features of our system are its nil impact on clinical staff, and its use of statistical methods to assign syndrome categories based on clinical free-text information. The system is ongoing, and has expanded to cover 30 EDs. Results of formal evaluations of both the technical efficiency and the public health impacts of the system will be described subsequently.
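
    The trend-detection step, a cumulative sum over daily syndrome counts, can be sketched as follows; standardising against a short moving baseline is a common simplification and not necessarily the exact adjustment used in the NSW system:

      import numpy as np

      def cusum_alarms(counts, baseline_days=28, k=0.5, h=4.0):
          """Flag days whose one-sided CUSUM of standardised counts exceeds h."""
          counts = np.asarray(counts, dtype=float)
          c, alarms = 0.0, []
          for t in range(baseline_days, len(counts)):
              base = counts[t - baseline_days:t]
              mu, sd = base.mean(), max(base.std(ddof=1), 1.0)
              z = (counts[t] - mu) / sd
              c = max(0.0, c + z - k)   # accumulate excess above k sigmas
              if c > h:
                  alarms.append(t)
                  c = 0.0               # reset after signalling
          return alarms

      daily = np.r_[np.random.default_rng(3).poisson(20, 60), [35, 42, 50, 55]]
      print(cusum_alarms(daily))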

  4. Within What Distance Does “Greenness” Best Predict Physical Health? A Systematic Review of Articles with GIS Buffer Analyses across the Lifespan

    PubMed Central

    2017-01-01

    Is the amount of “greenness” within a 250-m, 500-m, 1000-m or a 2000-m buffer surrounding a person’s home a good predictor of their physical health? The evidence is inconclusive. We reviewed Web of Science articles that used geographic information system buffer analyses to identify trends between physical health, greenness, and distance within which greenness is measured. Our inclusion criteria were: (1) use of buffers to estimate residential greenness; (2) statistical analyses that calculated significance of the greenness-physical health relationship; and (3) peer-reviewed articles published in English between 2007 and 2017. To capture multiple findings from a single article, we selected our unit of inquiry as the analysis, not the article. Our final sample included 260 analyses in 47 articles. All aspects of the review were in accordance with PRISMA guidelines. Analyses were independently judged as more, less, or least likely to be biased based on the inclusion of objective health measures and income/education controls. We found evidence that larger buffer sizes, up to 2000 m, better predicted physical health than smaller ones. We recommend that future analyses use nested rather than overlapping buffers to evaluate to what extent greenness not immediately around a person’s home (i.e., within 1000–2000 m) predicts physical health. PMID:28644420

  5. Survey of the Methods and Reporting Practices in Published Meta-analyses of Test Performance: 1987 to 2009

    ERIC Educational Resources Information Center

    Dahabreh, Issa J.; Chung, Mei; Kitsios, Georgios D.; Terasawa, Teruhiko; Raman, Gowri; Tatsioni, Athina; Tobar, Annette; Lau, Joseph; Trikalinos, Thomas A.; Schmid, Christopher H.

    2013-01-01

    We performed a survey of meta-analyses of test performance to describe the evolution in their methods and reporting. Studies were identified through MEDLINE (1966-2009), reference lists, and relevant reviews. We extracted information on clinical topics, literature review methods, quality assessment, and statistical analyses. We reviewed 760…

  6. Aerocapture Performance Analysis for a Neptune-Triton Exploration Mission

    NASA Technical Reports Server (NTRS)

    Starr, Brett R.; Westhelle, Carlos H.; Masciarelli, James P.

    2004-01-01

    A systems analysis has been conducted for a Neptune-Triton exploration mission in which aerocapture is used to capture a spacecraft at Neptune. Aerocapture uses aerodynamic drag instead of propulsion to decelerate from the interplanetary approach trajectory to a captured orbit during a single pass through the atmosphere. After capture, propulsion is used to move the spacecraft from the initial captured orbit to the desired science orbit. A preliminary assessment identified that a spacecraft with a lift-to-drag ratio of 0.8 was required for aerocapture. Performance analyses of the 0.8 L/D vehicle were performed using a high-fidelity flight simulation within a Monte Carlo executive to determine mission success statistics. The simulation was the Program to Optimize Simulated Trajectories (POST), modified to include Neptune-specific atmosphere and planet models, spacecraft aerodynamic characteristics, and interplanetary trajectory models. To these were added autonomous guidance and pseudo flight controller models. The Monte Carlo analyses incorporated approach trajectory delivery errors, aerodynamic characteristics uncertainties, and atmospheric density variations. Monte Carlo analyses were performed for a reference set of uncertainties and for sets of uncertainties modified to produce increased and reduced atmospheric variability. For the reference uncertainties, the 0.8 L/D flat-bottom ellipsled vehicle achieves 100% successful capture and has a 99.87% probability of attaining the science orbit with a 360 m/s ΔV budget for apoapsis and periapsis adjustment. Monte Carlo analyses were also performed for a guidance system that modulates both bank angle and angle of attack with the reference set of uncertainties. The angle-of-attack and bank modulation guidance system reduces the 99.87 percentile ΔV by 173 m/s (48%), to 187 m/s, for the reference set of uncertainties.

  7. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    NASA Technical Reports Server (NTRS)

    Davis, B. J.; Feiveson, A. H.

    1975-01-01

    Results of CITARS data processing are presented in raw form. Tables of descriptive statistics are given, along with descriptions and results of inferential analyses. The inferential results are organized by the questions that CITARS was designed to answer.

  8. One-dimensional statistical parametric mapping in Python.

    PubMed

    Pataky, Todd C

    2012-01-01

    Statistical parametric mapping (SPM) is a topological methodology for detecting field changes in smooth n-dimensional continua. Many classes of biomechanical data are smooth and contained within discrete bounds and as such are well suited to SPM analyses. The current paper accompanies release of 'SPM1D', a free and open-source Python package for conducting SPM analyses on a set of registered 1D curves. Three example applications are presented: (i) kinematics, (ii) ground reaction forces and (iii) contact pressure distribution in probabilistic finite element modelling. In addition to offering a high-level interface to a variety of common statistical tests like t tests, regression and ANOVA, SPM1D also emphasises fundamental concepts of SPM theory through stand-alone example scripts. Source code and documentation are available at: www.tpataky.net/spm1d/.
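
    A minimal usage sketch, assuming the current spm1d two-sample t-test API (curves as rows, time nodes as columns; the data here are synthetic):

      import numpy as np
      import spm1d

      # Two groups of registered 1D curves, e.g. joint-angle time series (deg)
      rng = np.random.default_rng(4)
      YA = rng.normal(40, 5, size=(10, 101))  # 10 curves, 101 time nodes
      YB = rng.normal(43, 5, size=(12, 101))

      t = spm1d.stats.ttest2(YA, YB)          # SPM{t} across the whole curve
      ti = t.inference(alpha=0.05, two_tailed=True)
      print(ti.h0reject, ti.clusters)         # suprathreshold clusters, if any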

  9. Football goal distributions and extremal statistics

    NASA Astrophysics Data System (ADS)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions, which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores; here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.

  10. Logistic regression applied to natural hazards: rare event logistic regression with replications

    NASA Astrophysics Data System (ADS)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce some concepts of Monte Carlo simulation into rare event logistic regression. This technique, called rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods and overcomes some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
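
    The replication idea, refitting a logistic model on many random resamples and keeping only predictors that are consistently significant, can be sketched as follows; the resampling scheme, threshold and variable names are illustrative assumptions, not the authors' exact procedure:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n, n_reps = 2000, 500

      # Hypothetical inventory: rare landslide events and candidate controls
      X = rng.normal(size=(n, 3))                    # e.g. slope, wetness, land use
      logit_p = -4.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] # third variable is pure noise
      y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

      selected = np.zeros(3)
      for _ in range(n_reps):
          idx = rng.choice(n, size=n, replace=True)  # bootstrap replication
          fit = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
          selected += fit.pvalues[1:] < 0.05

      print("selection frequency per predictor:", selected / n_reps)

    A predictor that is significant in only a small fraction of replications is exactly the kind of sample-dependent result the single-fit approach can mistake for a controlling factor.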

  11. Inappropriate Fiddling with Statistical Analyses to Obtain a Desirable P-value: Tests to Detect its Presence in Published Literature

    PubMed Central

    Gadbury, Gary L.; Allison, David B.

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a “near significant p-value” to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called “fiddling”) in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000. PMID:23056287

  12. Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature.

    PubMed

    Gadbury, Gary L; Allison, David B

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a "near significant p-value" to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called "fiddling") in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000.
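
    One simple way to probe for such manipulation (a caliper-style check, related in spirit to but simpler than the authors' tests) is to compare the number of p-values just below 0.05 with the number just above, which should be roughly balanced if the p-value distribution is smooth:

      import numpy as np
      from scipy.stats import binomtest

      def caliper_check(pvals, center=0.05, width=0.005):
          """Compare counts just below vs just above the significance threshold."""
          pvals = np.asarray(pvals)
          below = np.sum((pvals >= center - width) & (pvals < center))
          above = np.sum((pvals >= center) & (pvals < center + width))
          # With no fiddling, a p-value in the caliper is equally likely on each side
          return below, above, binomtest(int(below), int(below + above), 0.5).pvalue

      pvals = np.concatenate([np.random.default_rng(6).uniform(0, 1, 900),
                              np.full(30, 0.049)])  # an excess just under 0.05
      print(caliper_check(pvals))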

  13. Variational Continuous Assimilation of TMI and SSM/I Rain Rates: Impact on GEOS-3 Hurricane Analyses and Forecasts

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; Reale, Oreste

    2003-01-01

    We describe a variational continuous assimilation (VCA) algorithm for assimilating tropical rainfall data using moisture and temperature tendency corrections as the control variable to offset model deficiencies. For rainfall assimilation, model errors are of special concern, since model-predicted precipitation is based on parameterized moist physics, which can have substantial systematic errors. This study examines whether a VCA scheme using the forecast model as a weak constraint offers an effective pathway to precipitation assimilation. The particular scheme we examine employs a '1+1'-dimensional precipitation observation operator based on a 6-h integration of a column model of moist physics from the Goddard Earth Observing System (GEOS) global data assimilation system (DAS). In earlier studies, we tested a simplified version of this scheme and obtained improved monthly-mean analyses and better short-range forecast skills. This paper describes the full implementation of the 1+1D VCA scheme using background and observation error statistics, and examines how it may improve GEOS analyses and forecasts of prominent tropical weather systems such as hurricanes. Parallel assimilation experiments with and without rainfall data for Hurricanes Bonnie and Floyd show that assimilating 6-h TMI and SSM/I surface rain rates leads to more realistic storm features in the analysis, which, in turn, provide better initial conditions for 5-day storm track prediction and precipitation forecasts. These results provide evidence that addressing model deficiencies in moisture tendency may be crucial to making effective use of precipitation information in data assimilation.
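
    Schematically, a weak-constraint variational cost function of this kind, with the tendency correction as the control variable, can be written as below (generic notation, not the paper's exact formulation):

      J(\delta q) = \tfrac{1}{2}\,\delta q^{\mathsf{T}} \mathbf{B}^{-1}\,\delta q
                  + \tfrac{1}{2}\sum_{i}\big[H_i(x_i(\delta q)) - y_i\big]^{\mathsf{T}}
                    \mathbf{R}_i^{-1}\big[H_i(x_i(\delta q)) - y_i\big]

    where \delta q is the moisture/temperature tendency correction, \mathbf{B} its assumed error covariance, y_i the observed 6-h rain rates, H_i the column-model observation operator mapping the corrected model state x_i(\delta q) to a rain rate, and \mathbf{R}_i the observation error covariance.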

  14. A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit

    DOE PAGES

    Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...

    2015-05-17

    In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.

  15. The importance of waterborne disease outbreak surveillance in the United States.

    PubMed

    Craun, Gunther Franz

    2012-01-01

    Analyses of the causes of disease outbreaks associated with contaminated drinking water in the United States have helped inform prevention efforts at the national, state, and local levels. This article describes the changing nature of disease outbreaks in public water systems during 1971-2008 and discusses the importance of a collaborative waterborne outbreak surveillance system established in 1971. Increasing reports of outbreaks throughout the early 1980s emphasized that microbial contaminants remained a health-risk challenge for suppliers of drinking water. Outbreak investigations identified the responsible etiologic agents and deficiencies in the treatment and distribution of drinking water, especially the high risk associated with unfiltered surface water systems. Surveillance information was important in establishing an effective research program that guided government regulations and industry actions to improve drinking water quality. Recent surveillance statistics suggest that prevention efforts based on these research findings have been effective in reducing outbreak risks especially for surface water systems.

  16. Prison Radicalization: The New Extremist Training Grounds?

    DTIC Science & Technology

    2007-09-01

    distributing and collecting survey data, and the data analysis. The analytical methodology includes descriptive and inferential statistical methods, in... statistical analysis of the responses to identify significant correlations and relationships. B. SURVEY DATA COLLECTION To effectively access a...Q18, Q19, Q20, and Q21. Due to the exploratory nature of this small survey, data analyses were confined mostly to descriptive statistics and

  17. Identification of key micro-organisms involved in Douchi fermentation by statistical analysis and their use in an experimental fermentation.

    PubMed

    Chen, C; Xiang, J Y; Hu, W; Xie, Y B; Wang, T J; Cui, J W; Xu, Y; Liu, Z; Xiang, H; Xie, Q

    2015-11-01

    To screen and identify safe micro-organisms used during Douchi fermentation, and verify the feasibility of producing high-quality Douchi using these identified micro-organisms. PCR-denaturing gradient gel electrophoresis (DGGE) and automatic amino-acid analyser were used to investigate the microbial diversity and free amino acids (FAAs) content of 10 commercial Douchi samples. The correlations between microbial communities and FAAs were analysed by statistical analysis. Ten strains with significant positive correlation were identified. Then an experiment on Douchi fermentation by identified strains was carried out, and the nutritional composition in Douchi was analysed. Results showed that FAAs and relative content of isoflavone aglycones in verification Douchi samples were generally higher than those in commercial Douchi samples. Our study indicated that fungi, yeasts, Bacillus and lactic acid bacteria were the key players in Douchi fermentation, and with identified probiotic micro-organisms participating in fermentation, a higher quality Douchi product was produced. This is the first report to analyse and confirm the key micro-organisms during Douchi fermentation by statistical analysis. This work proves fermentation micro-organisms to be the key influencing factor of Douchi quality, and demonstrates the feasibility of fermenting Douchi using identified starter micro-organisms. © 2015 The Society for Applied Microbiology.

  18. Novel public health risk assessment process developed to support syndromic surveillance for the 2012 Olympic and Paralympic Games.

    PubMed

    Smith, Gillian E; Elliot, Alex J; Ibbotson, Sue; Morbey, Roger; Edeghere, Obaghe; Hawker, Jeremy; Catchpole, Mike; Endericks, Tina; Fisher, Paul; McCloskey, Brian

    2017-09-01

    Syndromic surveillance aims to provide early warning and real time estimates of the extent of incidents; and reassurance about lack of impact of mass gatherings. We describe a novel public health risk assessment process to ensure those leading the response to the 2012 Olympic Games were alerted to unusual activity that was of potential public health importance, and not inundated with multiple statistical 'alarms'. Statistical alarms were assessed to identify those which needed to result in 'alerts' as reliably as possible. There was no previously developed method for this. We identified factors that increased our concern about an alarm suggesting that an 'alert' should be made. Between 2 July and 12 September 2012, 350 674 signals were analysed resulting in 4118 statistical alarms. Using the risk assessment process, 122 'alerts' were communicated to Olympic incident directors. Use of a novel risk assessment process enabled the interpretation of large number of statistical alarms in a manageable way for the period of a sustained mass gathering. This risk assessment process guided the prioritization and could be readily adapted to other surveillance systems. The process, which is novel to our knowledge, continues as a legacy of the Games. © Crown copyright 2016.

  19. Study Designs and Statistical Analyses for Biomarker Research

    PubMed Central

    Gosho, Masahiko; Nagashima, Kengo; Sato, Yasunori

    2012-01-01

    Biomarkers are becoming increasingly important for streamlining drug discovery and development. In addition, biomarkers are widely expected to be used as a tool for disease diagnosis, personalized medication, and surrogate endpoints in clinical research. In this paper, we highlight several important aspects related to study design and statistical analysis for clinical research incorporating biomarkers. We describe the typical and current study designs for exploring, detecting, and utilizing biomarkers. Furthermore, we introduce statistical issues such as confounding and multiplicity for statistical tests in biomarker research. PMID:23012528

  20. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.
