Sample records for statistically relevant differences

  1. Clinical relevance vs. statistical significance: Using neck outcomes in patients with temporomandibular disorders as an example.

    PubMed

    Armijo-Olivo, Susan; Warren, Sharon; Fuentes, Jorge; Magee, David J

    2011-12-01

    Statistical significance has been used extensively to evaluate the results of research studies. Nevertheless, it offers only limited information to clinicians. The assessment of clinical relevance can facilitate the translation of research results into clinical practice. The objective of this study was to explore different methods to evaluate the clinical relevance of the results, using as an example a cross-sectional study comparing different neck outcomes between subjects with temporomandibular disorders and healthy controls. Subjects were compared for head and cervical posture, maximal cervical muscle strength, endurance of the cervical flexor and extensor muscles, and electromyographic activity of the cervical flexor muscles during the CranioCervical Flexion Test (CCFT). The evaluation of clinical relevance of the results was performed based on the effect size (ES), minimal important difference (MID), and clinical judgement. The results of this study show that it is possible to have statistical significance without clinical relevance, to have both statistical significance and clinical relevance, to have clinical relevance without statistical significance, or to have neither. The evaluation of clinical relevance in clinical research is crucial to simplify the transfer of knowledge from research into practice. Clinical researchers should present the clinical relevance of their results. Copyright © 2011 Elsevier Ltd. All rights reserved.
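
    A minimal sketch (in Python, with simulated data and an assumed MID value) of how an effect size and a minimal important difference can be reported alongside a significance test:

```python
# Sketch: contrasting statistical significance with clinical relevance using an
# effect size (Cohen's d) and a minimal important difference (MID).
# The data and the MID value below are hypothetical illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tmd = rng.normal(32.0, 8.0, 50)      # e.g., neck flexor endurance (s), TMD group
controls = rng.normal(36.0, 8.0, 50) # healthy controls

t, p = stats.ttest_ind(tmd, controls, equal_var=False)

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((tmd.var(ddof=1) + controls.var(ddof=1)) / 2)
d = (controls.mean() - tmd.mean()) / pooled_sd

MID = 5.0  # assumed minimal important difference in seconds (illustrative only)
mean_diff = controls.mean() - tmd.mean()

print(f"p = {p:.3f}, Cohen's d = {d:.2f}, mean difference = {mean_diff:.1f} s")
print("statistically significant:", p < 0.05)
print("clinically relevant (exceeds MID):", abs(mean_diff) >= MID)
```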

  2. Accuracy of Orthognathic Surgical Outcomes Using 2- and 3-Dimensional Landmarks-The Case for Apples and Oranges?

    PubMed

    Borba, Alexandre Meireles; José da Silva, Everton; Fernandes da Silva, André Luis; Han, Michael D; da Graça Naclério-Homem, Maria; Miloro, Michael

    2018-01-12

    To verify predicted versus obtained surgical movements in 2-dimensional (2D) and 3-dimensional (3D) measurements and compare the equivalence between these methods. A retrospective observational study of bimaxillary orthognathic surgeries was performed. Postoperative cone-beam computed tomographic (CBCT) scans were superimposed on preoperative scans and a lateral cephalometric radiograph was generated from each CBCT scan. After identification of the sella, nasion, and upper central incisor tip landmarks on 2D and 3D images, actual and planned movements were compared by cephalometric measurements. One-sample t test was used to statistically evaluate results, with expected mean discrepancy values ranging from 0 to 2 mm. Equivalence of 2D and 3D values was compared using paired t test. The final sample of 46 cases showed by 2D cephalometry that differences between actual and planned movements in the horizontal axis were statistically relevant for expected means of 0, 0.5, and 2 mm without relevance for expected means of 1 and 1.5 mm; vertical movements were statistically relevant for expected means of 0 and 0.5 mm without relevance for expected means of 1, 1.5, and 2 mm. For 3D cephalometry in the horizontal axis, there were statistically relevant differences for expected means of 0, 1.5, and 2 mm without relevance for expected means of 0.5 and 1 mm; vertical movements showed statistically relevant differences for expected means of 0, 0.5, 1.5 and 2 mm without relevance for the expected mean of 1 mm. Comparison of 2D and 3D values displayed statistical differences for the horizontal and vertical axes. Comparison of 2D and 3D surgical outcome assessments should be performed with caution because there seems to be a difference in acceptable levels of accuracy between these 2 methods of evaluation. Moreover, 3D accuracy studies should no longer rely on a 2-mm level of discrepancy but on a 1-mm level. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
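
    A brief sketch of the testing logic described above, assuming simulated discrepancy data: one-sample t tests against a range of expected mean discrepancies, plus a paired t test comparing the 2D and 3D values:

```python
# Sketch of one-sample t tests of planned-minus-actual discrepancies against
# expected means of 0-2 mm, and a paired t test comparing 2D and 3D values.
# Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
disc_2d = rng.normal(0.8, 1.0, 46)  # hypothetical 2D discrepancies (mm)
disc_3d = rng.normal(1.1, 1.2, 46)  # hypothetical 3D discrepancies (mm)

for mu in (0.0, 0.5, 1.0, 1.5, 2.0):
    t, p = stats.ttest_1samp(disc_2d, popmean=mu)
    print(f"2D vs expected mean {mu:.1f} mm: p = {p:.3f}")

# Equivalence of the two measurement methods compared with a paired t test
t, p = stats.ttest_rel(disc_2d, disc_3d)
print(f"2D vs 3D (paired): p = {p:.3f}")
```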

  3. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    Comparability of topographical data of implant surfaces in the literature is low, and their clinical relevance is often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods predominantly exhibited statistically significant differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and continuing the search for proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  4. [Clinical research IV. Relevancy of the statistical test chosen].

    PubMed

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three groups of tests: a) those in which you want to show differences between groups, or within a group before and after a maneuver; b) those that seek to show the relationship (correlation) between variables; and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (a quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (a binomial variable), then the appropriate statistical test is the χ² test.
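
    A short illustration of the two worked examples in this abstract, using made-up values: an independent-samples t test for a quantitative variable and a χ² test for a binomial variable:

```python
# Sketch of the two worked examples above: a Student t test for a quantitative
# variable (age) between two independent groups, and a chi-squared test for a
# binomial variable (sex). All values are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
age_with_neuro = rng.normal(34, 9, 40)     # SLE patients with neurological disease
age_without_neuro = rng.normal(38, 9, 60)  # SLE patients without neurological disease

t, p_age = stats.ttest_ind(age_with_neuro, age_without_neuro)
print(f"Age difference (independent-samples t test): p = {p_age:.3f}")

# 2x2 table of sex (female/male) by group
table = np.array([[30, 10],   # with neurological disease
                  [48, 12]])  # without neurological disease
chi2, p_sex, dof, _ = stats.chi2_contingency(table)
print(f"Female frequency (chi-squared test): p = {p_sex:.3f}")
```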

  5. A note on generalized Genome Scan Meta-Analysis statistics

    PubMed Central

    Koziol, James A; Feng, Anne C

    2005-01-01

    Background Wise et al. introduced a rank-based statistical technique for meta-analysis of genome scans, the Genome Scan Meta-Analysis (GSMA) method. Levinson et al. recently described two generalizations of the GSMA statistic: (i) a weighted version of the GSMA statistic, so that different studies could be ascribed different weights for analysis; and (ii) an order statistic approach, reflecting the fact that a GSMA statistic can be computed for each chromosomal region or bin width across the various genome scan studies. Results We provide an Edgeworth approximation to the null distribution of the weighted GSMA statistic, examine the limiting distribution of the GSMA statistics under the order statistic formulation, and quantify the influence of the pairwise correlations of the GSMA statistics across different bins on this limiting distribution. We also remark on aggregate criteria and multiple testing for determining significance of GSMA results. Conclusion Theoretical considerations detailed herein can lead to clarification and simplification of testing criteria for generalizations of the GSMA statistic. PMID:15717930

  6. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.

    PubMed

    Potter, Christine E; Wang, Tianlin; Saffran, Jenny R

    2017-04-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.

  7. Second language experience facilitates statistical learning of novel linguistic materials

    PubMed Central

    Potter, Christine E.; Wang, Tianlin; Saffran, Jenny R.

    2016-01-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In the present research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, six months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, while both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. PMID:27988939

  8. A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis

    ERIC Educational Resources Information Center

    Gonzalez, Oscar; MacKinnon, David P.

    2018-01-01

    Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to…

  9. The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques

    ERIC Educational Resources Information Center

    Menil, Violeta C.

    2005-01-01

    In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…

  10. Confirmation of ovarian homogeneity in post-vitellogenic cultured white sturgeon, Acipenser transmontanus.

    PubMed

    Talbott, Mariah J; Servid, Sarah A; Cavinato, Anna G; Van Eenennaam, Joel P; Doroshov, Serge I; Struffenegger, Peter; Webb, Molly A H

    2014-02-01

    Assessing stage of oocyte maturity in female sturgeon by calculating oocyte polarization index (PI) is a necessary tool for both conservation propagation managers and caviar producers to know when to hormonally induce spawning. We tested the assumption that sampling ovarian follicles from one section of one ovary is sufficient for calculating an oocyte PI representative of oocyte maturity for an individual animal. Short-wavelength near-infrared spectroscopy (SW-NIR) scans were performed on three positions per ovary for five fish prior to caviar harvest. Samples of ovarian follicles were subsequently taken from the exact location of the SW-NIR scans for calculation of oocyte PI and follicle diameter. Oocyte PI was statistically different though not biologically relevant within an ovary and between ovaries in four of five fish. Follicle diameter was statistically different but not biologically relevant within an ovary in three of five fish. There were no differences in follicle diameter between ovaries. No statistical differences were observed between SW-NIR spectra collected at different locations within an ovary or between ovaries. These results emphasize the importance of utilizing both oocyte PI measurement and progesterone-induced oocyte maturation assays while deciding when to hormonally induce spawning in sturgeon females.

  11. A toolbox for determining subdiffusive mechanisms

    NASA Astrophysics Data System (ADS)

    Meroz, Yasmine; Sokolov, Igor M.

    2015-04-01

    Subdiffusive processes have become a field of great interest in the last decades, due to mounting experimental evidence of subdiffusive behavior in complex systems, and especially in biological systems. Different physical scenarios leading to subdiffusion differ in the details of the dynamics. These differences are what make it possible to reconstruct the underlying physics theoretically from the results of observations, and they are the topic of this review. We review the main statistical analyses available today to distinguish between these scenarios, categorizing them according to the relevant characteristics. We collect the available tools and statistical tests, presenting them within a broader perspective. We also consider possible complications such as the subordination of subdiffusive mechanisms. Due to the advances in single-particle tracking experiments in recent years, we focus on the relevant case where the available experimental data are scant, at the level of single trajectories.
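
    As one concrete example of such a tool, a sketch of the time-averaged mean squared displacement of a single (simulated) trajectory and a log-log fit of its scaling exponent, where an exponent below 1 would indicate subdiffusion:

```python
# Sketch of the time-averaged mean squared displacement (TA-MSD) of a single
# trajectory and a log-log fit of the scaling exponent alpha. The trajectory is
# simulated ordinary Brownian motion, so alpha should come out near 1.
import numpy as np

def ta_msd(x, max_lag):
    """Time-averaged MSD of a 1D trajectory for lags 1..max_lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0, 1, 5000))   # toy trajectory

lags = np.arange(1, 51)
msd = ta_msd(x, 50)
alpha, log_intercept = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```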

  12. aCGH-MAS: Analysis of aCGH by means of Multiagent System

    PubMed Central

    Benito, Rocío; Bajo, Javier; Rodríguez, Ana Eugenia; Abáigar, María

    2015-01-01

    There are currently different techniques, such as CGH arrays, to study genetic variations in patients. CGH arrays analyze gains and losses in different regions in the chromosome. Regions with gains or losses in pathologies are important for selecting relevant genes or CNVs (copy-number variations) associated with the variations detected within chromosomes. Information corresponding to mutations, genes, proteins, variations, CNVs, and diseases can be found in different databases, and it would be of interest to combine information from these different sources to extract what is relevant. This work proposes a multiagent system to manage the information of aCGH arrays, with the aim of providing an intuitive and extensible system to analyze and interpret the results. The agent roles integrate statistical techniques to select relevant variations, visualization techniques for the interpretation of the final results, and a CBR system to extract relevant information from different sources. PMID:25874203

  13. Pitfalls in the statistical examination and interpretation of the correspondence between physician and patient satisfaction ratings and their relevance for shared decision making research

    PubMed Central

    2011-01-01

    Background The correspondence of satisfaction ratings between physicians and patients can be assessed on different dimensions. One may examine whether they differ between the two groups or focus on measures of association or agreement. The aim of our study was to evaluate methodological difficulties in calculating the correspondence between patient and physician satisfaction ratings and to show the relevance for shared decision making research. Methods We utilised a structured tool for cardiovascular prevention (arriba™) in a pragmatic cluster-randomised controlled trial. Correspondence between patient and physician satisfaction ratings after individual primary care consultations was assessed using the Patient Participation Scale (PPS). We used the Wilcoxon signed-rank test, the marginal homogeneity test, Kendall's tau-b, weighted kappa, percentage of agreement, and the Bland-Altman method to measure differences, associations, and agreement between physicians and patients. Results Statistical measures signal large differences between patient and physician satisfaction ratings with more favourable ratings provided by patients and a low correspondence regardless of group allocation. Closer examination of the raw data revealed a high ceiling effect of satisfaction ratings and only slight disagreement regarding the distributions of differences between physicians' and patients' ratings. Conclusions Traditional statistical measures of association and agreement are not able to capture a clinically relevant appreciation of the physician-patient relationship by both parties in skewed satisfaction ratings. Only the Bland-Altman method for assessing agreement augmented by bar charts of differences was able to indicate this. Trial registration ISRCTN: ISRCT71348772 PMID:21592337
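
    A minimal sketch of the Bland-Altman calculation highlighted above (bias and limits of agreement), applied to simulated ratings with a ceiling effect:

```python
# Sketch of a Bland-Altman assessment of agreement between patient and physician
# satisfaction ratings, as contrasted above with correlation/kappa measures.
# Ratings are simulated with a ceiling effect for illustration.
import numpy as np

rng = np.random.default_rng(4)
patient = np.clip(rng.normal(4.6, 0.5, 200), 1, 5)               # ceiling near the top of a 1-5 scale
physician = np.clip(patient - rng.normal(0.3, 0.4, 200), 1, 5)

diff = patient - physician
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # half-width of the limits of agreement around the bias

print(f"mean difference (bias): {bias:.2f}")
print(f"95% limits of agreement: {bias - loa:.2f} to {bias + loa:.2f}")
# A bar chart / histogram of `diff` (as used in the study) would show how the
# differences are distributed despite the skewed raw ratings.
```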

  14. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple, fast and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source codes of all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
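
    One simple instance of the general idea, assuming simulated data and a random-forest importance score: permuting the outcome labels yields a null distribution from which an empirical p-value can be attached to each feature's relevance score (an illustration of the principle, not the specific procedures evaluated in the paper):

```python
# Sketch: attach p-values to machine-learning relevance scores by comparing each
# feature's importance with a null distribution obtained from permuted labels.
# Data and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n, p = 100, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

def importances(X, y):
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return model.feature_importances_

observed = importances(X, y)

n_perm = 100
null = np.empty((n_perm, p))
for b in range(n_perm):
    null[b] = importances(X, rng.permutation(y))

# Empirical p-value per feature: fraction of permuted importances >= observed
pvals = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
print("smallest p-values:", np.sort(pvals)[:5])
```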

  15. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks the networks fairly with respect to the relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.

  16. Innovations in curriculum design: A multi-disciplinary approach to teaching statistics to undergraduate medical students

    PubMed Central

    Freeman, Jenny V; Collier, Steve; Staniforth, David; Smith, Kevin J

    2008-01-01

    Background Statistics is relevant to students and practitioners in medicine and health sciences and is increasingly taught as part of the medical curriculum. However, it is common for students to dislike and under-perform in statistics. We sought to address these issues by redesigning the way that statistics is taught. Methods The project brought together a statistician, clinician and educational experts to re-conceptualize the syllabus, and focused on developing different methods of delivery. New teaching materials, including videos, animations and contextualized workbooks were designed and produced, placing greater emphasis on applying statistics and interpreting data. Results Two cohorts of students were evaluated, one with old style and one with new style teaching. Both were similar with respect to age, gender and previous level of statistics. Students who were taught using the new approach could better define the key concepts of p-value and confidence interval (p < 0.001 for both). They were more likely to regard statistics as integral to medical practice (p = 0.03), and to expect to use it in their medical career (p = 0.003). There was no significant difference in the numbers who thought that statistics was essential to understand the literature (p = 0.28) and those who felt comfortable with the basics of statistics (p = 0.06). More than half the students in both cohorts felt that they were comfortable with the basics of medical statistics. Conclusion Using a variety of media, and placing emphasis on interpretation can help make teaching, learning and understanding of statistics more people-centred and relevant, resulting in better outcomes for students. PMID:18452599

  17. Effects of Cognitive Load on Trust

    DTIC Science & Technology

    2013-10-01

    ...that may be affected by load ... Build a parsing tool to extract relevant features ... Statistical analysis of results (by load components) Achieved... for a business application. Participants assessed potential job candidates and reviewed the applicants’ virtual resume, which included standard... substantially different from each other that would make any confounding problems or other issues. Some statistics of the Australian data collection are...

  18. Temporal and spatial scaling impacts on extreme precipitation

    NASA Astrophysics Data System (ADS)

    Eggert, B.; Berg, P.; Haerter, J. O.; Jacob, D.; Moseley, C.

    2015-01-01

    Both in the current climate and in the light of climate change, understanding of the causes and risk of precipitation extremes is essential for protection of human life and adequate design of infrastructure. Precipitation extreme events depend qualitatively on the temporal and spatial scales at which they are measured, in part due to the distinct types of rain formation processes that dominate extremes at different scales. To capture these differences, we first filter large datasets of high-resolution radar measurements over Germany (5 min temporally and 1 km spatially) using synoptic cloud observations, to distinguish convective and stratiform rain events. In a second step, for each precipitation type, the observed data are aggregated over a sequence of time intervals and spatial areas. The resulting matrix allows a detailed investigation of the resolutions at which convective or stratiform events are expected to contribute most to the extremes. We analyze where the statistics of the two types differ and discuss at which resolutions transitions occur between dominance of either of the two precipitation types. We characterize the scales at which the convective or stratiform events will dominate the statistics. For both types, we further develop a mapping between pairs of spatially and temporally aggregated statistics. The resulting curve is relevant when deciding on data resolutions where statistical information in space and time is balanced. Our study may hence also serve as a practical guide for modelers, and for planning the space-time layout of measurement campaigns. We also describe a mapping between different pairs of resolutions, possibly relevant when working with mismatched model and observational resolutions, such as in statistical bias correction.

  19. 3 CFR - Enhanced Collection of Relevant Data and Statistics Relating to Women

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Memorandum of March 4, 2011: Enhanced Collection of Relevant Data and Statistics Relating to Women. Memorandum for the Heads of Executive Departments and Agencies. I am proud to work...

  20. Identifying biologically relevant differences between metagenomic communities.

    PubMed

    Parks, Donovan H; Beiko, Robert G

    2010-03-15

    Metagenomics is the study of genetic material recovered directly from environmental samples. Taxonomic and functional differences between metagenomic samples can highlight the influence of ecological factors on patterns of microbial life in a wide range of habitats. Statistical hypothesis tests can help us distinguish ecological influences from sampling artifacts, but knowledge of only the P-value from a statistical hypothesis test is insufficient to make inferences about biological relevance. Current reporting practices for pairwise comparative metagenomics are inadequate, and better tools are needed for comparative metagenomic analysis. We have developed a new software package, STAMP, for comparative metagenomics that supports best practices in analysis and reporting. Examination of a pair of iron mine metagenomes demonstrates that deeper biological insights can be gained using statistical techniques available in our software. An analysis of the functional potential of 'Candidatus Accumulibacter phosphatis' in two enhanced biological phosphorus removal metagenomes identified several subsystems that differ between the A. phosphatis strains in these related communities, including phosphate metabolism, secretion and metal transport. Python source code and binaries are freely available from our website at http://kiwi.cs.dal.ca/Software/STAMP. Contact: beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online.
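
    A sketch of the kind of pairwise comparison such reporting practices call for, with made-up counts: a hypothesis test together with an effect size for a single functional category in two metagenomes (illustrative only; not STAMP's own code):

```python
# Sketch: a hypothesis test (Fisher's exact test) reported together with an
# effect size (difference between proportions) for one functional category in
# two metagenomes. Counts are invented for illustration.
from scipy import stats

# reads assigned to a subsystem vs. all other reads, in two metagenomes
a, n1 = 120, 50_000   # metagenome 1
b, n2 = 210, 60_000   # metagenome 2

odds_ratio, p = stats.fisher_exact([[a, n1 - a], [b, n2 - b]])

p1, p2 = a / n1, b / n2
diff_pct = (p1 - p2) * 100  # difference between proportions, in percentage points

print(f"p = {p:.2e}, odds ratio = {odds_ratio:.2f}, "
      f"difference between proportions = {diff_pct:.3f} pp")
# A small p-value alone does not establish biological relevance; the size of the
# difference (and its confidence interval) must be considered as well.
```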

  1. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks

    PubMed Central

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing—with its unique statistical properties—became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca. PMID:28817636

  2. Uncovering robust patterns of microRNA co-expression across cancers using Bayesian Relevance Networks.

    PubMed

    Ramachandran, Parameswaran; Sánchez-Taltavull, Daniel; Perkins, Theodore J

    2017-01-01

    Co-expression networks have long been used as a tool for investigating the molecular circuitry governing biological systems. However, most algorithms for constructing co-expression networks were developed in the microarray era, before high-throughput sequencing-with its unique statistical properties-became the norm for expression measurement. Here we develop Bayesian Relevance Networks, an algorithm that uses Bayesian reasoning about expression levels to account for the differing levels of uncertainty in expression measurements between highly- and lowly-expressed entities, and between samples with different sequencing depths. It combines data from groups of samples (e.g., replicates) to estimate group expression levels and confidence ranges. It then computes uncertainty-moderated estimates of cross-group correlations between entities, and uses permutation testing to assess their statistical significance. Using large scale miRNA data from The Cancer Genome Atlas, we show that our Bayesian update of the classical Relevance Networks algorithm provides improved reproducibility in co-expression estimates and lower false discovery rates in the resulting co-expression networks. Software is available at www.perkinslab.ca.
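
    A minimal sketch of the permutation-testing step described above, applied to ordinary correlations of simulated expression profiles; it deliberately omits the Bayesian uncertainty moderation that is the paper's main contribution:

```python
# Sketch of permutation testing for co-expression correlations. One genuinely
# co-expressed pair is planted in otherwise random data; permuting each column
# breaks the associations and yields empirical p-values.
import numpy as np

rng = np.random.default_rng(6)
n_groups, n_mirnas = 30, 8
expr = rng.normal(size=(n_groups, n_mirnas))
expr[:, 1] += 0.8 * expr[:, 0]          # plant one co-expressed pair

def corr_matrix(m):
    return np.corrcoef(m, rowvar=False)

observed = corr_matrix(expr)

n_perm = 1000
exceed = np.zeros_like(observed)
for _ in range(n_perm):
    permuted = np.column_stack([rng.permutation(expr[:, j]) for j in range(n_mirnas)])
    exceed += (np.abs(corr_matrix(permuted)) >= np.abs(observed))

pvals = (exceed + 1) / (n_perm + 1)
print("p-value for the planted pair (0,1):", pvals[0, 1])
```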

  3. A methodology for the design of experiments in computational intelligence with multiple regression models.

    PubMed

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant with this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  4. A methodology for the design of experiments in computational intelligence with multiple regression models

    PubMed Central

    Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant with this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than with the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as from other fields, such as for bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952

  5. A demonstration of the application of the new paradigm for the evaluation of forensic evidence under conditions reflecting those of a real forensic-voice-comparison case.

    PubMed

    Enzinger, Ewald; Morrison, Geoffrey Stewart; Ochoa, Felipe

    2016-01-01

    The new paradigm for the evaluation of the strength of forensic evidence includes: The use of the likelihood-ratio framework. The use of relevant data, quantitative measurements, and statistical models. Empirical testing of validity and reliability under conditions reflecting those of the case under investigation. Transparency as to decisions made and procedures employed. The present paper illustrates the use of the new paradigm to evaluate strength of evidence under conditions reflecting those of a real forensic-voice-comparison case. The offender recording was from a landline telephone system, had background office noise, and was saved in a compressed format. The suspect recording included substantial reverberation and ventilation system noise, and was saved in a different compressed format. The present paper includes descriptions of the selection of the relevant hypotheses, sampling of data from the relevant population, simulation of suspect and offender recording conditions, and acoustic measurement and statistical modelling procedures. The present paper also explores the use of different techniques to compensate for the mismatch in recording conditions. It also examines how system performance would have differed had the suspect recording been of better quality. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

    A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. The comparative analyses of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  7. Origin of the spike-timing-dependent plasticity rule

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won; Choi, M. Y.

    2016-08-01

    A biological synapse changes its efficacy depending on the difference between pre- and post-synaptic spike timings. Formulating spike-timing-dependent interactions in terms of the path integral, we establish a neural-network model, which makes it possible to predict relevant quantities rigorously by means of standard methods in statistical mechanics and field theory. In particular, the biological synaptic plasticity rule is shown to emerge as the optimal form for minimizing the free energy. It is further revealed that maximization of the entropy of neural activities gives rise to the competitive behavior of biological learning. This demonstrates that statistical mechanics helps to understand rigorously key characteristic behaviors of a neural network, thus providing the possibility of physics serving as a useful and relevant framework for probing life.

  8. Diagnosis of Mood Disorders.

    ERIC Educational Resources Information Center

    Seligman, Linda; Moore, Bonita Marcus

    1995-01-01

    Provides an overview of mood disorders according to Diagnostic and Statistical Manual (fourth edition) criteria and other relevant information. Differential diagnosis is facilitated through discussion of differences and similarities among mental disorders, age and gender-related patterns of mood disorders, and useful diagnostic tools. (Author)

  9. 76 FR 12823 - Enhanced Collection of Relevant Data and Statistics Relating to Women

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... greater understanding of policies and programs. Preparation of this report revealed the vast data... Collection of Relevant Data and Statistics Relating to Women Memorandum for the Heads of Executive... accompanying website collection of relevant data, will assist Government officials in crafting policies in...

  10. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
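
    A brief sketch of difference testing alongside equivalence testing (two one-sided tests) for one compositional endpoint, with simulated data and illustrative equivalence limits rather than limits derived from reference varieties as proposed in the paper:

```python
# Sketch: difference test plus equivalence test (TOST) for one compositional
# endpoint of a GM variety versus its conventional counterpart.
# Data and equivalence limits are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gm = rng.normal(10.2, 0.8, 20)            # e.g., protein content, GM variety
conventional = rng.normal(10.0, 0.8, 20)  # non-GM comparator
lower, upper = -0.7, 0.7                  # assumed equivalence limits

# Difference test
t_diff, p_diff = stats.ttest_ind(gm, conventional)

# Equivalence test: two one-sided tests against the limits
mean_diff = gm.mean() - conventional.mean()
se = np.sqrt(gm.var(ddof=1) / gm.size + conventional.var(ddof=1) / conventional.size)
df = gm.size + conventional.size - 2
p_lower = stats.t.sf((mean_diff - lower) / se, df)   # H1: difference > lower limit
p_upper = stats.t.cdf((mean_diff - upper) / se, df)  # H1: difference < upper limit
p_tost = max(p_lower, p_upper)

print(f"difference test: p = {p_diff:.3f}")
print(f"equivalence test (TOST): p = {p_tost:.3f}")
```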

  11. Clinical relevance of IL-6 gene polymorphism in severely injured patients

    PubMed Central

    Jeremić, Vasilije; Alempijević, Tamara; Mijatović, Srđan; Šijački, Ana; Dragašević, Sanja; Pavlović, Sonja; Miličić, Biljana; Krstić, Slobodan

    2014-01-01

    In polytrauma, injuries that could be surgically treated under regular circumstances become life-threatening due to a systemic inflammatory response. The inflammatory response involves a complex pattern of humoral and cellular responses, and the expression of related factors is thought to be governed by genetic variations. The aim of this paper is to examine the influence of interleukin (IL) 6 single nucleotide polymorphisms (SNPs) -174C/G and -596G/A on the treatment outcome in severely injured patients. Forty-seven severely injured patients were included in this study. Patients were assigned an Injury Severity Score. Blood samples were drawn within 24 h after admission (designated day 1) and on subsequent days (24, 48, 72 hours and 7 days) of hospitalization. The IL-6 levels were determined by ELISA. Polymorphisms were analyzed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). Among subjects with different outcomes, no statistically relevant difference was found with regard to the IL-6 SNP-174G/C polymorphism. More than half of the subjects who died had the SNP-174G/C polymorphism, while this polymorphism was represented in a slightly lower number of survivors. The incidence of subjects without polymorphism and those with heterozygous and homozygous IL-6 SNP-596G/A polymorphism did not present statistically significant variations between survivors and those who died. The levels of IL-6 over the observation period did not present any statistically relevant difference among subjects without the IL-6 SNP-174 or IL-6 SNP-596 gene polymorphism and those who had either a heterozygous or a homozygous polymorphism. PMID:24856384

  12. The Pros and Cons of Two-Year Versus Four-Year Degrees.

    ERIC Educational Resources Information Center

    Urbaniak, Anthony

    1985-01-01

    Reports results of a research survey conducted to determine what differences (job titles, income, relevance of courses, satisfaction, suitability) exist between two-year (Associate of Science) graduates and four-year (Bachelor of Science) graduates in business. Statistical tables are included. (CT)

  13. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance

    PubMed Central

    2018-01-01

    ABSTRACT To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. PMID:29475868

  14. Establishing Statistical Equivalence of Data from Different Sampling Approaches for Assessment of Bacterial Phenotypic Antimicrobial Resistance.

    PubMed

    Shakeri, Heman; Volkova, Victoriya; Wen, Xuesong; Deters, Andrea; Cull, Charley; Drouillard, James; Müller, Christian; Moradijamei, Behnaz; Jaberi-Douraki, Majid

    2018-05-01

    To assess phenotypic bacterial antimicrobial resistance (AMR) in different strata (e.g., host populations, environmental areas, manure, or sewage effluents) for epidemiological purposes, isolates of target bacteria can be obtained from a stratum using various sample types. Also, different sample processing methods can be applied. The MIC of each target antimicrobial drug for each isolate is measured. Statistical equivalence testing of the MIC data for the isolates allows evaluation of whether different sample types or sample processing methods yield equivalent estimates of the bacterial antimicrobial susceptibility in the stratum. We demonstrate this approach on the antimicrobial susceptibility estimates for (i) nontyphoidal Salmonella spp. from ground or trimmed meat versus cecal content samples of cattle in processing plants in 2013-2014 and (ii) nontyphoidal Salmonella spp. from urine, fecal, and blood human samples in 2015 (U.S. National Antimicrobial Resistance Monitoring System data). We found that the sample types for cattle yielded nonequivalent susceptibility estimates for several antimicrobial drug classes and thus may gauge distinct subpopulations of salmonellae. The quinolone and fluoroquinolone susceptibility estimates for nontyphoidal salmonellae from human blood are nonequivalent to those from urine or feces, conjecturally due to the fluoroquinolone (ciprofloxacin) use to treat infections caused by nontyphoidal salmonellae. We also demonstrate statistical equivalence testing for comparing sample processing methods for fecal samples (culturing one versus multiple aliquots per sample) to assess AMR in fecal Escherichia coli. These methods yield equivalent results, except for tetracyclines. Importantly, statistical equivalence testing provides the MIC difference at which the data from two sample types or sample processing methods differ statistically. Data users (e.g., microbiologists and epidemiologists) may then interpret practical relevance of the difference. IMPORTANCE Bacterial antimicrobial resistance (AMR) needs to be assessed in different populations or strata for the purposes of surveillance and determination of the efficacy of interventions to halt AMR dissemination. To assess phenotypic antimicrobial susceptibility, isolates of target bacteria can be obtained from a stratum using different sample types or employing different sample processing methods in the laboratory. The MIC of each target antimicrobial drug for each of the isolates is measured, yielding the MIC distribution across the isolates from each sample type or sample processing method. We describe statistical equivalence testing for the MIC data for evaluating whether two sample types or sample processing methods yield equivalent estimates of the bacterial phenotypic antimicrobial susceptibility in the stratum. This includes estimating the MIC difference at which the data from the two approaches differ statistically. Data users (e.g., microbiologists, epidemiologists, and public health professionals) can then interpret whether that present difference is practically relevant. Copyright © 2018 Shakeri et al.

  15. Probabilistic reasoning under time pressure: an assessment in Italian, Spanish and English psychology undergraduates

    NASA Astrophysics Data System (ADS)

    Agus, M.; Hitchcott, P. K.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.

    2016-11-01

    Many studies have investigated the features of probabilistic reasoning developed in relation to different formats of problem presentation, showing that it is affected by various individual and contextual factors. Incomplete understanding of the identity and role of these factors may explain the inconsistent evidence concerning the effect of problem presentation format. Thus, superior performance has sometimes been observed for graphically, rather than verbally, presented problems. The present study was undertaken to address this issue. Psychology undergraduates without any statistical expertise (N = 173 in Italy; N = 118 in Spain; N = 55 in England) were administered statistical problems in two formats (verbal-numerical and graphical-pictorial) under a condition of time pressure. Students also completed additional measures indexing several potentially relevant individual dimensions (statistical ability, statistical anxiety, attitudes towards statistics and confidence). Interestingly, a facilitatory effect of graphical presentation was observed in the Italian and Spanish samples but not in the English one. Significantly, the individual dimensions predicting statistical performance also differed between the samples, highlighting a different role of confidence. Hence, these findings confirm previous observations concerning problem presentation format while simultaneously highlighting the importance of individual dimensions.

  16. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics such as the composite likelihood ratio and cross-population extended haplotype homozygosity have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of its reliability. Then, we apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
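
    A loose sketch of the combination idea: each statistic is converted to a per-locus score and downweighted by its overall correlation with the other statistics. This illustrates the principle only and is not the published DCMS formula:

```python
# Sketch: combine several selection-signature statistics into one composite
# score while downweighting statistics that are correlated with the others.
# Data are simulated; this is an illustration of the principle, not DCMS itself.
import numpy as np

rng = np.random.default_rng(8)
n_snps, n_stats = 1000, 4
stats_matrix = rng.normal(size=(n_snps, n_stats))
stats_matrix[:, 1] += 0.7 * stats_matrix[:, 0]   # two of the statistics are correlated

# Empirical one-sided p-value of each statistic from its genome-wide distribution
ranks = stats_matrix.argsort(axis=0).argsort(axis=0)
pvals = 1.0 - (ranks + 0.5) / n_snps

# Weight per statistic: 1 / (1 + sum of |correlations| with the other statistics)
corr = np.corrcoef(stats_matrix, rowvar=False)
weights = 1.0 / (1.0 + (np.abs(corr).sum(axis=0) - 1.0))

composite = (-np.log10(pvals) * weights).sum(axis=1)
print("top candidate SNP indices:", np.argsort(composite)[-5:][::-1])
```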

  17. Analysis of respiratory events in obstructive sleep apnea syndrome: Inter-relations and association to simple nocturnal features.

    PubMed

    Ghandeharioun, H; Rezaeitalab, F; Lotfi, R

    2016-01-01

    This study carefully evaluates the associations of different respiration-related events with each other and with simple nocturnal features in obstructive sleep apnea-hypopnea syndrome (OSAS). The events include apneas, hypopneas, respiratory event-related arousals (RERAs) and snores. We conducted a statistical study on 158 adults who underwent polysomnography between July 2012 and May 2014. To assess relevance, linear statistical strategies such as analysis of variance and bootstrapping of the standard error of the correlation coefficient were applied, along with the non-linear method of mutual information to clarify ambiguous results of the linear techniques. Based on normalized mutual information weights (NMIW), indices of apnea are 1.3 times more relevant to AHI values than those of hypopnea. NMIW for the number of blood oxygen desaturations below 95% is considerable (0.531). The next relevant feature is the "respiratory arousals index" with an NMIW of 0.501. Snore indices (0.314) and BMI (0.203) take the next places. Based on NMIW values, snoring events are nearly one-third (29.9%) more dependent on hypopneas than on RERAs. 1. The more severe the OSAS, the more frequently apneic events happen. 2. The association of snoring with hypopneas/RERAs is revealed, which is routinely ignored in regression-based OSAS modeling. 3. The statistical dependencies of oximetry features can potentially lead to home-based screening for OSAS. 4. Poor ESS-AHI relevance in the database under study indicates the limited value of the ESS for OSAS diagnosis compared to oximetry. 5. Based on the poor RERA-snore/ESS relevance, a detailed history of the symptoms plus polysomnography is suggested for accurate diagnosis of RERAs. Copyright © 2015 Sociedade Portuguesa de Pneumologia. Published by Elsevier España, S.L.U. All rights reserved.
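
    A small sketch of a normalized mutual information weight between a nocturnal feature and the AHI, computed on discretized simulated data (the binning choice and data are assumptions for illustration):

```python
# Sketch: normalized mutual information between nocturnal features and AHI,
# computed on quantile-binned variables. All data are simulated.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(9)
n = 158
apnea_index = rng.gamma(2.0, 5.0, n)
ahi = apnea_index + rng.gamma(1.5, 4.0, n)   # AHI partly driven by apneas
snore_index = rng.gamma(2.0, 5.0, n)         # weakly related feature

def nmi(x, y, bins=6):
    """Normalized mutual information of two continuous variables after binning."""
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    return normalized_mutual_info_score(xd, yd)

print("NMI(apnea index, AHI):", round(nmi(apnea_index, ahi), 3))
print("NMI(snore index, AHI):", round(nmi(snore_index, ahi), 3))
```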

  18. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse scale, and thus the simulation results obtained from GCMs are not directly useful for hydrology at the comparatively smaller river-basin scale. The article presents a methodology of statistical downscaling based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at the river-basin scale for the monsoon period (June, July, August, September) using GCM-simulated climatic variables. NCEP/NCAR reanalysis data have been used to train the model and establish a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering and RVM. Different kernel functions are used for comparison purposes. The model is applied to the Mahanadi river basin in India. The results obtained using RVM are compared with those of the state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for the monsoon streamflow of the Mahanadi, due to high projected surface warming, with the CCSR/NIES GCM under the B2 scenario.
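    scikit-learn does not provide an RVM, but its ARDRegression implements a closely related sparse Bayesian linear model; the sketch below uses it as a stand-in for the RVM regression step, with PCA for dimensionality reduction as in the paper. The predictor matrix and streamflow values are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ARDRegression
from sklearn.pipeline import make_pipeline

# synthetic stand-ins for reanalysis/GCM predictor fields (e.g. pressure, humidity)
rng = np.random.default_rng(42)
X = rng.normal(size=(480, 50))                                # 480 monsoon months x 50 grid variables
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=480)    # surrogate "streamflow"

# PCA to compress the climate fields, then a sparse Bayesian (ARD) regression
model = make_pipeline(PCA(n_components=10), ARDRegression())
model.fit(X[:400], y[:400])
print("R^2 on held-out months:", round(model.score(X[400:], y[400:]), 3))
```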

  19. Analysis and Interpretation of Findings Using Multiple Regression Techniques

    ERIC Educational Resources Information Center

    Hoyt, William T.; Leierer, Stephen; Millington, Michael J.

    2006-01-01

    Multiple regression and correlation (MRC) methods form a flexible family of statistical techniques that can address a wide variety of different types of research questions of interest to rehabilitation professionals. In this article, we review basic concepts and terms, with an emphasis on interpretation of findings relevant to research questions…

  20. Quantitative Analysis of Repertoire Scale Immunoglobulin properties in Vaccine Induced B cell Responses

    DTIC Science & Technology

    Immunosequencing now readily generates 10³–10⁵ sequences per sample; however, statistical analysis of these repertoires is challenging because of the high genetic … diversity of BCRs and the elaborate clonal relationships among them. To date, most immunosequencing analyses have focused on reporting qualitative … repertoire differences, (2) identifying how two repertoires differ, and (3) determining appropriate confidence intervals for assessing the size of the differences and their potential biological relevance.

  1. Detecting clinically relevant new information in clinical notes across specialties and settings.

    PubMed

    Zhang, Rui; Pakhomov, Serguei V S; Arsoniadis, Elliot G; Lee, Janet T; Wang, Yan; Melton, Genevieve B

    2017-07-05

    Automated methods for identifying clinically relevant new versus redundant information in electronic health record (EHR) clinical notes are useful for clinicians and researchers involved in patient care and clinical research, respectively. We evaluated methods to automatically identify clinically relevant new information in clinical notes, and compared the quantity of redundant information across specialties and clinical settings. Statistical language models augmented with semantic similarity measures were evaluated as a means to detect and quantify clinically relevant new and redundant information over longitudinal clinical notes for a given patient. A corpus of 591 progress notes over 40 inpatient admissions was annotated for new information longitudinally by physicians to generate a reference standard. Note redundancy between various specialties was evaluated on 71,021 outpatient notes and 64,695 inpatient notes from 500 solid organ transplant patients (April 2015 through August 2015). Our best method achieved a performance of 0.87 recall, 0.62 precision, and 0.72 F-measure. Addition of semantic similarity metrics compared to baseline improved recall but otherwise resulted in similar performance. While outpatient and inpatient notes had relatively similar levels of high redundancy (61% and 68%, respectively), redundancy differed by author specialty, with mean redundancy of 75%, 66%, 57%, and 55% observed in pediatric, internal medicine, psychiatry and surgical notes, respectively. Automated techniques with statistical language models for detecting redundant versus clinically relevant new information in clinical notes do not improve with the addition of semantic similarity measures. While levels of redundancy seem relatively similar in the inpatient and ambulatory settings in Fairview Health Services, clinical note redundancy appears to vary significantly across medical specialties.
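    As a rough illustration of the redundancy-scoring task (not the authors' statistical language model), the sketch below measures how much of a new note's content already appears in earlier notes using word bigrams; low-overlap passages would be candidates for clinically relevant new information.

```python
def bigrams(text):
    """Lower-cased word bigrams of a note."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def redundancy(new_note, prior_notes):
    """Fraction of the new note's bigrams already present in prior notes.
    A crude surrogate for the paper's language-model-based redundancy score."""
    seen = set()
    for note in prior_notes:
        seen |= bigrams(note)
    new = bigrams(new_note)
    return len(new & seen) / len(new) if new else 1.0

prior = ["Patient stable on metoprolol. Creatinine 1.1 and trending down.",
         "Patient stable on metoprolol. No acute events overnight."]
new = "Patient stable on metoprolol. New left leg swelling, rule out DVT."
print(round(redundancy(new, prior), 2))   # high overall overlap, but the DVT sentence is new
```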

  2. Resilience Among Students at the Basic Enlisted Submarine School

    DTIC Science & Technology

    2016-12-01

    reported resilience. The Hayes macro in the Statistical Package for the Social Sciences (SPSS) was used to uncover factors relevant to mediation analysis. Findings suggest that the encouragement of …

  3. Combining Shapley value and statistics to the analysis of gene expression data in children exposed to air pollution

    PubMed Central

    Moretti, Stefano; van Leeuwen, Danitsja; Gmuender, Hans; Bonassi, Stefano; van Delft, Joost; Kleinjans, Jos; Patrone, Fioravante; Merlo, Domenico Franco

    2008-01-01

    Background In gene expression analysis, statistical tests for differential gene expression provide lists of candidate genes having, individually, a sufficiently low p-value. However, the interpretation of each single p-value within complex systems involving several interacting genes is problematic. In parallel, in the last sixty years, game theory has been applied to political and social problems to assess the power of interacting agents in forcing a decision and, more recently, to represent the relevance of genes in response to certain conditions. Results In this paper we introduce a bootstrap procedure to test the null hypothesis that each gene has the same relevance between two conditions, where the relevance is represented by the Shapley value of a particular coalitional game defined on a microarray data-set. This method, called Comparative Analysis of Shapley value (CASh for short), is applied to data concerning gene expression in children differentially exposed to air pollution. The results provided by CASh are compared with the results from a parametric statistical test for differential gene expression. Both lists of genes provided by CASh and the t-test are informative enough to discriminate exposed subjects on the basis of their gene expression profiles. While many genes are selected in common by CASh and the parametric test, the biological interpretation of the differences between these two selections is more interesting, suggesting a different interpretation of the main biological pathways in gene expression regulation for exposed individuals. A simulation study suggests that CASh offers more power than the t-test for the detection of differential gene expression variability. Conclusion CASh is successfully applied to gene expression analysis of a data-set where the joint expression behavior of genes may be critical to characterize the expression response to air pollution. We demonstrate a synergistic effect between coalitional games and statistics that resulted in a selection of genes with a potential impact on the regulation of complex pathways. PMID:18764936

  4. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults

    PubMed Central

    von Frieling, Marco; Röder, Brigitte

    2017-01-01

    During development, internal models of the sensory world must be acquired and continuously adapted later on. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly while adults learn them only when task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood. PMID:28949291

  5. Short-term rainfall: its scaling properties over Portugal

    NASA Astrophysics Data System (ADS)

    de Lima, M. Isabel P.

    2010-05-01

    The characterization of rainfall at a variety of space- and time-scales usually demands that data of different origins and resolutions be explored. Different tools and methodologies can be used for this purpose. In regions where the spatial variation of rain is marked, the study of the scaling structure of rainfall can lead to a better understanding of the type of events affecting a specific area, which is essential for many engineering applications. The relevant factors affecting rain variability, in time and space, can lead to contrasting statistics, which should be carefully taken into account in design procedures and decision-making processes. One such region is mainland Portugal; the territory is located in the transitional region between the sub-tropical anticyclone and the subpolar depression zones and is characterized by strong north-south and east-west rainfall gradients. The spatial distribution and seasonal variability of rain are particularly influenced by the characteristics of the global circulation. One specific feature is the Atlantic origin of many synoptic disturbances in the context of the regional geography (e.g. latitude, orography, oceanic and continental influences). Thus, aiming to investigate the statistical signature of rain events of different origins, resulting from the large number of mechanisms and factors affecting the rainfall climate over Portugal, scale-invariant analyses of the temporal structure of rain from several locations in mainland Portugal were conducted. The study used short-term rainfall time series. Relevant scaling ranges were identified and characterized that help clarify the small-scale behaviour and statistics of this process.

  6. Gender-, age-, and race/ethnicity-based differential item functioning analysis of the movement disorder society-sponsored revision of the Unified Parkinson's disease rating scale.

    PubMed

    Goetz, Christopher G; Liu, Yuanyuan; Stebbins, Glenn T; Wang, Lu; Tilley, Barbara C; Teresi, Jeanne A; Merkitch, Douglas; Luo, Sheng

    2016-12-01

    Assess MDS-UPDRS items for gender-, age-, and race/ethnicity-based differential item functioning. Assessing differential item functioning is a core rating scale validation step. For the MDS-UPDRS, differential item functioning occurs if item-score probabilities among people with similar levels of parkinsonism differ according to selected covariates (gender, age, race/ethnicity). If the magnitude of differential item functioning is clinically relevant, item-score interpretation must consider influences by these covariates. Differential item functioning can be nonuniform (the covariate variably influences an item score across different levels of parkinsonism) or uniform (the covariate influences an item score consistently over all levels of parkinsonism). Using the MDS-UPDRS translation database of more than 5,000 PD patients from 14 languages, we tested gender-, age-, and race/ethnicity-based differential item functioning. To designate an item as having clinically relevant differential item functioning, we required statistical confirmation by 2 independent methods, along with a McFadden pseudo-R² magnitude statistic greater than "negligible." Most items showed no gender-, age- or race/ethnicity-based differential item functioning. When differential item functioning was identified, the magnitude statistic was always in the "negligible" range, and the scale-level impact was minimal. The absence of clinically relevant differential item functioning across all items and all parts of the MDS-UPDRS is strong evidence that the scale can be used confidently. As studies of Parkinson's disease increasingly involve multinational efforts and the MDS-UPDRS has several validated non-English translations, the findings support the scale's broad applicability in populations with varying gender, age, and race/ethnicity distributions. © 2016 International Parkinson and Movement Disorder Society.

  7. Hadoop and friends - first experience at CERN with a new platform for high throughput analysis steps

    NASA Astrophysics Data System (ADS)

    Duellmann, D.; Surdy, K.; Menichetti, L.; Toebbicke, R.

    2017-10-01

    The statistical analysis of infrastructure metrics comes with several specific challenges, including the fairly large volume of unstructured metrics from a large set of independent data sources. Hadoop and Spark provide an ideal environment in particular for the first steps of skimming rapidly through hundreds of TB of low relevance data to find and extract the much smaller data volume that is relevant for statistical analysis and modelling. This presentation will describe the new Hadoop service at CERN and the use of several of its components for high throughput data aggregation and ad-hoc pattern searches. We will describe the hardware setup used, the service structure with a small set of decoupled clusters and the first experience with co-hosting different applications and performing software upgrades. We will further detail the common infrastructure used for data extraction and preparation from continuous monitoring and database input sources.
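    A minimal PySpark sketch of the skimming step described above: filter a large collection of monitoring records down to the small relevant subset and aggregate it before statistical analysis. The paths, column names and thresholds are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metrics-skim").getOrCreate()

# hypothetical monitoring dump: one JSON record per metric sample
metrics = spark.read.json("hdfs:///monitoring/2017/*.json")

# keep only the relevant slice (one service, anomalous load) and aggregate per host
relevant = (metrics
            .filter((F.col("service") == "eos") & (F.col("cpu_load") > 0.9))
            .groupBy("hostname")
            .agg(F.count("*").alias("n_samples"),
                 F.avg("cpu_load").alias("mean_load")))

# write the much smaller, analysis-ready table back for modelling
relevant.write.mode("overwrite").parquet("hdfs:///user/analysis/eos_hotspots")
```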

  8. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.

  9. A perceptual space of local image statistics.

    PubMed

    Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M

    2015-12-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A perceptual space of local image statistics

    PubMed Central

    Victor, Jonathan D.; Thengone, Daniel J.; Rizvi, Syed M.; Conte, Mary M.

    2015-01-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice – a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. PMID:26130606

  11. Cost effectiveness of brace, physiotherapy, or both for treatment of tennis elbow

    PubMed Central

    Struijs, P A A; Bos, I B C Korthals‐de; van Tulder, M W; van Dijk, C N; Bouter, L M

    2006-01-01

    Background The annual incidence of tennis elbow in the general population is high (1–3%). Tennis elbow often leads to limitation of activities of daily living and work absenteeism. Physiotherapy and braces are the most common treatments. Objectives The hypothesis of the trial was that no difference exists in the cost effectiveness of physiotherapy, braces, and a combination of the two for treatment of tennis elbow. Methods The trial was designed as a randomised controlled trial with intention to treat analysis. A total of 180 patients with tennis elbow were randomised to brace only (n  =  68), physiotherapy (n  =  56), or a combination of the two (n  =  56). Outcome measures were success rate, severity of complaints, pain, functional disability, and quality of life. Follow up was at six, 26, and 52 weeks. Direct healthcare and non‐healthcare costs and indirect costs were measured. Mean cost differences over 12 months were evaluated by applying non‐parametric bootstrap techniques. Results No clinically relevant or statistically significant differences were found between the groups. Success rate at 12 months was 89% in the physiotherapy group, 86% in the brace group, and 87% in the combination group. Mean total costs per patient were €2069 in the brace only group, €978 in the physiotherapy group, and €1256 in the combination group. The mean difference in total costs between the physiotherapy and brace group was substantial (€1005), although not significant. Cost effectiveness ratios and cost utility ratios showed physiotherapy to be the most cost effective, although this also was not statistically significant. Conclusion No clinically relevant or statistically significant differences in costs were identified between the three strategies. PMID:16687482

  12. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921

  13. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied in statistical analysis.
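    As a concrete example of a probability distribution in the sense described above, the snippet below evaluates the normal density and a tail probability with SciPy.

```python
from scipy.stats import norm

# Probability density of a normal variable with mean 0 and SD 1 at x = 1.0
print(norm.pdf(1.0, loc=0, scale=1))       # ~0.2420

# Probability of observing a value at least 1.96 SDs above the mean
print(1 - norm.cdf(1.96))                  # ~0.025
```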

  14. Statistical Learning is Related to Early Literacy-Related Skills

    PubMed Central

    Spencer, Mercedes; Kaschak, Michael P.; Jones, John L.; Lonigan, Christopher J.

    2015-01-01

    It has been demonstrated that statistical learning, or the ability to use statistical information to learn the structure of one’s environment, plays a role in young children’s acquisition of linguistic knowledge. Although most research on statistical learning has focused on language acquisition processes, such as the segmentation of words from fluent speech and the learning of syntactic structure, some recent studies have explored the extent to which individual differences in statistical learning are related to literacy-relevant knowledge and skills. The present study extends this literature by investigating the relations between two measures of statistical learning and multiple measures of skills that are critical to the development of literacy—oral language, vocabulary knowledge, and phonological processing—within a single model. Our sample included a total of 553 typically developing children from prekindergarten through second grade. Structural equation modeling revealed that statistical learning accounted for a unique portion of the variance in these literacy-related skills. Practical implications for instruction and assessment are discussed. PMID:26478658

  15. Hematological change parameters in patients with pressure ulcer at long-term care hospital

    PubMed Central

    Neiva, Giselle Protta; Carnevalli, Julia Romualdo; Cataldi, Rodrigo Lessa; Furtado, Denise Mendes; Fabri, Rodrigo Luiz; Silva, Pâmela Souza

    2014-01-01

    Objective To assess factors associated with the development of pressure ulcers, and to compare the effectiveness of pharmacological treatments. Methods The factors associated with the development of pressure ulcers were compared between lesion-carrying patients (n=14) and non-carriers (n=16). Lesion-carrying patients were treated with 1% silver sulfadiazine or 0.6 IU/g collagenase and were observed for 8 weeks. The data collected were analyzed, with p<0.05 considered statistically significant. Results The prevalence of pressure ulcers was about 6%. The comparison of the carrier and non-carrier groups revealed no statistically significant difference in the occurrence of pressure ulcers with respect to age, sex, skin color, mobility, or the use of diapers. However, levels of hemoglobin, hematocrit, and red blood cells were found to be statistically different between groups, being lower in lesion-carrying patients. There was no significant difference in lesion area between patients treated with collagenase or silver sulfadiazine, although both groups showed an overall reduction in lesion area over the treatment course. Conclusion Hematologic parameters showed a statistically significant difference between the two groups. Regarding the treatment of ulcers, there was no difference in lesion area between the groups treated with collagenase and silver sulfadiazine. PMID:25295450

  16. Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2014-01-01

    Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio, which represents the additional cost per one unit of outcome gained by a new strategy, has served as the most widely accepted methodology in CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of the available data. Specifically, we propose that mean- and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics, as complementary measures for informed decision making. PMID:24605979
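    The incremental and average cost-effectiveness ratios discussed here reduce to simple arithmetic; the sketch below computes both, in line with the article's call for complementary measures. The cost and effectiveness figures are invented.

```python
def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of outcome."""
    return (cost_new - cost_old) / (eff_new - eff_old)

def acer(cost, eff):
    """Average cost-effectiveness ratio of a single strategy."""
    return cost / eff

# hypothetical mean cost (euros) and effectiveness (QALYs) per patient
cost_new, eff_new = 12000.0, 1.30
cost_old, eff_old = 9000.0, 1.10

print("ICER:", icer(cost_new, eff_new, cost_old, eff_old))   # 15000 euros per QALY gained
print("ACER new:", acer(cost_new, eff_new), "ACER old:", acer(cost_old, eff_old))
```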

  17. Statistics on continuous IBD data: Exact distribution evaluation for a pair of full(half)-sibs and a pair of a (great-) grandchild with a (great-) grandparent

    PubMed Central

    Stefanov, Valeri T

    2002-01-01

    Background Pairs of related individuals are widely used in linkage analysis. Most of the tests for linkage analysis are based on statistics associated with identity by descent (IBD) data. The current biotechnology provides data on very densely packed loci, and therefore, it may provide almost continuous IBD data for pairs of closely related individuals. Therefore, the distribution theory for statistics on continuous IBD data is of interest. In particular, distributional results which allow the evaluation of p-values for relevant tests are of importance. Results A technology is provided for numerical evaluation, with any given accuracy, of the cumulative probabilities of some statistics on continuous genome data for pairs of closely related individuals. In the case of a pair of full-sibs, the following statistics are considered: (i) the proportion of genome with 2 (at least 1) haplotypes shared identical-by-descent (IBD) on a chromosomal segment, (ii) the number of distinct pieces (subsegments) of a chromosomal segment, on each of which exactly 2 (at least 1) haplotypes are shared IBD. The natural counterparts of these statistics for the other relationships are also considered. Relevant Maple codes are provided for a rapid evaluation of the cumulative probabilities of such statistics. The genomic continuum model, with Haldane's model for the crossover process, is assumed. Conclusions A technology, together with relevant software codes for its automated implementation, are provided for exact evaluation of the distributions of relevant statistics associated with continuous genome data on closely related individuals. PMID:11996673
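    Such exact results can be cross-checked by simulation, as sketched below: crossovers are placed as a Poisson process along the segment (Haldane's model), grandparental origin alternates at each crossover, and the proportion of the segment on which a full-sib pair shares 2 (or at least 1) haplotypes IBD is measured. This Monte Carlo sketch is illustrative and is not the paper's exact evaluation technology.

```python
import numpy as np

rng = np.random.default_rng(0)

def meiosis(length_morgans):
    """Crossover breakpoints (Poisson process, Haldane's model) and the
    starting grandparental origin (0/1) of one transmitted haplotype."""
    n_co = rng.poisson(length_morgans)
    breaks = np.sort(rng.uniform(0.0, length_morgans, n_co))
    return breaks, rng.integers(2)

def origins(xs, breaks, start):
    """Grandparental origin at each position; it alternates at every crossover."""
    return (start + np.searchsorted(breaks, xs)) % 2

def fullsib_ibd(length_morgans=1.0, grid=500):
    """Proportions of the segment on which a full-sib pair shares IBD == 2 and IBD >= 1."""
    xs = np.linspace(0.0, length_morgans, grid)
    ibd = np.zeros(grid, dtype=int)
    for _parent in range(2):                       # father and mother contribute independently
        h1 = meiosis(length_morgans)               # haplotype transmitted to sib 1
        h2 = meiosis(length_morgans)               # haplotype transmitted to sib 2
        ibd += (origins(xs, *h1) == origins(xs, *h2)).astype(int)
    return (ibd == 2).mean(), (ibd >= 1).mean()

sims = np.array([fullsib_ibd() for _ in range(2000)])
print("mean proportion IBD=2 :", sims[:, 0].mean())   # expectation ~0.25 for full sibs
print("mean proportion IBD>=1:", sims[:, 1].mean())   # expectation ~0.75
```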

  18. Skin antiseptics in venous puncture site disinfection for preventing blood culture contamination: A Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Liu, Wenjie; Duan, Yuchen; Cui, Wenyao; Li, Li; Wang, Xia; Dai, Heling; You, Chao; Chen, Maojun

    2016-07-01

    To compare the efficacy of several antiseptics in decreasing the blood culture contamination rate. Network meta-analysis. Electronic searches of PubMed and Embase were conducted up to November 2015. Only randomized controlled trials or quasi-randomized controlled trials were eligible. We applied no language restriction. A comprehensive review of articles in the reference lists was also carried out to identify possible relevant studies. Relevant studies evaluating the efficacy of different antiseptics at the venous puncture site for decreasing the blood culture contamination rate were included. The data were extracted from the included randomized controlled trials by two authors independently. The risk of bias was evaluated using the Detsky scale by two authors independently. We used WinBUGS 1.4.3 software and the statistical model described by Chaimani to perform this network meta-analysis. Graphs of the statistical results were then generated using the 'networkplot', 'ifplot', 'netfunnel' and 'sucra' procedures in STATA 13.0. Odds ratios and 95% confidence intervals were assessed for dichotomous data. A probability of p less than 0.05 was considered statistically significant. Compared with ordinary meta-analyses, this network meta-analysis offered hierarchies for the efficacy of different antiseptics in decreasing the blood culture contamination rate. Seven randomized controlled trials involving 34,408 blood samples were eligible for the meta-analysis. No significant difference was found in the blood culture contamination rate among the different antiseptics. No significant difference was found between non-alcoholic and alcoholic antiseptics, alcoholic chlorhexidine and povidone iodine, chlorhexidine and iodine compounds, or povidone iodine and iodine tincture, respectively. Different antiseptics may not affect the blood culture contamination rate. Different intervals between skin disinfection and venous puncture, different settings (emergency room, medical wards, and intensive care units) and the performance of the phlebotomy may affect the blood culture contamination rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Major bleeding after percutaneous coronary intervention and risk of subsequent mortality: a systematic review and meta-analysis

    PubMed Central

    Kwok, Chun Shing; Rao, Sunil V; Myint, Phyo K; Keavney, Bernard; Nolan, James; Ludman, Peter F; de Belder, Mark A; Loke, Yoon K; Mamas, Mamas A

    2014-01-01

    Objectives To examine the relationship between periprocedural bleeding complications and major adverse cardiovascular events (MACEs) and mortality outcomes following percutaneous coronary intervention (PCI), and to study differences in the prognostic impact of different bleeding definitions. Methods We conducted a systematic review and meta-analysis of PCI studies that evaluated periprocedural bleeding complications and their impact on MACEs and mortality outcomes. A systematic search of MEDLINE and EMBASE was conducted to identify relevant studies. Data from relevant studies were extracted and random effects meta-analysis was used to estimate the risk of adverse outcomes with periprocedural bleeding. Statistical heterogeneity was assessed by considering the I² statistic. Results 42 relevant studies were identified including 533 333 patients. Meta-analysis demonstrated that periprocedural major bleeding complications were independently associated with increased risk of mortality (OR 3.31 (2.86 to 3.82), I²=80%) and MACEs (OR 3.89 (3.26 to 4.64), I²=42%). A differential impact of major bleeding as defined by different bleeding definitions on mortality outcomes was observed, in which REPLACE-2 (OR 6.69, 95% CI 2.26 to 19.81), STEEPLE (OR 6.59, 95% CI 3.89 to 11.16) and BARC (OR 5.40, 95% CI 1.74 to 16.74) had the worst prognostic impacts, while HORIZONS-AMI (OR 1.51, 95% CI 1.11 to 2.05) had the least impact on mortality outcomes. Conclusions Major bleeding after PCI is independently associated with a threefold increase in mortality and MACE outcomes. Different contemporary bleeding definitions have differential impacts on mortality outcomes, with 1.5–6.7-fold increases in mortality observed depending on the definition of major bleeding used. PMID:25332786

  20. Investigation of Polarization Phase Difference Related to Forest Fields Characterizations

    NASA Astrophysics Data System (ADS)

    Majidi, M.; Maghsoudi, Y.

    2013-09-01

    The information content of Synthetic Aperture Radar (SAR) data resides to a significant degree in the radiometric polarization channels; hence, polarimetric SAR data should be analyzed in relation to target structure. The importance of the phase difference between the two co-polarized scattered signals, owing to the possible association between biophysical parameters and the measured Polarization Phase Difference (PPD) statistics of the recorded backscattered signal components, has been recognized in geophysical remote sensing. This paper examines the phase-difference statistics of two Radarsat-2 images to assess the feasibility of relating them to the physical properties of scattering targets, and tries to understand the relevance of PPD statistics to various types of forest fields. The variation of incidence angle, which affects PPD statistics, is also investigated. The experimental forest stands used in this research are characterized by white pine (Pinus strobus L.), red pine (Pinus resinosa Ait.), jack pine (Pinus banksiana Lamb.), white spruce (Picea glauca (Moench) Voss), black spruce (Picea mariana (Mill.) B.S.P.), poplar (Populus L.), red oak (Quercus rubra L.), aspen and ground vegetation. The experimental results show that, despite the wide diversity of biophysical parameters, the PPD statistics are almost the same. Forest field distributions, as distributed targets, have means close to zero regardless of the incidence angle. The PPD distributions are functions of both target and sensor parameters, but for a more appropriate examination of PPD statistics the observations should be made in the leaf-off season or in bands with lower frequencies.

  1. Teacher beliefs about the aetiology of individual differences in cognitive ability, and the relevance of behavioural genetics to education.

    PubMed

    Crosswaite, Madeline; Asbury, Kathryn

    2018-04-26

    Despite a large body of research that has explored the influence of genetic and environmental factors on educationally relevant traits, few studies have explored teachers' beliefs about, or knowledge of, developments in behavioural genetics related to education. This study aimed to describe the beliefs and knowledge of UK teachers about behavioural genetics and its relevance to education, and to test for differences between groups of teachers based on factors including years of experience and age of children taught. Data were gathered from n = 402 teachers from a representative sample of UK schools. Teachers from primary and secondary schools, and from across the state and independent sectors, were recruited. An online questionnaire was used to gather demographic data (gender, age, years of experience, age of children taught, and state vs. independent) and also data on beliefs about the relative influence of nature and nurture on cognitive ability; knowledge of behavioural genetics; openness to genetic research in education; and mindset. Data were analysed using descriptive statistics, ANOVA, correlations, and multiple regression. Teachers perceived genetic and environmental factors as equally important influences on cognitive ability and tended towards a growth mindset. Knowledge about behavioural genetics was low, but openness to learning more about genetics was high. Statistically significant differences were observed between groups based on age of children taught (openness higher among primary teachers) and state versus independent (more growth-minded in state sector). Although teachers have a limited knowledge of behavioural genetics, they are keen to learn more. © 2018 The British Psychological Society.

  2. Promoting Statistical Thinking in Schools with Road Injury Data

    ERIC Educational Resources Information Center

    Woltman, Marie

    2017-01-01

    Road injury is an immediately relevant topic for 9-19 year olds. Current availability of Open Data makes it increasingly possible to find locally relevant data. Statistical lessons developed from these data can mutually reinforce life lessons about minimizing risk on the road. Devon County Council demonstrate how a wide array of statistical…

  3. Utilisation of Local Inputs in the Funding and Administration of Education in Nigeria

    ERIC Educational Resources Information Center

    Akiri, Agharuwhe A.

    2014-01-01

    The article discussed how, why and who is in charge of administering and funding schools in Nigeria. The author utilised the relevant statistical approach which examined and discussed various political and historical trends affecting education. Besides this, relevant documented statistical data were used to both buttress and substantiate related…

  4. Mathematical models of cytotoxic effects in endpoint tumor cell line assays: critical assessment of the application of a single parametric value as a standard criterion to quantify the dose-response effects and new unexplored proposal formats.

    PubMed

    Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R

    2017-10-23

    The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged between different fields of study over the last decade, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results obtained, as well as the quantification of its statistical reliability, is lost. Despite its relevance, there is no standard form for in vitro endpoint tumor cell line assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as a convenient approximation for testing the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of the different parametric criteria derived from the model, which enable the quantification of dose-response drug effects, are extensively discussed. A model and standard criteria for easily performing comparisons between different compounds are thereby established. The advantages include simple application, provision of parametric estimates that characterize the response as standard criteria, economization of experimental effort, and the possibility of rigorous comparisons among the effects of different compounds and experimental approaches. In all experimental data fitted, the calculated parameters were always statistically significant, the equations proved to be consistent, and the correlation coefficient of determination was, in most cases, higher than 0.98.
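    One plausible Weibull-type dose-response parameterisation (maximum response, dose giving 50% of that maximum, and a shape parameter) can be fitted as sketched below with SciPy's curve_fit; the data are invented and the exact model form used by the authors may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, K, m, a):
    """Weibull-type cytotoxic dose-response: K = maximum response,
    m = dose giving 50% of K, a = shape. One plausible parameterisation."""
    return K * (1.0 - np.exp(-np.log(2.0) * (dose / m) ** a))

# invented SRB-assay-style data: dose (ug/mL) vs. fraction of growth inhibition
dose = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])
resp = np.array([0.05, 0.09, 0.18, 0.34, 0.55, 0.72, 0.83, 0.88])

params, cov = curve_fit(weibull_response, dose, resp, p0=[1.0, 8.0, 1.0])
K, m, a = params
print(f"K={K:.2f}, dose at 50% of K={m:.2f}, shape={a:.2f}")
print("approximate standard errors:", np.sqrt(np.diag(cov)))
```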

  5. A Survey of Introductory Statistics Courses at University Faculties of Pharmaceutical Sciences in Japan.

    PubMed

    Matsumura, Mina; Nakayama, Takuto; Sozu, Takashi

    2016-01-01

    A survey of introductory statistics courses at Japanese medical schools was published as a report in 2014. To obtain a complete understanding of the way in which statistics is taught at the university level in Japan, it is important to extend this survey to related fields, including pharmacy, dentistry, and nursing. The current study investigates the introductory statistics courses offered by faculties of pharmaceutical sciences (six-year programs) at Japanese universities, comparing the features of these courses with those studied in the survey of medical schools. We collected relevant data from the online syllabi of statistics courses published on the websites of 71 universities. The survey items included basic course information (for example, the course names, the targeted student grades, the number of credits, and course classification), textbooks, handouts, the doctoral subject and employment status of each lecturer, and course contents. The period surveyed was July-September 2015. We found that these 71 universities offered a total of 128 statistics courses. There were 67 course names, the most common of which was "biostatistics (iryou toukeigaku)." About half of the courses were designed for first- or second-year students. Students earned fewer than two credits. There were 62 different types of textbooks. The lecturers held doctoral degrees in 18 different subjects, the most common being a doctorate in pharmacy or science. Some course content differed, reflecting the lecturers' academic specialties. The content of introductory statistics courses taught in pharmaceutical science programs also differed slightly from the equivalent content taught in medical schools.

  6. Prognostic relevance of motor talent predictors in early adolescence: A group- and individual-based evaluation considering different levels of achievement in youth football.

    PubMed

    Höner, Oliver; Votteler, Andreas

    2016-12-01

    In the debate about the usefulness of motor diagnostics in the talent identification process, the prognostic validity for tests conducted in early adolescence is of critical interest. Using a group- and individual-based statistical approach, this prospective cohort study evaluated a nationwide assessment of speed abilities and technical skills regarding its relevance for future achievement levels. The sample consisted of 22,843 U12-players belonging to the top 4% in German football. The U12-results in five tests served as predictors for players' selection levels in U16-U19 (youth national team, regional association, youth academy, not selected). Group-mean differences proved the prognostic relevance for all predictors. Low individual selection probabilities demonstrated limited predictive values, while excellent test results proved their particular prognostic relevance. Players scoring percentile ranks (PRs) ≥ 99 had a 12 times higher chance to become youth national team players than players scoring PR < 99. Simulating increasing score cut-off values not only enhanced specificity (correctly identified non-talents) but also led to lower sensitivity (loss of talents). Extending the current research, these different approaches revealed the ambiguity of the diagnostics' prognostic relevance, representing both the usefulness and several pitfalls of nationwide diagnostics. Therefore, the present diagnostics can support but not substitute for coaches' subjective decisions for talent identification, and multidisciplinary designs are required.

  7. Decompression models: review, relevance and validation capabilities.

    PubMed

    Hugon, J

    2014-01-01

    For more than a century, several types of mathematical models have been proposed to describe tissue desaturation mechanisms in order to limit decompression sickness. These models are statistically assessed by DCS cases, and, over time, have gradually included bubble formation biophysics. This paper proposes to review this evolution and discuss its limitations. This review is organized around the comparison of decompression model biophysical criteria and theoretical foundations. Then, the DCS-predictive capability was analyzed to assess whether it could be improved by combining different approaches. Most of the operational decompression models have a neo-Haldanian form. Nevertheless, bubble modeling has been gaining popularity, and the circulating bubble amount has become a major output. By merging both views, it seems possible to build a relevant global decompression model that intends to simulate bubble production while predicting DCS risks for all types of exposures and decompression profiles. A statistical approach combining both DCS and bubble detection databases has to be developed to calibrate a global decompression model. Doppler ultrasound and DCS data are essential: i. to make correlation and validation phases reliable; ii. to adjust biophysical criteria to fit at best the observed bubble kinetics; and iii. to build a relevant risk function.

  8. Burns education for non-burn specialist clinicians in Western Australia.

    PubMed

    McWilliams, Tania; Hendricks, Joyce; Twigg, Di; Wood, Fiona

    2015-03-01

    Burn patients often receive their initial care by non-burn specialist clinicians, with increasingly collaborative burn models of care. The provision of relevant and accessible education for these clinicians is therefore vital for optimal patient care. A two phase design was used. A state-wide survey of multidisciplinary non-burn specialist clinicians throughout Western Australia identified learning needs related to paediatric burn care. A targeted education programme was developed and delivered live via videoconference. Pre-post-test analysis evaluated changes in knowledge as a result of attendance at each education session. Non-burn specialist clinicians identified numerous areas of burn care relevant to their practice. Statistically significant differences between perceived relevance of care and confidence in care provision were reported for aspects of acute burn care. Following attendance at the education sessions, statistically significant increases in knowledge were noted for most areas of acute burn care. Identification of learning needs facilitated the development of a targeted education programme for non-burn specialist clinicians. Increased non-burn specialist clinician knowledge following attendance at most education sessions supports the use of videoconferencing as an acceptable and effective method of delivering burns education in Western Australia. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  9. Statistical approach for selection of biologically informative genes.

    PubMed

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

    Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been adjudged through post-selection classification accuracy computed by a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that, under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection among the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide for selecting statistical techniques for informative gene selection from high-dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
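    The greedy maximum-relevance / minimum-redundancy score on which such composites build can be sketched as below with mutual information estimates from scikit-learn; the bootstrap layer of Boot-MRMR is omitted, and the toy data are synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr(X, y, n_select, bins=8):
    """Greedy mRMR: maximize MI(gene, class) minus mean MI with already-selected genes.
    X: (samples, genes) expression matrix, y: class labels. A sketch, not Boot-MRMR itself."""
    relevance = mutual_info_classif(X, y, random_state=0)
    # discretize each gene for the gene-gene redundancy estimates
    Xd = np.array([np.digitize(col, np.histogram_bin_edges(col, bins)[1:-1])
                   for col in X.T])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for g in range(X.shape[1]):
            if g in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[g], Xd[s]) for s in selected])
            score = relevance[g] - redundancy
            if score > best_score:
                best, best_score = g, score
        selected.append(best)
    return selected

# toy data: 100 subjects, 50 genes, 2 classes; the first 3 genes carry the signal
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)
X = rng.normal(size=(100, 50))
X[:, :3] += y[:, None] * 1.5
print(mrmr(X, y, n_select=5))
```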

  10. Prediction of biomechanical parameters of the proximal femur using statistical appearance models and support vector regression.

    PubMed

    Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer

    2008-01-01

    Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for determining femoral fracture risk rely on measurements of bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. Using support vector regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
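    A minimal sketch of the regression step described here: coefficients from a statistical appearance model (synthetic stand-ins below) are mapped to a biomechanical target such as failure load with support vector regression. Feature and target values are invented.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# synthetic stand-ins for statistical appearance model coefficients per femur
rng = np.random.default_rng(3)
coeffs = rng.normal(size=(120, 20))
failure_load = 4000 + 600 * coeffs[:, 0] - 300 * coeffs[:, 1] + rng.normal(0, 150, 120)

# scale the model coefficients, then regress failure load with an RBF-kernel SVR
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=50.0))
scores = cross_val_score(model, coeffs, failure_load, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(2))
```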

  11. The role of different PI-RADS versions in prostate multiparametric magnetic resonance tomography assessment.

    PubMed

    Aliukonis, Paulius; Letauta, Tadas; Briedienė, Rūta; Naruševičiūtė, Ieva; Letautienė, Simona

    2017-01-01

    Background. The standardised Prostate Imaging Reporting and Data System (PI-RADS) guidelines were designed for the standardised assessment of prostate pathology. Published by the ESUR in 2012, PI-RADS v1 was based on the total score of different MRI sequences with subsequent calculation. PI-RADS v2 was published by the American College of Radiology in 2015 and featured different assessment criteria for the prostate peripheral and transition zones. Aim. To assess the correlations of PI-RADS v1 and PI-RADS v2 with Gleason score values and to define their predictive values for the diagnosis of prostate cancer. Materials and methods. A retrospective analysis of 66 patients. Prostate-specific antigen (PSA) value and the Gleason score (GS) were assessed. The most malignant focal lesion was selected in the peripheral zone of each lobe of the prostate (91 in total). Statistical analysis was carried out using SPSS software, v.23, with p < 0.05 considered significant. Results. Focal lesions assessed by PI-RADS v1 score: 10% scored 1, 12% scored 2, 41% scored 3, 23% scored 4, and 14% scored 5. With PI-RADS v2, 20% scored 1, 7.5% scored 2, and 26%, 29.5%, and 17% scored 3, 4, and 5, respectively. A statistically relevant correlation was found only between GS and PI-RADS (p = 0.033). The positive predictive value of both versions of PI-RADS was 75%; the negative predictive value was 46% for PI-RADS v1 and 43% for PI-RADS v2. Conclusions. PI-RADS v1 was more statistically relevant in assessing tumour grade. Predictive values were similar for both versions.

  12. Comparison of untreated adolescent idiopathic scoliosis with normal controls: a review and statistical analysis of the literature.

    PubMed

    Rushton, Paul R P; Grevitt, Michael P

    2013-04-20

    Review and statistical analysis of studies evaluating health-related quality of life (HRQOL) in adolescents with untreated adolescent idiopathic scoliosis (AIS) using Scoliosis Research Society (SRS) outcomes. To apply normative values and minimum clinical important differences for the SRS-22r to the literature. Identify whether the HRQOL of adolescents with untreated AIS differs from unaffected peers and whether any differences are clinically relevant. The effect of untreated AIS on adolescent HRQOL is uncertain. The lack of published normative values and minimum clinical important difference for the SRS-22r has so far hindered our interpretation of previous studies. The publication of this background data allows these studies to be re-examined. Using suitable inclusion criteria, a literature search identified studies examining HRQOL in untreated adolescents with AIS. Each cohort was analyzed individually. Statistically significant differences were identified by using 95% confidence intervals for the difference in SRS-22r domain mean scores between the cohorts with AIS and the published data for unaffected adolescents. If the lower bound of the confidence interval was greater than the minimum clinical important difference, the difference was considered clinically significant. Of the 21 included patient cohorts, 81% reported statistically worse pain than those unaffected. Yet in only 5% of cohorts was this difference clinically important. Of the 11 cohorts included examining patient self-image, 91% reported statistically worse scores than those unaffected. In 73% of cohorts this difference was clinically significant. Affected cohorts tended to score well in function/activity and mental health domains and differences from those unaffected rarely reached clinically significant values. Pain and self-image tend to be statistically lower among cohorts with AIS than those unaffected. The literature to date suggests that it is only self-image which consistently differs clinically. This should be considered when assessing the possible benefits of surgery.

  13. Electroencephalogram Signal Classification for Automated Epileptic Seizure Detection Using Genetic Algorithm

    PubMed Central

    Nanthini, B. Suguna; Santhi, B.

    2017-01-01

    Background: Epilepsy occurs when repeated seizures arise in the brain. The electroencephalogram (EEG) provides valuable information about brain function and can be useful for detecting brain disorders, especially epilepsy. In this study, an automated seizure detection model is introduced. Materials and Methods: The EEG signals are decomposed into sub-bands by the discrete wavelet transform using the db2 (Daubechies) wavelet. Sixteen features, namely eight statistical features, four gray level co-occurrence matrix features, and Renyi entropy estimates with four different orders, are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select eight relevant features from this 16-dimensional feature set. The model has been trained and tested on EEG signals using a support vector machine (SVM) classifier. The performance of the SVM classifier is evaluated on two different databases. Results: The study was carried out as two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as input to the SVM classifier. Conclusion: Relevant features selected by the GA give better accuracy for seizure detection. PMID:28781480
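
    As a concrete illustration of the pipeline this abstract describes, the following is a minimal sketch assuming synthetic signals in place of the clinical EEG databases: db2 wavelet sub-bands, simple per-band statistical features, a toy genetic algorithm over binary feature masks, and an SVM scored by cross-validation. The wavelet level, feature set, and GA parameters are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: db2 sub-bands -> statistical features -> GA feature selection -> SVM.
# Synthetic signals stand in for the EEG databases; all parameters are illustrative.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(signal):
    """Mean/std/skewness/kurtosis of the raw signal and its db2 sub-bands."""
    bands = [signal] + pywt.wavedec(signal, "db2", level=3)   # raw + approx/detail bands
    feats = []
    for b in bands[:4]:                                        # 4 bands x 4 stats = 16 features
        feats += [np.mean(b), np.std(b), skew(b), kurtosis(b)]
    return np.array(feats)

def make_signal(seizure):
    """Toy 'seizure' = noise plus a high-amplitude low-frequency burst."""
    t = np.linspace(0, 1, 256)
    base = rng.normal(0, 1, t.size)
    return base + (4 * np.sin(2 * np.pi * 3 * t) if seizure else 0)

X = np.array([features(make_signal(s)) for s in ([True] * 50 + [False] * 50)])
y = np.array([1] * 50 + [0] * 50)

def fitness(mask):
    """Cross-validated SVM accuracy using only the features switched on in the mask."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=5).mean()

# Toy genetic algorithm over 16-bit feature masks (population 20, 30 generations,
# mutation only; a full GA would also use crossover).
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the best half
    children = parents[rng.integers(0, 10, 10)].copy()
    flip = rng.random(children.shape) < 0.1          # mutate 10% of the bits
    children[flip] = 1 - children[flip]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "CV accuracy:", fitness(best))
```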

  14. Electrical Conductivity of Charged Particle Systems and Zubarev's Nonequilibrium Statistical Operator Method

    NASA Astrophysics Data System (ADS)

    Röpke, G.

    2018-01-01

    One of the fundamental problems in physics that are not yet rigorously solved is the statistical mechanics of nonequilibrium processes. An important contribution to describing irreversible behavior starting from reversible Hamiltonian dynamics was given by D. N. Zubarev, who invented the method of the nonequilibrium statistical operator. We discuss this approach, in particular, the extended von Neumann equation, and as an example consider the electrical conductivity of a system of charged particles. We consider the selection of the set of relevant observables. We show the relation between kinetic theory and linear response theory. Using thermodynamic Green's functions, we present a systematic treatment of correlation functions, but the convergence needs investigation. We compare different expressions for the conductivity and list open questions.
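
    For readers unfamiliar with the construction, the following is a standard textbook form of the extended von Neumann equation behind Zubarev's nonequilibrium statistical operator (a sketch, not quoted from this article): an infinitesimal source term breaks time-reversal symmetry and selects the retarded solution built from the relevant (quasi-equilibrium) operator.

```latex
% Extended von Neumann equation with Zubarev's infinitesimal source term;
% \varepsilon \to +0 is taken after the thermodynamic limit.
\frac{\partial \rho(t)}{\partial t} + \frac{i}{\hbar}\,[H, \rho(t)]
  = -\varepsilon \bigl(\rho(t) - \rho_{\mathrm{rel}}(t)\bigr),
\qquad \varepsilon \to +0,
% with the formal (retarded) solution
\rho(t) = \varepsilon \int_{-\infty}^{t} dt'\, e^{\varepsilon (t'-t)}\,
          U(t,t')\, \rho_{\mathrm{rel}}(t')\, U^{\dagger}(t,t').
```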

  15. Analytics for Cyber Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plantenga, Todd.; Kolda, Tamara Gibson

    2011-06-01

    This report provides a brief survey of analytics tools considered relevant to cyber network defense (CND). Ideas and tools come from fields such as statistics, data mining, and knowledge discovery. Some analytics are considered standard mathematical or statistical techniques, while others reflect current research directions. In all cases the report attempts to explain the relevance to CND with brief examples.

  16. The Effects of Clinically Relevant Multiple-Choice Items on the Statistical Discrimination of Physician Clinical Competence.

    ERIC Educational Resources Information Center

    Downing, Steven M.; Maatsch, Jack L.

    To test the effect of clinically relevant multiple-choice item content on the validity of statistical discriminations of physicians' clinical competence, data were collected from a field test of the Emergency Medicine Examination, test items for the certification of specialists in emergency medicine. Two 91-item multiple-choice subscales were…

  17. Twin Data That Made a Big Difference, and That Deserve to Be Better-Known and Used in Teaching

    ERIC Educational Resources Information Center

    Campbell, Harlan; Hanley, James A.

    2017-01-01

    Because of their efficiency and ability to keep many other factors constant, twin studies have a special appeal for investigators. Just as with any teaching dataset, a "matched-sets" dataset used to illustrate a statistical model should be compelling, still relevant, and valid. Indeed, such a "model dataset" should meet the…

  18. Butterfly Floquet Spectrum in Driven SU(2) Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jiao; Department of Physics, Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen 361005; Gong Jiangbin

    2009-06-19

    The Floquet spectrum of a class of driven SU(2) systems is shown to display a butterfly pattern with multifractal properties. The level crossing between Floquet states of the same parity or different parities is studied. The results are relevant to studies of fractal statistics, quantum chaos, coherent destruction of tunneling, and the validity of mean-field descriptions of Bose-Einstein condensates.

  19. Rank Dynamics of Word Usage at Multiple Scales

    NASA Astrophysics Data System (ADS)

    Morales, José A.; Colman, Ewan; Sánchez, Sergio; Sánchez-Puig, Fernanda; Pineda, Carlos; Iñiguez, Gerardo; Cocho, Germinal; Flores, Jorge; Gershenson, Carlos

    2018-05-01

    The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
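
    One of the rank-dynamics measures mentioned, rank diversity, is simple to compute: for each rank, count how many distinct words occupy it across the yearly frequency tables. The following minimal sketch uses a tiny hypothetical corpus in place of the Google Books N-grams data.

```python
# Minimal sketch of rank diversity: how many distinct words occupy a given rank
# across yearly frequency tables. The toy counts below are hypothetical; real
# input would be Google Books N-gram counts.
from collections import defaultdict

yearly_counts = {
    2000: {"the": 100, "of": 80, "data": 20, "model": 10},
    2001: {"the": 110, "of": 70, "model": 30, "data": 15},
    2002: {"the": 120, "model": 60, "of": 50, "language": 25},
}

words_at_rank = defaultdict(set)
for year, counts in yearly_counts.items():
    ranked = sorted(counts, key=counts.get, reverse=True)
    for rank, word in enumerate(ranked, start=1):
        words_at_rank[rank].add(word)

n_years = len(yearly_counts)
for rank in sorted(words_at_rank):
    diversity = len(words_at_rank[rank]) / n_years   # low values = stable "core" words
    print(f"rank {rank}: diversity {diversity:.2f}")
```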

  20. A randomized trial comparing INR monitoring devices in patients with anticoagulation self-management: evaluation of a novel error-grid approach.

    PubMed

    Hemkens, Lars G; Hilden, Kristian M; Hartschen, Stephan; Kaiser, Thomas; Didjurgeit, Ulrike; Hansen, Roland; Bender, Ralf; Sawicki, Peter T

    2008-08-01

    In addition to the metrological quality of international normalized ratio (INR) monitoring devices used in patients' self-management of long-term anticoagulation, the effectiveness of self-monitoring with such devices has to be evaluated under real-life conditions with a focus on clinical implications. One approach to evaluating the clinical significance of inaccuracies is error-grid analysis, as already established in self-monitoring of blood glucose. Two anticoagulation monitors were compared in a real-life setting, and a novel error-grid instrument for oral anticoagulation was evaluated. In a randomized crossover study, 16 patients performed self-management of anticoagulation using the INRatio and the CoaguChek S systems. Main outcome measures were clinically relevant INR differences according to established criteria and to the error-grid approach. A lower rate of clinically relevant disagreements according to Anderson's criteria was found with CoaguChek S than with INRatio, although the difference was not statistically significant (10.77% vs. 12.90%; P = 0.787). Using the error-grid we found broadly consistent results: more measurement pairs with discrepancies of no or low clinical relevance were found with CoaguChek S, whereas more differences of moderate clinical relevance were found with INRatio. Patient satisfaction was high with both point-of-care devices, with only marginal differences between them. The investigated point-of-care devices are shown to be, in principle, appropriate for monitoring the INR. The error-grid is useful for comparing monitoring methods with a focus on clinical relevance under real-life conditions, beyond assessing pure metrological quality, but we emphasize that additional trials using this instrument with larger patient populations are needed to detect differences in clinically relevant disagreements.

  1. Rodent Biocompatibility Test Using the NASA Foodbar and Epoxy EP21LV

    NASA Technical Reports Server (NTRS)

    Tillman, J.; Steele, M.; Dumars, P.; Vasques, M.; Girten, B.; Sun, S. (Technical Monitor)

    2002-01-01

    Epoxy has been used successfully to affix NASA foodbars to the inner walls of the Animal Enclosure Module for past space flight experiments utilizing rodents. The epoxy used on past missions was discontinued, making it necessary to identify a new epoxy for use on the STS-108 and STS-107 missions. This experiment was designed to test the basic biocompatibility of epoxy EP21LV with male rats (Sprague Dawley) and mice (Swiss Webster) when applied to NASA foodbars. For each species, the test was conducted with a control group fed untreated foodbars and an experimental group fed foodbars treated with EP21LV. For each species, there were no group differences in animal health and no statistical differences (P<0.05) in body weights throughout the study. In mice, there was a 16% increase in heart weight in the epoxy group; this result was not found in rats. For both species, there were no statistical differences found in the other organ weights measured. In rats, blood glucose levels were 15% higher and both total protein and globulin were 10% lower in the epoxy group. Statistical differences in these parameters were not found in mice. For both species, no statistical differences were found in the other blood parameters tested. Food consumption was not different in rats, but water consumption was significantly decreased by 10 to 15% in the epoxy group. The difference in water consumption is likely due to an increased water content of the epoxy-treated foodbars. Finally, both species avoided consumption of the epoxy material. Based on the global analysis of the results, the few parameters found to be statistically different do not appear to reflect a physiologically relevant effect of the epoxy material. We conclude that the EP21LV epoxy is biocompatible with rodents.

  2. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data and significantly reduce the workload needed to simulate experimentally relevant virtual samples.

  3. Data Science in the Research Domain Criteria Era: Relevance of Machine Learning to the Study of Stress Pathology, Recovery, and Resilience

    PubMed Central

    Galatzer-Levy, Isaac R.; Ruggles, Kelly; Chen, Zhe

    2017-01-01

    Diverse environmental and biological systems interact to influence individual differences in response to environmental stress. Understanding the nature of these complex relationships can enhance the development of methods to: (1) identify risk, (2) classify individuals as healthy or ill, (3) understand mechanisms of change, and (4) develop effective treatments. The Research Domain Criteria (RDoC) initiative provides a theoretical framework to understand health and illness as the product of multiple inter-related systems, but does not provide a framework to characterize or statistically evaluate such complex relationships. Characterizing and statistically evaluating models that integrate multiple levels (e.g., synapses, genes, environmental factors) as they relate to outcomes that are free from prior diagnostic benchmarks represents a challenge requiring new computational tools that are capable of capturing complex relationships and identifying clinically relevant populations. In the current review, we summarize machine learning methods that can achieve these goals. PMID:29527592

  4. Relevant principal component analysis applied to the characterisation of Portuguese heather honey.

    PubMed

    Martins, Rui C; Lopes, Victor V; Valentão, Patrícia; Carvalho, João C M F; Isabel, Paulo; Amaral, Maria T; Batista, Maria T; Andrade, Paula B; Silva, Branca M

    2008-01-01

    The main purpose of this study was the characterisation of 'Serra da Lousã' heather honey by using novel statistical methodology, relevant principal component analysis, in order to assess the correlations between production year, locality and composition. Herein, we also report its chemical composition in terms of sugars, glycerol and ethanol, and physicochemical parameters. Sugars profiles from 'Serra da Lousã' heather and 'Terra Quente de Trás-os-Montes' lavender honeys were compared and allowed the discrimination: 'Serra da Lousã' honeys do not contain sucrose, generally exhibit lower contents of turanose, trehalose and maltose and higher contents of fructose and glucose. Different localities from 'Serra da Lousã' provided groups of samples with high and low glycerol contents. Glycerol and ethanol contents were revealed to be independent of the sugars profiles. These data and statistical models can be very useful in the comparison and detection of adulterations during the quality control analysis of 'Serra da Lousã' honey.

  5. Quantifying predictability in a model with statistical features of the atmosphere

    PubMed Central

    Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya

    2002-01-01

    The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
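
    Because the prediction and climatological distributions here are approximately Gaussian, the relative entropy functional has a closed form that separates into exactly the "signal" and "dispersion" contributions discussed above. The following is the standard Kullback-Leibler expression for two Gaussians (a sketch, not quoted from the paper), with p the prediction and q the climatology:

```latex
% Relative entropy of the Gaussian prediction p = N(\mu_p, \Sigma_p) with respect
% to the Gaussian climatology q = N(\mu_q, \Sigma_q), in n dimensions.
R(p \,\|\, q) =
  \underbrace{\tfrac{1}{2}\,(\mu_p - \mu_q)^{\mathsf T} \Sigma_q^{-1} (\mu_p - \mu_q)}_{\text{signal}}
  \;+\;
  \underbrace{\tfrac{1}{2}\Bigl[\operatorname{tr}\bigl(\Sigma_q^{-1}\Sigma_p\bigr)
      - n - \ln\det\bigl(\Sigma_q^{-1}\Sigma_p\bigr)\Bigr]}_{\text{dispersion}}
```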

  6. A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback.

    PubMed

    Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C

    2007-01-01

    A content-based image retrieval (CBIR) framework for diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specially, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-mean (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed in a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.
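
    To make the two-stage idea concrete, the following is a minimal sketch: a probabilistic SVM prefilters the most likely semantic categories in a PCA eigenspace, and a simple distance then ranks images within the retained categories. The synthetic features, the Euclidean distance (standing in for the paper's statistical distance measures), and the parameters are illustrative assumptions.

```python
# Minimal sketch of SVM-based category prefiltering followed by similarity
# ranking in a PCA eigenspace. Features and category labels are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_cats, per_cat, dim = 5, 40, 30
X = np.vstack([rng.normal(c, 1.0, (per_cat, dim)) for c in range(n_cats)])
labels = np.repeat(np.arange(n_cats), per_cat)

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)
clf = SVC(kernel="rbf", probability=True).fit(Z, labels)

def retrieve(query_feat, top_k=5, keep_cats=2):
    zq = pca.transform(query_feat.reshape(1, -1))
    probs = clf.predict_proba(zq)[0]
    candidates = np.argsort(probs)[-keep_cats:]   # prefilter: most likely categories
    mask = np.isin(labels, candidates)
    # Euclidean distance in the eigenspace stands in for the statistical measures.
    d = np.linalg.norm(Z[mask] - zq, axis=1)
    return np.flatnonzero(mask)[np.argsort(d)[:top_k]]

print(retrieve(X[3]))   # indices of the 5 most similar database images
```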

  7. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada

    PubMed Central

    Smylie, Janet; Firestone, Michelle

    2015-01-01

    Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard however with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges including misclassification errors and non-response bias systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations. PMID:26793283

  8. Back to basics: an introduction to statistics.

    PubMed

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  9. Make measurable what is not so: national monitoring of the status of persons with intellectual disability.

    PubMed

    Fujiura, Glenn T; Rutkowski-Kmitta, Violet; Owen, Randall

    2010-12-01

    Statistics are critical in holding governments accountable for the well-being of citizens with disability. International initiatives are underway to improve the quality of disability statistics, but meaningful ID data is exceptionally rare. The status of ID data was evaluated in a review of 12 national statistical systems. Recurring data collection by national ministries was identified and the availability of measures of poverty, exclusion, and disadvantage was assessed. A total of 131 recurring systems coordinated by 50 different ministries were identified. The majority included general disability but less than 25% of the systems screened ID. Of these, few provided policy-relevant data. The scope of ID data was dismal at best, though a significant statistical infrastructure exists for the integration of ID data. Advocacy will be necessary. There is no optimal form of data monitoring, and decisions regarding priorities in purpose, targeted audiences, and the goals for surveillance must be resolved.

  10. Detecting temporal change in freshwater fisheries surveys: statistical power and the important linkages between management questions and monitoring objectives

    USGS Publications Warehouse

    Wagner, Tyler; Irwin, Brian J.; James R. Bence,; Daniel B. Hayes,

    2016-01-01

    Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
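
    As an illustration of how such power expectations can be developed, the following is a minimal sketch of a Monte Carlo power calculation for detecting a linear temporal trend in an annual survey index with ordinary least squares; the trend size, observation error, and monitoring durations are illustrative assumptions, not values from the studies reviewed.

```python
# Minimal sketch: Monte Carlo statistical power for detecting a linear temporal
# trend in an annual survey index. Trend size, noise level, and survey length
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power(n_years, annual_change=0.03, cv=0.3, n_sim=2000, alpha=0.05):
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sim):
        mean = 100 * (1 + annual_change) ** years          # e.g. 3% change per year
        index = mean * rng.lognormal(0, cv, n_years)       # observation error
        slope, _, _, p, _ = stats.linregress(years, np.log(index))
        hits += (p < alpha) and (slope > 0)
    return hits / n_sim

for n in (5, 10, 20, 30):
    print(f"{n:2d} years of monitoring -> power {power(n):.2f}")
```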

  11. [Statistical validity of the Mexican Food Security Scale and the Latin American and Caribbean Food Security Scale].

    PubMed

    Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo

    2014-01-01

    This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales are consistent instruments, composed of independent, properly calibrated and adequately ordered items arranged along a continuum of severity. The following tests were carried out: ordering of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; and sensitivity analysis through hypothesis tests of mean differences. The tests showed that both scales meet the required attributes and are robust statistical instruments for food security measurement. This is relevant given that the lack-of-access-to-food indicator, included in multidimensional poverty measurement in Mexico, is calculated with the EMSA.
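
    Of the consistency checks listed, Cronbach's alpha is the simplest to show. The following is a minimal sketch on a hypothetical matrix of dichotomous item responses; it is not EMSA or ELCSA data.

```python
# Minimal sketch of Cronbach's alpha for a set of dichotomous scale items.
# The response matrix below is hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scored responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

responses = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
    [1, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```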

  12. Guidelines for Genome-Scale Analysis of Biological Rhythms.

    PubMed

    Hughes, Michael E; Abruzzi, Katherine C; Allada, Ravi; Anafi, Ron; Arpat, Alaaddin Bulak; Asher, Gad; Baldi, Pierre; de Bekker, Charissa; Bell-Pedersen, Deborah; Blau, Justin; Brown, Steve; Ceriani, M Fernanda; Chen, Zheng; Chiu, Joanna C; Cox, Juergen; Crowell, Alexander M; DeBruyne, Jason P; Dijk, Derk-Jan; DiTacchio, Luciano; Doyle, Francis J; Duffield, Giles E; Dunlap, Jay C; Eckel-Mahan, Kristin; Esser, Karyn A; FitzGerald, Garret A; Forger, Daniel B; Francey, Lauren J; Fu, Ying-Hui; Gachon, Frédéric; Gatfield, David; de Goede, Paul; Golden, Susan S; Green, Carla; Harer, John; Harmer, Stacey; Haspel, Jeff; Hastings, Michael H; Herzel, Hanspeter; Herzog, Erik D; Hoffmann, Christy; Hong, Christian; Hughey, Jacob J; Hurley, Jennifer M; de la Iglesia, Horacio O; Johnson, Carl; Kay, Steve A; Koike, Nobuya; Kornacker, Karl; Kramer, Achim; Lamia, Katja; Leise, Tanya; Lewis, Scott A; Li, Jiajia; Li, Xiaodong; Liu, Andrew C; Loros, Jennifer J; Martino, Tami A; Menet, Jerome S; Merrow, Martha; Millar, Andrew J; Mockler, Todd; Naef, Felix; Nagoshi, Emi; Nitabach, Michael N; Olmedo, Maria; Nusinow, Dmitri A; Ptáček, Louis J; Rand, David; Reddy, Akhilesh B; Robles, Maria S; Roenneberg, Till; Rosbash, Michael; Ruben, Marc D; Rund, Samuel S C; Sancar, Aziz; Sassone-Corsi, Paolo; Sehgal, Amita; Sherrill-Mix, Scott; Skene, Debra J; Storch, Kai-Florian; Takahashi, Joseph S; Ueda, Hiroki R; Wang, Han; Weitz, Charles; Westermark, Pål O; Wijnen, Herman; Xu, Ying; Wu, Gang; Yoo, Seung-Hee; Young, Michael; Zhang, Eric Erquan; Zielinski, Tomasz; Hogenesch, John B

    2017-10-01

    Genome biology approaches have made enormous contributions to our understanding of biological rhythms, particularly in identifying outputs of the clock, including RNAs, proteins, and metabolites, whose abundance oscillates throughout the day. These methods hold significant promise for future discovery, particularly when combined with computational modeling. However, genome-scale experiments are costly and laborious, yielding "big data" that are conceptually and statistically difficult to analyze. There is no obvious consensus regarding design or analysis. Here we discuss the relevant technical considerations to generate reproducible, statistically sound, and broadly useful genome-scale data. Rather than suggest a set of rigid rules, we aim to codify principles by which investigators, reviewers, and readers of the primary literature can evaluate the suitability of different experimental designs for measuring different aspects of biological rhythms. We introduce CircaInSilico, a web-based application for generating synthetic genome biology data to benchmark statistical methods for studying biological rhythms. Finally, we discuss several unmet analytical needs, including applications to clinical medicine, and suggest productive avenues to address them.

  14. Statistical functions and relevant correlation coefficients of clearness index

    NASA Astrophysics Data System (ADS)

    Pavanello, Diego; Zaaiman, Willem; Colli, Alessandra; Heiser, John; Smith, Scott

    2015-08-01

    This article presents a statistical analysis of the sky conditions, during the years 2010 to 2012, for three different locations: the Joint Research Centre site in Ispra (Italy, European Solar Test Installation - ESTI laboratories), the site of the National Renewable Energy Laboratory in Golden (Colorado, USA) and the site of Brookhaven National Laboratories in Upton (New York, USA). The key parameter is the clearness index kT, a dimensionless expression of the global irradiance impinging upon a horizontal surface at a given instant of time. In the first part, the sky conditions are characterized using daily averages, giving a general overview of the three sites. In the second part the analysis is performed using data sets with a short-term resolution of 1 sample per minute, demonstrating remarkable properties of the statistical distributions of the clearness index, reinforced by a proof using fuzzy logic methods. Subsequently, some time-dependent correlations between different meteorological variables are presented in terms of Pearson and Spearman correlation coefficients, and a new coefficient is introduced.
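
    The two ingredients of the analysis are easy to state in code. The following minimal sketch computes the clearness index kT as measured global horizontal irradiance divided by the extraterrestrial horizontal irradiance at the same instant, and then the Pearson and Spearman correlations between kT and another meteorological variable; all values are made up for illustration.

```python
# Minimal sketch: clearness index kT = GHI / extraterrestrial horizontal
# irradiance, plus Pearson and Spearman correlations with another variable.
# The irradiance and temperature values below are made up for illustration.
import numpy as np
from scipy import stats

ghi = np.array([120.0, 450.0, 800.0, 950.0, 600.0, 200.0])      # measured W/m^2
extraterrestrial = np.array([400.0, 900.0, 1100.0, 1150.0, 1000.0, 500.0])
ambient_temp = np.array([8.0, 14.0, 21.0, 24.0, 18.0, 10.0])    # deg C

kT = ghi / extraterrestrial                                      # dimensionless, 0 to ~1
pearson_r, pearson_p = stats.pearsonr(kT, ambient_temp)
spearman_r, spearman_p = stats.spearmanr(kT, ambient_temp)

print("kT:", np.round(kT, 2))
print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_r:.2f} (p = {spearman_p:.3f})")
```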

  15. Non-arbitrage in financial markets: A Bayesian approach for verification

    NASA Astrophysics Data System (ADS)

    Cerezetti, F. V.; Stern, Julio Michael

    2012-10-01

    The concept of non-arbitrage plays an essential role in finance theory. Under certain regularity conditions, the Fundamental Theorem of Asset Pricing states that, in non-arbitrage markets, prices of financial instruments are martingale processes. In this theoretical framework, the analysis of the statistical distributions of financial assets can assist in understanding how participants behave in the markets, and may or may not engender arbitrage conditions. Assuming an underlying Variance Gamma statistical model, this study aims to test, using the FBST - Full Bayesian Significance Test, if there is a relevant price difference between essentially the same financial asset traded at two distinct locations. Specifically, we investigate and compare the behavior of call options on the BOVESPA Index traded at (a) the Equities Segment and (b) the Derivatives Segment of BM&FBovespa. Our results seem to point out significant statistical differences. To what extent this evidence is actually the expression of perennial arbitrage opportunities is still an open question.

  16. An Analysis of Competencies for Managing Science and Technology Programs

    DTIC Science & Technology

    2008-03-19

    competency modeling through a two-year task force commissioned by the Society for Industrial and Organizational Psychology (Shippmann and others, 2000:704...positions—specifically within Research and Development (R&D) programs. If so, the final investigative question tests whether those differences are...statistics are used to analyze the comparisons through hypothesis testing and t-tests relevant to the research investigative questions. These

  17. Features of statistical dynamics in a finite system

    NASA Astrophysics Data System (ADS)

    Yan, Shiwei; Sakata, Fumihiko; Zhuo, Yizhong

    2002-03-01

    We study features of statistical dynamics in a finite Hamilton system composed of a relevant one degree of freedom coupled to an irrelevant multidegree of freedom system through a weak interaction. Special attention is paid on how the statistical dynamics changes depending on the number of degrees of freedom in the irrelevant system. It is found that the macrolevel statistical aspects are strongly related to an appearance of the microlevel chaotic motion, and a dissipation of the relevant motion is realized passing through three distinct stages: dephasing, statistical relaxation, and equilibrium regimes. It is clarified that the dynamical description and the conventional transport approach provide us with almost the same macrolevel and microlevel mechanisms only for the system with a very large number of irrelevant degrees of freedom. It is also shown that the statistical relaxation in the finite system is an anomalous diffusion and the fluctuation effects have a finite correlation time.

  19. Clinical relevance of findings in trials of CBT for depression.

    PubMed

    Lepping, P; Whittington, R; Sambhi, R S; Lane, S; Poole, R; Leucht, S; Cuijpers, P; McCabe, R; Waheed, W

    2017-09-01

    Cognitive behavioural therapy (CBT) is beneficial in depression. Symptom scores can be translated into Clinical Global Impression (CGI) scale scores to indicate clinical relevance. We aimed to assess the clinical relevance of findings of randomised controlled trials (RCTs) of CBT in depression. We identified RCTs of CBT that used the Hamilton Rating Scale for Depression (HAMD). HAMD scores were translated into Clinical Global Impression - Change scale (CGI-I) scores to measure clinical relevance. One hundred and seventy datasets from 82 studies were included. The mean percentage HAMD change for treatment arms was 53.66%, and 29.81% for control arms, a statistically significant difference. Combined active therapies showed the biggest improvement on CGI-I score, followed by CBT alone. All active treatments had better than expected HAMD percentage reduction and CGI-I scores. CBT has a clinically relevant effect in depression, with a notional CGI-I score of 2.2, indicating a significant clinical response. The non-specific or placebo effect of being in a psychotherapy trial was a 29% reduction of HAMD. Copyright © 2017. Published by Elsevier Masson SAS.

  20. Active browsing using similarity pyramids

    NASA Astrophysics Data System (ADS)

    Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.

    1998-12-01

    In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.

  1. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    ERIC Educational Resources Information Center

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  2. Functional relevance of interindividual differences in temporal lobe callosal pathways: a DTI tractography study.

    PubMed

    Westerhausen, René; Grüner, Renate; Specht, Karsten; Hugdahl, Kenneth

    2009-06-01

    The midsagittal corpus callosum is topographically organized, that is, with regard to their cortical origin several subtracts can be distinguished within the corpus callosum that belong to specific functional brain networks. Recent diffusion tensor tractography studies have also revealed remarkable interindividual differences in the size and exact localization of these tracts. To examine the functional relevance of interindividual variability in callosal tracts, 17 right-handed male participants underwent structural and diffusion tensor magnetic resonance imaging. Probabilistic tractography was carried out to identify the callosal subregions that interconnect left and right temporal lobe auditory processing areas, and the midsagittal size of this tract was seen as indicator of the (anatomical) strength of this connection. Auditory information transfer was assessed applying an auditory speech perception task with dichotic presentations of consonant-vowel syllables (e.g., /ba-ga/). The frequency of correct left ear reports in this task served as a functional measure of interhemispheric transfer. Statistical analysis showed that a stronger anatomical connection between the superior temporal lobe areas supports a better information transfer. This specific structure-function association in the auditory modality supports the general notion that interindividual differences in callosal topography possess functional relevance.

  3. Patch testing in children from 2005 to 2012: results from the North American contact dermatitis group.

    PubMed

    Zug, Kathryn A; Pham, Anh Khoa; Belsito, Donald V; DeKoven, Joel G; DeLeo, Vincent A; Fowler, Joseph F; Fransway, Anthony F; Maibach, Howard I; Marks, James G; Mathias, C G Toby; Pratt, Melanie D; Sasseville, Denis; Storrs, Frances J; Taylor, James S; Warshaw, Erin M; Zirwas, Matthew J

    2014-01-01

    Allergic contact dermatitis is common in children. Epicutaneous patch testing is an important tool for identifying responsible allergens. The objective of this study was to provide the patch test results from children (aged ≤18 years) examined by the North American Contact Dermatitis Group from 2005 to 2012. This is a retrospective analysis of children patch-tested with the North American Contact Dermatitis Group 65- or 70-allergen series. Frequencies and counts were compared with previously published data (2001-2004) using χ² statistics. A total of 883 children were tested during the study period. Overall, 62.3% had ≥1 positive patch test and 56.7% had ≥1 relevant positive patch test. Frequencies of positive patch test and relevant positive patch test reactions were highest with nickel sulfate (28.1/25.6), cobalt chloride (12.3/9.1), neomycin sulfate (7.1/6.6), balsam of Peru (5.7/5.5), and lanolin alcohol 50% in petrolatum vehicle (5.5/5.1). The frequencies of ≥1 positive patch test and ≥1 relevant positive patch test in children did not differ significantly from those in adults (≥19 years) or from previously tested children (2001-2004). The percentage of clinically relevant positive patch tests for 27 allergens differed significantly between the children and adults. A total of 23.6% of children had a relevant positive reaction to at least 1 supplemental allergen. Differences in positive patch test and relevant positive patch test frequencies between children and adults, as well as between test periods, confirm the importance of reporting periodic updates of patch testing in children to enhance clinicians' vigilance to clinically important allergens.

  4. Do two machine-learning based prognostic signatures for breast cancer capture the same biological processes?

    PubMed

    Drier, Yotam; Domany, Eytan

    2011-03-14

    The fact that there is very little if any overlap between the genes of different prognostic signatures for early-discovery breast cancer is well documented. The reasons for this apparent discrepancy have been explained by the limits of simple machine-learning identification and ranking techniques, and the biological relevance and meaning of the prognostic gene lists was questioned. Subsequently, proponents of the prognostic gene lists claimed that different lists do capture similar underlying biological processes and pathways. The present study places under scrutiny the validity of this claim, for two important gene lists that are at the focus of current large-scale validation efforts. We performed careful enrichment analysis, controlling the effects of multiple testing in a manner which takes into account the nested dependent structure of gene ontologies. In contradiction to several previous publications, we find that the only biological process or pathway for which statistically significant concordance can be claimed is cell proliferation, a process whose relevance and prognostic value was well known long before gene expression profiling. We found that the claims reported by others, of wider concordance between the biological processes captured by the two prognostic signatures studied, were found either to be lacking statistical rigor or were in fact based on addressing some other question.
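
    For readers unfamiliar with the mechanics, the following is a minimal sketch of the core step in an enrichment analysis: a one-sided hypergeometric test of whether a prognostic signature is over-represented in a biological-process category, with a plain Bonferroni correction. The gene counts are made up, and the nested, ontology-aware multiple-testing control the authors describe is more involved than this.

```python
# Minimal sketch of the core step in enrichment analysis: a hypergeometric test
# for over-representation of a gene signature in a process category, with a
# simple Bonferroni correction. Counts are made up for illustration.
from scipy.stats import hypergeom

genome_size = 20000
signature_size = 70             # genes in the prognostic signature

categories = {                  # category: (genes in category, overlap with signature)
    "cell proliferation": (1200, 18),
    "immune response": (900, 5),
    "apoptosis": (600, 3),
}

n_tests = len(categories)
for name, (cat_size, overlap) in categories.items():
    # P(X >= overlap) when drawing signature_size genes without replacement
    p = hypergeom.sf(overlap - 1, genome_size, cat_size, signature_size)
    print(f"{name:20s} raw p = {p:.2e}  Bonferroni p = {min(1.0, p * n_tests):.2e}")
```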

  5. Evaluation of a Partial Genome Screening of Two Asthma Susceptibility Regions Using Bayesian Network Based Bayesian Multilevel Analysis of Relevance

    PubMed Central

    Antal, Péter; Kiszel, Petra Sz.; Gézsi, András; Hadadi, Éva; Virág, Viktor; Hajós, Gergely; Millinghoffer, András; Nagy, Adrienne; Kiss, András; Semsei, Ágnes F.; Temesi, Gergely; Melegh, Béla; Kisfali, Péter; Széll, Márta; Bikov, András; Gálffy, Gabriella; Tamási, Lilla; Falus, András; Szalai, Csaba

    2012-01-01

    Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods, and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses a Bayesian network representation to provide a detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to assess whether a variable is directly relevant or whether its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2–1.8); p = 3×10⁻⁴). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and in human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with the asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of the relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong-relevance-based methods to include partial relevance, global characterization of relevance and multi-target relevance. PMID:22432035

  6. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. In addition, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitution in this subset of participants. A score function based on the U-statistic can address both of these issues: 1) intercurrent SCEs, and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
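
    A minimal sketch of a pairwise scoring rule of the kind described here: each treatment/control pair is compared first on the intervening clinical event and, only if neither patient had the event, on the change in the continuous endpoint; the mean pairwise score is a U-statistic. The data, the tie threshold, and the scoring hierarchy are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal sketch of a pairwise U-statistic score: pairs are compared first on the
# intervening clinical event (death precludes follow-up), then on the continuous
# endpoint change. Data and the tie threshold are illustrative.
import itertools

# (died_before_followup, change_in_continuous_endpoint or None if missing)
treatment = [(False, 5.2), (False, 1.1), (True, None), (False, 3.0)]
control = [(False, 0.4), (True, None), (False, -1.5), (True, None)]

def compare(t, c, tie=0.5):
    t_dead, t_delta = t
    c_dead, c_delta = c
    if t_dead or c_dead:                     # clinical event dominates the comparison
        return 0 if t_dead == c_dead else (1 if c_dead else -1)
    if abs(t_delta - c_delta) <= tie:        # continuous endpoint, within noise
        return 0
    return 1 if t_delta > c_delta else -1

scores = [compare(t, c) for t, c in itertools.product(treatment, control)]
u_stat = sum(scores) / len(scores)           # in [-1, 1]; > 0 favours treatment
print(f"U-statistic score = {u_stat:+.2f} over {len(scores)} pairs")
```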

  7. Sensitivity of submersed freshwater macrophytes and endpoints in laboratory toxicity tests.

    PubMed

    Arts, Gertie H P; Belgers, J Dick M; Hoekzema, Conny H; Thissen, Jac T N M

    2008-05-01

    The toxicological sensitivity and variability of a range of macrophyte endpoints were statistically tested with data from chronic, non-axenic, macrophyte toxicity tests. Five submersed freshwater macrophytes, four pesticides/biocides and 13 endpoints were included in the statistical analyses. Root endpoints, reflecting root growth, were most sensitive in the toxicity tests, while endpoints relating to biomass, growth and shoot length were less sensitive. The endpoints with the lowest coefficients of variation were not necessarily the endpoints, which were toxicologically most sensitive. Differences in sensitivity were in the range of 10-1000 for different macrophyte-specific endpoints. No macrophyte species was consistently the most sensitive. Criteria to select endpoints in macrophyte toxicity tests should include toxicological sensitivity, variance and ecological relevance. Hence, macrophyte toxicity tests should comprise an array of endpoints, including very sensitive endpoints like those relating to root growth.

  8. Understanding common statistical methods, Part I: descriptive methods, probability, and continuous data.

    PubMed

    Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A

    2011-01-01

    Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.

  9. MSW Students' Perceptions of Relevance and Application of Statistics: Implications for Field Education

    ERIC Educational Resources Information Center

    Davis, Ashley; Mirick, Rebecca G.

    2015-01-01

    Many social work students feel anxious when taking a statistics course. Their attitudes, beliefs, and behaviors after learning statistics are less known. However, such information could help instructors support students' ongoing development of statistical knowledge. With a sample of MSW students (N = 101) in one program, this study examined…

  10. Measured, modeled, and causal conceptions of fitness

    PubMed Central

    Abrams, Marshall

    2012-01-01

    This paper proposes partial answers to the following questions: in what senses can fitness differences plausibly be considered causes of evolution? What relationships are there between fitness concepts used in empirical research, modeling, and abstract theoretical proposals? How does the relevance of different fitness concepts depend on research questions and methodological constraints? The paper develops a novel taxonomy of fitness concepts, beginning with type fitness (a property of a genotype or phenotype), token fitness (a property of a particular individual), and purely mathematical fitness. Type fitness includes statistical type fitness, which can be measured from population data, and parametric type fitness, which is an underlying property estimated by statistical type fitnesses. Token fitness includes measurable token fitness, which can be measured on an individual, and tendential token fitness, which is assumed to be an underlying property of the individual in its environmental circumstances. Some of the paper's conclusions can be outlined as follows: claims that fitness differences do not cause evolution are reasonable when fitness is treated as statistical type fitness, measurable token fitness, or purely mathematical fitness. Some of the ways in which statistical methods are used in population genetics suggest that what natural selection involves are differences in parametric type fitnesses. Further, it's reasonable to think that differences in parametric type fitness can cause evolution. Tendential token fitnesses, however, are not themselves sufficient for natural selection. Though parametric type fitnesses are typically not directly measurable, they can be modeled with purely mathematical fitnesses and estimated by statistical type fitnesses, which in turn are defined in terms of measurable token fitnesses. The paper clarifies the ways in which fitnesses depend on pragmatic choices made by researchers. PMID:23112804

  11. How to show that unicorn milk is a chronobiotic: the regression-to-the-mean statistical artifact.

    PubMed

    Atkinson, G; Waterhouse, J; Reilly, T; Edwards, B

    2001-11-01

    Few chronobiologists may be aware of the regression-to-the-mean (RTM) statistical artifact, even though it may have far-reaching influences on chronobiological data. With the aid of simulated measurements of the circadian rhythm phase of body temperature and a completely bogus stimulus (unicorn milk), we explain what RTM is and provide examples relevant to chronobiology. We show how RTM may lead to erroneous conclusions regarding individual differences in phase responses to rhythm disturbances and how it may appear as though unicorn milk has phase-shifting effects and can successfully treat some circadian rhythm disorders. Guidelines are provided to ensure RTM effects are minimized in chronobiological investigations.
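
    The artifact is easy to reproduce in simulation. The following minimal sketch selects individuals with extreme baseline phase measurements, applies a do-nothing "treatment", and shows the follow-up group mean drifting back toward the population mean; all numbers are arbitrary.

```python
# Minimal sketch of the regression-to-the-mean artifact: select subjects with the
# most extreme baseline circadian phase, give them a do-nothing "treatment"
# (unicorn milk), and the follow-up mean still moves toward the group mean.
# All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
true_phase = rng.normal(0.0, 1.0, n)             # stable individual phase (hours)
baseline = true_phase + rng.normal(0.0, 1.0, n)  # measurement 1 = truth + noise
followup = true_phase + rng.normal(0.0, 1.0, n)  # measurement 2, no real change

extreme = baseline > np.percentile(baseline, 90)  # "delayed" subjects get the milk
print(f"baseline mean of treated group: {baseline[extreme].mean():+.2f} h")
print(f"follow-up mean of same group:  {followup[extreme].mean():+.2f} h")
# The drop between the two means is pure regression to the mean, not a phase shift.
```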

  12. The body of knowledge on compliance in heart failure patients: we are not there yet.

    PubMed

    Nieuwenhuis, Maurice M W; van der Wal, Martje H L; Jaarsma, Tiny

    2011-01-01

    Noncompliance with diet and fluid restriction is a problem in patients with heart failure (HF). In recent studies, a relationship between compliance with sodium and fluid restriction and knowledge and beliefs regarding compliance was found. In these studies, however, compliance was primarily measured by interview or questionnaire. To examine the relationship between compliance with sodium and fluid restriction measured with a nutrition diary and knowledge, beliefs, and other relevant variables in HF patients. Eighty-four HF patients completed a nutrition diary for 3 days. Patients also completed questionnaires on knowledge, beliefs regarding compliance, and depressive symptoms. Differences in relevant variables between compliant and noncompliant patients were assessed. Compliance with sodium and fluid restriction was 79% and 72%. Although not statistically significant, a higher percentage of patients were compliant with the less stringent restrictions compared with the more stringent restrictions, and in addition, more noncompliant patients perceived difficulty following the regimen compared with their compliant counterparts. In contrast with other studies, no significant differences in knowledge, beliefs, and relevant demographic and clinical variables were found between compliant and noncompliant patients. Perceived difficulty and the amount of the prescribed restriction seem to be relevant concepts that play a role in compliance with sodium and fluid restriction in HF and need to be explored in future research.

  13. Assessing the prevalence and clinical relevance of positive abdominal and pelvic CT findings in senior patients presenting to the emergency department.

    PubMed

    Alabousi, Abdullah; Patlas, Michael N; Meshki, Malek; Monteiro, Sandra; Katz, Douglas S

    2016-04-01

    The purpose of our study was to retrospectively evaluate the prevalence and clinical relevance of positive abdominal and pelvic CT findings for patients 65 years of age and older, when compared with all other scanned adult Emergency Department (ED) patients, at a single tertiary care hospital. Our hypothesis was that there is an increased prevalence and clinical relevance of positive abdominal/pelvic CT findings in senior patients. A research ethics board-approved retrospective review of all adult patients who underwent an emergency CT of the abdomen and pelvis for acute nontraumatic abdominal and/or pelvic signs and symptoms was performed. Two thousand one hundred two patients between October 1, 2011, and September 30, 2013, were reviewed. Six hundred thirty-one patients were included in the <65 group (298 men and 333 women; mean age 46, age range 18-64), and 462 were included in the >65 group (209 men and 253 women; mean age 77.6, age range 65-99). Overall, there were more positive CT findings for patients <65 (389 positive cases, 61.6 %) compared with the >65 group (257 positive cases, 55.6 %), which was a statistically significant difference (p < 0.03). Moreover, with the exception of complicated appendicitis cases, which were more common in the >65 group, there were no statistically significant differences in the clinical/surgical relevance of the positive CT findings between the two groups. The findings of our retrospective study therefore refute our hypothesis that there is an increased prevalence of positive abdominal CT findings in patients >65. This may be related to ED physicians at our institution being more hesitant to order CT examinations for the younger population, presumably due to radiation concerns. However, older patients in our series were more likely to present with complicated appendicitis, and a lower threshold for ordering CT examinations of the abdomen and pelvis in this patient population should therefore be considered.

  14. Indicators for evaluating European population health: a Delphi selection process.

    PubMed

    Freitas, Ângela; Santana, Paula; Oliveira, Mónica D; Almendra, Ricardo; Bana E Costa, João C; Bana E Costa, Carlos A

    2018-04-27

    Indicators are essential instruments for monitoring and evaluating population health. The selection of a multidimensional set of indicators should not only reflect the scientific evidence on health outcomes and health determinants, but also the views of health experts and stakeholders. The aim of this study is to describe the Delphi selection process designed to promote agreement on indicators considered relevant to evaluate population health at the European regional level. Indicators were selected in a Delphi survey conducted using a web-platform designed to implement and monitor participatory processes. It involved a panel of 51 experts and 30 stakeholders from different areas of knowledge and geographies. In three consecutive rounds, the panel indicated their level of agreement or disagreement with each indicator's relevance for evaluating population health in Europe. Inferential statistics were applied to draw conclusions on the observed level of agreement (Scott's Pi interrater reliability coefficient) and opinion change (McNemar Chi-square test). Multivariate analysis of variance was conducted to check whether the field of expertise influenced the panellists' responses (Wilk's Lambda test). The panel participated extensively in the study (overall response rate: 80%). Eighty indicators reached group agreement for selection in the areas of: economic and social environment (12); demographic change (5); lifestyle and health behaviours (8); physical environment (6); built environment (12); healthcare services (11); and health outcomes (26). Higher convergence of group opinion towards agreement on the relevance of indicators was seen for lifestyle and health behaviours, healthcare services, and health outcomes. The panellists' field of expertise influenced responses: statistically significant differences were found for economic and social environment (p < 0.05 in rounds 1 and 2), physical environment (p < 0.01 in round 1) and health outcomes (p < 0.01 in round 3). The high levels of participation observed in this study, by involving experts and stakeholders and ascertaining their views, underpinned the added value of using a transparent Web-Delphi process to promote agreement on what indicators are relevant to appraise population health.
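
    The abstract mentions McNemar's test for opinion change between Delphi rounds. A minimal sketch of that step is given below; the 2x2 table of agree/disagree responses is invented for illustration and statsmodels is assumed to be available.

        import numpy as np
        from statsmodels.stats.contingency_tables import mcnemar

        # Hypothetical responses of 81 panellists to one indicator in two consecutive rounds
        # (rows: round 1 agree/disagree, columns: round 2 agree/disagree)
        table = np.array([[35,  6],
                          [ 2, 38]])

        result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
        print(f"McNemar p-value: {result.pvalue:.3f}")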

  15. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced the probabilities of Type I and Type II errors, and ensured that statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages to consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
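
    A minimal sketch of the underlying idea, not the authors' exact procedure: for a two-sided two-sample z-test with an assumed effect size and sample size, an "optimal" α can be approximated by scanning for the value that minimizes the average of the Type I and Type II error rates.

        import numpy as np
        from scipy.stats import norm

        def average_error(alpha, effect_size, n_per_group):
            """Mean of the Type I and Type II error rates for a two-sided two-sample z-test."""
            z_crit = norm.ppf(1 - alpha / 2)
            noncentrality = effect_size * np.sqrt(n_per_group / 2)
            beta = norm.cdf(z_crit - noncentrality) - norm.cdf(-z_crit - noncentrality)
            return (alpha + beta) / 2

        alphas = np.linspace(1e-4, 0.3, 3000)
        errors = [average_error(a, effect_size=0.5, n_per_group=30) for a in alphas]
        optimal_alpha = alphas[int(np.argmin(errors))]
        print(f"optimal alpha = {optimal_alpha:.3f} (vs. the conventional 0.05)")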

  16. Interchangeability of Procalcitonin Measurements Using the Point of Care Testing i-CHROMA™ Reader and the Automated Liaison XL.

    PubMed

    Stenner, Elisabetta; Barbati, Giulia; West, Nicole; Ben, Fabia Del; Martin, Francesca; Ruscio, Maurizio

    2018-06-01

    Our aim was to verify whether procalcitonin (PCT) measurements using the new point-of-care testing i-CHROMA™ are interchangeable with those of the Liaison XL. One hundred seventeen serum samples were processed sequentially on a Liaison XL and i-CHROMA™. Statistical analysis was done using the Passing-Bablok regression, Bland-Altman test, and Cohen's Kappa statistic. Proportional and constant differences were observed between i-CHROMA™ and Liaison XL. The 95% CI of the mean bias% was very large, exceeding the maximum allowable TE% and the clinical reference change value. However, the concordance between the methods at the clinically relevant cutoffs was strong, with the exception of the 0.25 ng/mL cutoff, where it was moderate. Our data suggest that i-CHROMA™ is not interchangeable with Liaison XL. Nevertheless, the strong concordance at the clinically relevant cutoffs allows i-CHROMA™ to be considered a suitable alternative to Liaison XL to support clinicians' decision-making, whereas the moderate agreement at the 0.25 ng/mL cutoff recommends caution in interpreting data around this cutoff.
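
    The Bland-Altman and Cohen's kappa steps mentioned above can be sketched as follows; the paired measurements are simulated (not the study data), and the 0.5 ng/mL cutoff is used only as an example of a clinically relevant threshold.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(0)
        # Simulated paired PCT measurements (ng/mL) from two analysers
        liaison = rng.lognormal(mean=-1.0, sigma=1.2, size=117)
        ichroma = liaison * 1.10 + rng.normal(0.0, 0.05, size=117)   # proportional + random difference

        # Bland-Altman on percentage differences: mean bias and 95% limits of agreement
        diff_pct = 100 * (ichroma - liaison) / ((ichroma + liaison) / 2)
        bias, loa = diff_pct.mean(), 1.96 * diff_pct.std(ddof=1)
        print(f"mean bias = {bias:.1f}%, limits of agreement = [{bias - loa:.1f}%, {bias + loa:.1f}%]")

        # Concordance of the two methods at a clinical cutoff via Cohen's kappa
        kappa = cohen_kappa_score(liaison >= 0.5, ichroma >= 0.5)
        print(f"kappa at the 0.5 ng/mL cutoff: {kappa:.2f}")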

  17. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Confidence interval or p-value?: part 4 of a series on evaluation of scientific publications.

    PubMed

    du Prel, Jean-Baptist; Hommel, Gerhard; Röhrig, Bernd; Blettner, Maria

    2009-05-01

    An understanding of p-values and confidence intervals is necessary for the evaluation of scientific articles. This article will inform the reader of the meaning and interpretation of these two statistical concepts. The uses of these two statistical concepts and the differences between them are discussed on the basis of a selective literature search concerning the methods employed in scientific articles. P-values in scientific studies are used to determine whether a null hypothesis formulated before the performance of the study is to be accepted or rejected. In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect. This enables conclusions to be drawn about the statistical plausibility and clinical relevance of the study findings. It is often useful for both statistical measures to be reported in scientific articles, because they provide complementary types of information.
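
    To make the complementarity concrete, a small sketch with simulated data: the p-value answers whether a difference this large is compatible with the null hypothesis, while the confidence interval reports the plausible range (direction and magnitude) of the true difference.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        treatment = rng.normal(5.0, 2.0, size=40)   # simulated outcome scores
        control   = rng.normal(4.0, 2.0, size=40)

        t_stat, p_value = stats.ttest_ind(treatment, control)

        # 95% confidence interval for the mean difference (pooled-variance formula)
        diff = treatment.mean() - control.mean()
        n1, n2 = len(treatment), len(control)
        sp2 = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
        t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
        print(f"p = {p_value:.4f}; difference = {diff:.2f}, 95% CI = ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")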

  19. Conducting Simulation Studies in the R Programming Environment.

    PubMed

    Hallgren, Kevin A

    2013-10-12

    Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
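
    The article's examples are written in R; as a language-neutral illustration of the same idea (example b, estimating statistical power by simulation), a minimal Python sketch might look like this:

        import numpy as np
        from scipy import stats

        def simulated_power(effect_size, n_per_group, n_sims=5000, alpha=0.05, seed=42):
            """Estimate the power of a two-sample t-test by repeated simulation."""
            rng = np.random.default_rng(seed)
            rejections = 0
            for _ in range(n_sims):
                a = rng.normal(0.0, 1.0, n_per_group)
                b = rng.normal(effect_size, 1.0, n_per_group)
                if stats.ttest_ind(a, b).pvalue < alpha:
                    rejections += 1
            return rejections / n_sims

        print(f"estimated power: {simulated_power(effect_size=0.5, n_per_group=64):.2f}")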

  20. Relationship between perceptual learning in speech and statistical learning in younger and older adults

    PubMed Central

    Neger, Thordis M.; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. PMID:25225475

  1. Relationship between perceptual learning in speech and statistical learning in younger and older adults.

    PubMed

    Neger, Thordis M; Rietveld, Toni; Janse, Esther

    2014-01-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.

  2. A global approach to estimate irrigated areas - a comparison between different data and statistics

    NASA Astrophysics Data System (ADS)

    Meier, Jonas; Zabel, Florian; Mauser, Wolfram

    2018-02-01

    Agriculture is the largest global consumer of water. Irrigated areas constitute 40 % of the total area used for agricultural production (FAO, 2014a). Information on their spatial distribution is highly relevant for regional water management and food security. Spatial information on irrigation is highly important for policy and decision makers, who are facing the transition towards more efficient sustainable agriculture. However, the mapping of irrigated areas still represents a challenge for land use classifications, and existing global data sets differ strongly in their results. The following study tests an existing irrigation map based on statistics and extends the irrigated area using ancillary data. The approach processes and analyzes multi-temporal normalized difference vegetation index (NDVI) SPOT-VGT data and agricultural suitability data - both at a spatial resolution of 30 arcsec - incrementally in a multiple decision tree. It covers the period from 1999 to 2012. The results globally show an 18 % larger irrigated area than existing approaches based on statistical data. The largest differences compared to the official national statistics are found in Asia, particularly in China and India. The additional areas are mainly identified within already known irrigated regions where irrigation is more dense than previously estimated. The validation with global and regional products shows the large divergence of existing data sets with respect to the size and distribution of irrigated areas, caused by spatial resolution, the time period considered, and the input data and assumptions made.

  3. Genetic association between the dopamine D1-receptor gene and paranoid schizophrenia in a northern Han Chinese population.

    PubMed

    Yao, Jun; Ding, Mei; Xing, Jiaxin; Xuan, Jinfeng; Pang, Hao; Pan, Yuqing; Wang, Baojie

    2014-01-01

    Dysregulation of dopaminergic neurotransmission at the D1 receptor in the prefrontal cortex has been implicated in the pathogenesis of schizophrenia. Genetic polymorphisms of the dopamine D1-receptor gene have a plausible role in modulating the risk of schizophrenia. To determine the role of DRD1 genetic polymorphisms as a risk factor for schizophrenia, we undertook a case-control study to look for an association between the DRD1 gene and schizophrenia. We genotyped eleven single-nucleotide polymorphisms within the DRD1 gene by deoxyribonucleic acid sequencing involving 173 paranoid schizophrenia patients and 213 unrelated healthy individuals. Statistical analysis was performed to identify differences in genotype, allele, or haplotype distributions between cases and controls. A significantly lower risk of paranoid schizophrenia was associated with the AG + GG genotype of rs5326 and the AG + GG genotype of rs4532, each compared with the AA genotype. The distribution of haplotypes did not differ between controls and paranoid schizophrenia patients. In the males, the genotype distribution of rs5326 was statistically different between cases and controls. In the females, the genotype distribution of rs4532 was statistically different between cases and controls. However, the aforementioned statistical significances were lost after Bonferroni correction. It is unlikely that DRD1 accounts for a substantial proportion of the genetic risk for schizophrenia. As an important dopaminergic gene, DRD1 may contribute to schizophrenia by interacting with other genes, and further relevant studies are warranted.
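
    A minimal sketch of the kind of single-SNP case-control comparison with Bonferroni correction described above; the genotype counts are hypothetical, and a chi-square test is assumed since the abstract does not state the exact test used.

        from scipy.stats import chi2_contingency

        # Hypothetical genotype counts (AA, AG, GG) for one SNP in cases vs. controls
        cases    = [70,  80, 23]
        controls = [60, 110, 43]

        chi2, p_raw, dof, _ = chi2_contingency([cases, controls])

        # Bonferroni correction for the 11 SNPs tested
        p_bonferroni = min(1.0, p_raw * 11)
        print(f"raw p = {p_raw:.4f}, Bonferroni-corrected p = {p_bonferroni:.4f}")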

  4. [Itraconazole and secnidazole capsules vs. vaginal ovules of fluocinolone acetonide, nystatin and metronidazole in the symptomatic treatment of vaginitis].

    PubMed

    Alvarado García, A; Gaviño Ambriz, S

    1998-04-01

    Evaluation of oral treatment of vaginitis and vaginosis using itraconazole and secnidazole, compared with topical treatment using vaginal ovules of fluocinolone acetonide 0.50 mg, nystatin 100,000 U, and metronidazole 500 mg. Longitudinal, prospective, open comparative study. Servicio de Reproducción Humana (Human Reproduction Department), Centro Médico Nacional "20 de Noviembre". Forty female outpatients with a diagnosis of vaginitis or vaginosis and no relevant differences in their general characteristics were divided into two groups of twenty each. Group 1 received oral treatment with itraconazole and secnidazole. Group 2 received topical treatment with fluocinolone acetonide, nystatin, and metronidazole. All patients were re-evaluated at seven and fourteen days to assess the intensity of their clinical symptoms and the efficacy of both treatments. Leukorrhea was the most important symptom in all cases, ranging from mild to severe discharge. After treatment, a statistically significant difference was found in favor of the patients treated with itraconazole and secnidazole. No differences were found with respect to burning, pruritus, dyspareunia, or dysuria at the post-treatment evaluation. However, the improvement in Group 1 between the first and seventh days of treatment was statistically significant. Treating vaginitis, vaginosis, or both with itraconazole and secnidazole shortens the time to improvement and offers the comfort and ease of oral administration; we therefore consider these drugs appropriate for such cases.

  5. geneCommittee: a web-based tool for extensively testing the discriminatory power of biologically relevant gene sets in microarray data classification.

    PubMed

    Reboiro-Jato, Miguel; Arrais, Joel P; Oliveira, José Luis; Fdez-Riverola, Florentino

    2014-01-30

    The diagnosis and prognosis of several diseases can be shortened through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and to validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee is able to render valuable information for diagnostic analyses and clinical management decisions based on systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. geneCommittee allows the enrichment of raw microarray data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of a set of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation including common use cases and guided analysis workflows is freely available at http://sing.ei.uvigo.es/GC/.

  6. Clarifying changes in student empathy throughout medical school: a scoping review.

    PubMed

    Ferreira-Valente, Alexandra; Monteiro, Joana S; Barbosa, Rita M; Salgueira, Ana; Costa, Patrício; Costa, Manuel J

    2017-12-01

    Despite the increasing awareness of the relevance of empathy in patient care, some findings suggest that medical schools may be contributing to the deterioration of students' empathy. Therefore, it is important to clarify the magnitude and direction of changes in empathy during medical school. We employed a scoping review to elucidate trends in students' empathy changes/differences throughout medical school and examine potential bias associated with research design. The literature published in English, Spanish, Portuguese and French from 2009 to 2016 was searched. Two hundred and nine potentially relevant citations were identified. Twenty articles met the inclusion criteria. Effect sizes of variations in empathy scores were calculated to assess the practical significance of the results. Our results demonstrate that the scoped studies differed considerably in their design, measures used, sample sizes and results. Most studies (12 out of 20 studies) reported either positive or non-statistically significant changes/differences in empathy regardless of the measure used. The predominant trend in cross-sectional studies (ten out of 13 studies) was of significantly higher empathy scores in later years or of similar empathy scores across years, while most longitudinal studies presented either mixed results or empathy declines. There was not a generalized international trend in changes in students' empathy throughout medical school. Although statistically significant changes/differences were detected in 13 out of 20 studies, the calculated effect sizes were small in all but two studies, suggesting little practical significance. At the present moment, the literature does not offer clear conclusions relative to changes in student empathy throughout medical school.
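
    Effect sizes of the kind used here to judge practical significance are typically standardized mean differences; a minimal sketch (with invented scores) of Cohen's d using the pooled standard deviation:

        import numpy as np

        def cohens_d(group1, group2):
            """Standardized mean difference using the pooled standard deviation."""
            g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
            n1, n2 = len(g1), len(g2)
            pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
            return (g1.mean() - g2.mean()) / pooled_sd

        # Hypothetical empathy scores in year 1 vs. year 4 of medical school
        year1 = [115, 110, 120, 108, 112, 118, 109, 114]
        year4 = [112, 108, 117, 106, 111, 115, 107, 113]
        print(f"Cohen's d = {cohens_d(year1, year4):.2f}")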

  7. QUALITY OF LIFE IN CHILDREN WITH HEARING IMPAIRMENT: SYSTEMATIC REVIEW AND META-ANALYSIS

    PubMed Central

    Roland, Lauren; Fischer, Caroline; Tran, Kayla; Rachakonda, Tara; Kallogjeri, Dorina; Lieu, Judith

    2017-01-01

    Objective To determine the impact of pediatric hearing loss on quality of life (QOL). Data Sources A qualified medical librarian conducted a literature search for relevant publications that evaluate QOL in school-aged children with hearing loss (HL). Review Methods Studies were assessed independently by two reviewers for inclusion in the systematic review and meta-analysis. Results From 979 abstracts, 69 were identified as relevant; ultimately 41 articles were included in the systematic review. This review revealed that children with HL generally report a lower QOL than their normal hearing peers, and QOL improves after interventions. The extent of these differences is variable among studies, and depends on the QOL measure. Four studies using the Pediatric Quality of Life Inventory (PedsQL) had sufficient data for inclusion in a meta-analysis. After pooling studies, statistically and clinically significant differences in PedsQL scores were found between children with normal hearing and those with HL, specifically in the Social and School domains. Statistically significant differences were also noted in total scores for children with unilateral HL and in the physical domain for children with bilateral HL as compared with normal-hearing children; however, these differences were not clinically meaningful. Conclusions Our analysis reveals that decreased QOL in children with HL is detected in distinct domains of the PedsQL questionnaire. These domains of school functioning and social interactions are especially important for development and learning. Future work should focus on these specific aspects of QOL when assessing HL in the pediatric population. PMID:27118820
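
    A minimal sketch of the pooling step behind such a meta-analysis, using fixed-effect inverse-variance weighting; the per-study mean differences and standard errors below are invented, not the PedsQL data.

        import numpy as np

        # Hypothetical per-study mean differences in a QOL score (HL minus normal hearing)
        mean_diffs = np.array([-6.2, -4.8, -7.1, -3.9])
        std_errs   = np.array([ 2.1,  1.8,  2.6,  1.5])

        weights = 1 / std_errs**2                      # inverse-variance weights
        pooled = np.sum(weights * mean_diffs) / np.sum(weights)
        pooled_se = np.sqrt(1 / np.sum(weights))
        print(f"pooled difference = {pooled:.2f}, "
              f"95% CI = ({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")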

  8. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  9. Statistical evaluation of GLONASS amplitude scintillation over low latitudes in the Brazilian territory

    NASA Astrophysics Data System (ADS)

    de Oliveira Moraes, Alison; Muella, Marcio T. A. H.; de Paula, Eurico R.; de Oliveira, César B. A.; Terra, William P.; Perrella, Waldecir J.; Meibach-Rosa, Pâmela R. P.

    2018-04-01

    Ionospheric scintillation, generated by ionospheric plasma irregularities, affects the radio signals that pass through them. Its effects are widely studied in the literature with two different approaches. The first deals with the use of radio signals to study and understand the morphology of this phenomenon, while the second seeks to understand and model how much this phenomenon interferes with the radio signals and consequently with the services these systems provide. The interest of several areas, particularly those that are life critical, has increased in the concept of satellite multi-constellation, which consists of receiving, processing and using data from different navigation and positioning systems. Although there is a vast literature analyzing the effects of ionospheric scintillation on satellite navigation systems, studies using signals received from the Russian satellite positioning system (GLONASS) are still very rare. This work presents, for the first time in the Brazilian low-latitude sector, a statistical analysis of ionospheric scintillation data for all levels of magnetic activity, obtained by a set of scintillation monitors that receive signals from the GLONASS system. Data collected from four stations were used in the analysis: Fortaleza, Presidente Prudente, São José dos Campos and Porto Alegre. The GLONASS L-band signals were analyzed for the period from December 21, 2012 to June 20, 2016, which includes the peak of solar cycle 24 that occurred in 2014. The main characteristics of scintillation presented in this study include: (1) a statistical evaluation of seasonal and solar-activity dependence, showing the chances that a user under similar geophysical conditions may be susceptible to the effects of ionospheric scintillation; (2) a temporal analysis based on the local time distribution of scintillation at different seasons and intensity levels; and (3) an evaluation of the number of simultaneously affected channels and its effect on the dilution of precision (DOP) for GNSS users, presented in order to indicate the times at which navigation will be most susceptible to such effects. These statistical characteristics of scintillation provide relevant information about the availability of the navigation system.

  10. Retrieval of diagnostic and treatment studies for clinical use through PubMed and PubMed's Clinical Queries filters.

    PubMed

    Lokker, Cynthia; Haynes, R Brian; Wilczynski, Nancy L; McKibbon, K Ann; Walter, Stephen D

    2011-01-01

    Clinical Queries filters were developed to improve the retrieval of high-quality studies in searches on clinical matters. The study objective was to determine the yield of relevant citations and physician satisfaction while searching for diagnostic and treatment studies using the Clinical Queries page of PubMed compared with searching PubMed without these filters. Forty practicing physicians, presented with standardized treatment and diagnosis questions and one question of their choosing, entered search terms which were processed in a random, blinded fashion through PubMed alone and PubMed Clinical Queries. Participants rated search retrievals for applicability to the question at hand and satisfaction. For treatment, the primary outcome of retrieval of relevant articles was not significantly different between the groups, but a higher proportion of articles from the Clinical Queries searches met methodologic criteria (p=0.049), and more articles were published in core internal medicine journals (p=0.056). For diagnosis, the filtered results returned more relevant articles (p=0.031) and fewer irrelevant articles (overall retrieval less, p=0.023); participants needed to screen fewer articles before arriving at the first relevant citation (p<0.05). Relevance was also influenced by content terms used by participants in searching. Participants varied greatly in their search performance. Clinical Queries filtered searches returned more high-quality studies, though the retrieval of relevant articles was only statistically different between the groups for diagnosis questions. Retrieving clinically important research studies from Medline is a challenging task for physicians. Methodological search filters can improve search retrieval.

  11. A Study of the Effectiveness of the Contextual Lab Activity in the Teaching and Learning Statistics at the UTHM (Universiti Tun Hussein Onn Malaysia)

    ERIC Educational Resources Information Center

    Kamaruddin, Nafisah Kamariah Md; Jaafar, Norzilaila bt; Amin, Zulkarnain Md

    2012-01-01

    Inaccurate concept in statistics contributes to the assumption by the students that statistics do not relate to the real world and are not relevant to the engineering field. There are universities which introduced learning statistics using statistics lab activities. However, the learning is more on the learning how to use software and not to…

  12. Nuclear magnetic resonance (NMR) study of the effect of cisplatin on the metabolic profile of MG-63 osteosarcoma cells.

    PubMed

    Duarte, Iola F; Lamego, Ines; Marques, Joana; Marques, M Paula M; Blaise, Benjamin J; Gil, Ana M

    2010-11-05

    In the present study, (1)H HRMAS NMR spectroscopy was used to assess the changes in the intracellular metabolic profile of MG-63 human osteosarcoma (OS) cells induced by the chemotherapy agent cisplatin (CDDP) at different times of exposure. Multivariate analysis was applied to the cells spectra, enabling consistent variation patterns to be detected and drug-specific metabolic effects to be identified. Statistical recoupling of variables (SRV) analysis and spectral integration enabled the most relevant spectral changes to be evaluated, revealing significant time-dependent alterations in lipids, choline-containing compounds, some amino acids, polyalcohols, and nitrogenated bases. The metabolic relevance of these compounds in the response of MG-63 cells to CDDP treatment is discussed.

  13. Hierarchical relaxation dynamics in a tilted two-band Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Cosme, Jayson G.

    2018-04-01

    We numerically examine slow and hierarchical relaxation dynamics of interacting bosons described by a tilted two-band Bose-Hubbard model. The system is found to exhibit signatures of quantum chaos within the spectrum and the validity of the eigenstate thermalization hypothesis for relevant physical observables is demonstrated for certain parameter regimes. Using the truncated Wigner representation in the semiclassical limit of the system, dynamics of relevant observables reveal hierarchical relaxation and the appearance of prethermalized states is studied from the perspective of statistics of the underlying mean-field trajectories. The observed prethermalization scenario can be attributed to different stages of glassy dynamics in the mode-time configuration space due to dynamical phase transition between ergodic and nonergodic trajectories.

  14. A whole brain morphometric analysis of changes associated with pre-term birth

    NASA Astrophysics Data System (ADS)

    Thomaz, C. E.; Boardman, J. P.; Counsell, S.; Hill, D. L. G.; Hajnal, J. V.; Edwards, A. D.; Rutherford, M. A.; Gillies, D. F.; Rueckert, D.

    2006-03-01

    Pre-term birth is strongly associated with subsequent neuropsychiatric impairment. To identify structural differences in preterm infants we have examined a dataset of magnetic resonance (MR) images containing 88 preterm infants and 19 term born controls. We have analyzed these images by combining image registration, deformation based morphometry (DBM), multivariate statistics, and effect size maps (ESM). The methodology described has been performed directly on the MR intensity images rather than on segmented versions of the images. The results indicate that the approach described makes clear the statistical differences between the control and preterm samples, showing leave-one-out classification accuracies of 94.74% and 95.45%, respectively. In addition, by finding the most discriminant direction between the groups and using DBM features and ESM, we are able to identify not only what the changes between the preterm and term groups are but also how relevant they are, in relative terms, with respect to volume expansion and contraction.
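
    The leave-one-out evaluation described above can be sketched with a linear discriminant standing in for the "most discriminant direction"; the morphometric feature vectors below are simulated, and scikit-learn is assumed to be available.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        # Simulated morphometric feature vectors (e.g. deformation-field summaries)
        X = np.vstack([rng.normal( 0.3, 1.0, size=(88, 20)),    # preterm infants
                       rng.normal(-0.3, 1.0, size=(19, 20))])   # term-born controls
        y = np.array([1] * 88 + [0] * 19)

        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
        print(f"leave-one-out classification accuracy: {scores.mean():.2%}")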

  15. A heuristic multi-criteria classification approach incorporating data quality information for choropleth mapping

    PubMed Central

    Sun, Min; Wong, David; Kronenfeld, Barry

    2016-01-01

    Despite conceptual and technology advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are not statistically different may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-off between relevant criteria, and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426

  16. A Retrospective Analysis of Hemostatic Techniques in Primary Total Knee Arthroplasty: Traditional Electrocautery, Bipolar Sealer, and Argon Beam Coagulation.

    PubMed

    Rosenthal, Brett D; Haughom, Bryan D; Levine, Brett R

    2016-01-01

    In this retrospective cohort study of 280 primary total knee arthroplasties, clinical outcomes relevant to hemostasis were compared by electrocautery type: traditional electrocautery (TE), bipolar sealer (BS), and argon beam coagulation (ABC). Age, sex, and preoperative diagnosis were not significantly different among the TE, BS, and ABC cohorts. The 3 hemostasis systems were statistically equivalent with respect to estimated blood loss. Wound drainage during the first 48 hours after surgery was equivalent between the BS and ABC cohorts but less for the TE cohort. Transfusion requirements were not significantly different among the cohorts. The 3 hemostasis systems were statistically equivalent with respect to mean change in hemoglobin level during the early postoperative period (levels were measured on postoperative day 1 and on discharge). As BS and ABC are clinically equivalent to TE, their increased cost may not be justified.

  17. Physical properties of wild mango fruit and nut

    NASA Astrophysics Data System (ADS)

    Ehiem, J.; Simonyan, K.

    2012-02-01

    Physical properties of two wild mango varieties were studied at 81.9 and 24.5% moisture (w.b.) for the fruits and nuts, respectively. The shape and size of the fruits are the same between varieties, while those of the nuts differ at P = 0.05. The mass, density and bulk density of the fruits are statistically different at P = 0.05 but the volume is the same. The shape and size, volume and bulk density of the nuts are statistically the same at P = 0.05. The nuts of both varieties are also the same at P = 0.05 in terms of mass and density. The packing factor for both fruits and nuts of the two varieties is the same at 0.95. The relevant data obtained for the two varieties would be useful for the design and development of machines and equipment for processing and handling operations.

  18. Precipitation in a boiling soup: is microphysics driving the statistical properties of intense turbulent convection?

    NASA Astrophysics Data System (ADS)

    Parodi, A.; von Hardenberg, J.; Provenzale, A.

    2012-04-01

    Intense precipitation events are often associated with strong convective phenomena in the atmosphere. A deeper understanding of how microphysics affects the spatial and temporal variability of convective processes is relevant for many hydro-meteorological applications, such as the estimation of rainfall using remote sensing techniques and the ability to predict severe precipitation processes. In this paper, high-resolution simulations (0.1-1 km) of an atmosphere in radiative-convective equilibrium are performed using the Weather Research and Forecasting (WRF) model by prescribing different microphysical parameterizations. The dependence of fine-scale spatio-temporal properties of convective structures on microphysical details are investigated and the simulation results are compared with the known properties of radar maps of precipitation fields. We analyze and discuss similarities and differences and, based also on previous results on the dependence of precipitation statistics on the raindrop terminal velocity, try to draw some general inferences.

  19. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.

  20. Annual Research Briefs, 1987

    NASA Technical Reports Server (NTRS)

    Moin, Parviz; Reynolds, William C.

    1988-01-01

    Lagrangian techniques have found widespread application to the prediction and understanding of turbulent transport phenomena and have yielded satisfactory results for different cases of shear flow problems. However, it must be kept in mind that in most experiments what is really available are Eulerian statistics, and it is far from obvious how to extract from them the information relevant to the Lagrangian behavior of the flow; in consequence, Lagrangian models still include some hypothesis for which no adequate supporting evidence was until now available. Direct numerical simulation of turbulence offers a new way to obtain Lagrangian statistics and so verify the validity of the current predictive models and the accuracy of their results. After the pioneering work of Riley (Riley and Patterson, 1974) in the 70's, some such results have just appeared in the literature (Lee et al, Yeung and Pope). The present contribution follows in part similar lines, but focuses on two particle statistics and comparison with existing models.

  1. Designing the Bridge: Perceptions and Use of Downscaled Climate Data by Climate Modelers and Resource Managers in Hawaii

    NASA Astrophysics Data System (ADS)

    Keener, V. W.; Brewington, L.; Jaspers, K.

    2016-12-01

    To build an effective bridge from the climate modeling community to natural resource managers, we assessed the existing landscape to see where different groups diverge in their perceptions of climate data and needs. An understanding of a given community's shared knowledge and differences can help design more actionable science. Resource managers in Hawaii are eager to have future climate projections at spatial scales relevant to the islands. National initiatives to downscale climate data often exclude US insular regions, so researchers in Hawaii have generated regional dynamically and statistically downscaled projections. Projections of precipitation diverge, however, leading to difficulties in communication and use. Recently, a two day workshop was held with scientists and managers to evaluate available models and determine a set of best practices for moving forward with decision-relevant downscaling in Hawaii. To seed the discussion, the Pacific Regional Integrated Sciences and Assessments (RISA) program conducted a pre-workshop survey (N=65) of climate modelers and freshwater, ecosystem, and wildfire managers working in Hawaii. Scientists reported spending less than half of their time on operational research, although the majority was eager to partner with managers on specific projects. Resource managers had varying levels of familiarity with downscaled climate projections, but reported needing more information about uncertainty for decision making, and were less interested in the technical model details. There were large differences between groups of managers, with 41.7% of freshwater managers reporting that they used climate projections regularly, while a majority of ecosystem and wildfire managers reported having "no familiarity". Scientists and managers rated which spatial and temporal scales were most relevant to decision making. Finally, when asked to compare how confident they were in projections of specific climate variables between the dynamical and statistical data, 80-90% of managers responded that they had no opinion. Workshop attendees were very interested in the survey results, adding to evidence of a need for sustained engagement between modeler and user groups, as well as different strategies for working with different types of resource managers.

  2. Tree-space statistics and approximations for large-scale analysis of anatomical trees.

    PubMed

    Feragen, Aasa; Owen, Megan; Petersen, Jens; Wille, Mathilde M W; Thomsen, Laura H; Dirksen, Asger; de Bruijne, Marleen

    2013-01-01

    Statistical analysis of anatomical trees is hard to perform due to differences in the topological structure of the trees. In this paper we define statistical properties of leaf-labeled anatomical trees with geometric edge attributes by considering the anatomical trees as points in the geometric space of leaf-labeled trees. This tree-space is a geodesic metric space where any two trees are connected by a unique shortest path, which corresponds to a tree deformation. However, tree-space is not a manifold, and the usual strategy of performing statistical analysis in a tangent space and projecting onto tree-space is not available. Using tree-space and its shortest paths, a variety of statistical properties, such as mean, principal component, hypothesis testing and linear discriminant analysis can be defined. For some of these properties it is still an open problem how to compute them; others (like the mean) can be computed, but efficient alternatives are helpful in speeding up algorithms that use means iteratively, like hypothesis testing. In this paper, we take advantage of a very large dataset (N = 8016) to obtain computable approximations, under the assumption that the data trees parametrize the relevant parts of tree-space well. Using the developed approximate statistics, we illustrate how the structure and geometry of airway trees vary across a population and show that airway trees with Chronic Obstructive Pulmonary Disease come from a different distribution in tree-space than healthy ones. Software is available from http://image.diku.dk/aasa/software.php.

  3. Meta‐analysis using individual participant data: one‐stage and two‐stage approaches, and why they may differ

    PubMed Central

    Ensor, Joie; Riley, Richard D.

    2016-01-01

    Meta‐analysis using individual participant data (IPD) obtains and synthesises the raw, participant‐level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta‐analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual‐level interactions, such as treatment‐effect modifiers. There are two statistical approaches for conducting an IPD meta‐analysis: one‐stage and two‐stage. The one‐stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two‐stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta‐analysis model. There have been numerous comparisons of the one‐stage and two‐stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one‐stage and two‐stage IPD meta‐analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one‐stage or two‐stage itself. We illustrate the concepts with recently published IPD meta‐analyses, summarise key statistical software and provide recommendations for future IPD meta‐analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27747915
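
    A minimal sketch of the second stage of a two-stage IPD meta-analysis, assuming the per-study effect estimates and standard errors have already been derived in stage one; the numbers are invented and DerSimonian-Laird random-effects pooling is used as one common choice.

        import numpy as np

        # Stage 1 output (assumed): one treatment-effect estimate and standard error per study
        effects = np.array([0.42, 0.31, 0.55, 0.18, 0.47])
        ses     = np.array([0.15, 0.12, 0.20, 0.10, 0.18])

        # Stage 2: DerSimonian-Laird random-effects pooling
        w = 1 / ses**2
        fixed = np.sum(w * effects) / np.sum(w)
        Q = np.sum(w * (effects - fixed)**2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(effects) - 1)) / c)        # between-study variance
        w_re = 1 / (ses**2 + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        pooled_se = np.sqrt(1 / np.sum(w_re))
        print(f"tau^2 = {tau2:.3f}, pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")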

  4. Salaried and Professional Women: Relevant Statistics. Publication #92-3.

    ERIC Educational Resources Information Center

    Wilson, Pamela, Ed.

    This document contains 29 statistical tables grouped into five sections: "General Statistics,""Occupations and Earnings,""Earnings of Selected Professional Occupations,""Women and Higher Education," and "Family Income and Composition." Among the tables are those that show the following: (1) 1991 annual average U.S. civilian work force by…

  5. 34 CFR Appendix A to Subpart N of... - Sample Default Prevention Plan

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... relevant default prevention statistics, including a statistical analysis of the borrowers who default on...'s delinquency status by obtaining reports from data managers and FFEL Program lenders. 5. Enhance... academic study. III. Statistics for Measuring Progress 1. The number of students enrolled at your...

  6. Statistical Cost Estimation in Higher Education: Some Alternatives.

    ERIC Educational Resources Information Center

    Brinkman, Paul T.; Niwa, Shelley

    Recent developments in econometrics that are relevant to the task of estimating costs in higher education are reviewed. The relative effectiveness of alternative statistical procedures for estimating costs are also tested. Statistical cost estimation involves three basic parts: a model, a data set, and an estimation procedure. Actual data are used…

  7. Counting Better? An Examination of the Impact of Quantitative Method Teaching on Statistical Anxiety and Confidence

    ERIC Educational Resources Information Center

    Chamberlain, John Martyn; Hillier, John; Signoretta, Paola

    2015-01-01

    This article reports the results of research concerned with students' statistical anxiety and confidence to both complete and learn to complete statistical tasks. Data were collected at the beginning and end of a quantitative method statistics module. Students recognised the value of numeracy skills but felt they were not necessarily relevant for…

  8. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. The Content of Statistical Requirements for Authors in Biomedical Research Journals

    PubMed Central

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-01-01

    Background: Robust statistical designing, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers are able to give statistical issues serious consideration not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines as well as particular requirements were summarized in frequency, and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups gradually update their statistical instructions for authors in response to newly issued statistical reporting guidelines. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation," and "statistical methods and the reasons." Conclusions: Statistical requirements for authors are becoming increasingly refined. Statistical requirements for authors remind researchers that they should give sufficient consideration not only to statistical methods during the research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making a greater critical appraisal of evidence more accessible. PMID:27748343

  10. The Content of Statistical Requirements for Authors in Biomedical Research Journals.

    PubMed

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-10-20

    Robust statistical designing, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers are able to give statistical issues serious consideration not only at the stage of data analysis but also at the stage of methodological design. Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of statistical reporting guidelines as well as particular requirements were summarized in frequency, and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups gradually update their statistical instructions for authors in response to newly issued statistical reporting guidelines. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation," and "statistical methods and the reasons." Statistical requirements for authors are becoming increasingly refined. Statistical requirements for authors remind researchers that they should give sufficient consideration not only to statistical methods during the research design but also to standardized statistical reporting, which would be beneficial in providing stronger evidence and making a greater critical appraisal of evidence more accessible.

  11. Anatomical shape analysis of the mandible in Caucasian and Chinese for the production of preformed mandible reconstruction plates.

    PubMed

    Metzger, Marc C; Vogel, Mathias; Hohlweg-Majert, Bettina; Mast, Hansjörg; Fan, Xianqun; Rüdell, Alexandra; Schlager, Stefan

    2011-09-01

    The purpose of this study was to evaluate and analyze statistical shapes of the outer mandible contour of Caucasian and Chinese people, offering data for the production of preformed mandible reconstruction plates. A CT database of 925 Caucasians (male: n=463, female: n=462) and 960 Chinese (male: n=469, female: n=491) including scans of unaffected mandibles was used and imported into the 3D modeling software Voxim (IVS-Solutions, Chemnitz, Germany). Anatomical landmarks (n=22 points for both sides) were set using the 3D view along the outer contour of the mandible at the area where reconstruction plates are commonly located. We used morphometric methods for statistical shape analysis. We found statistically relevant differences between populations, including a distinct discrimination given by the landmarks at the mandible. After generating a metric model, this shape information that separated the populations appeared to be of no clinical relevance. The metric size information given by ramus length, however, provided a profound basis for the production of standard reconstruction plates. Clustering by ramus length into three sizes and calculating the means of these size clusters seems to be a good solution for constructing preformed reconstruction plates that will fit a vast majority. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
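
    As an illustration of the size-clustering step described above, the following sketch groups ramus lengths into three clusters with k-means. It assumes Python with numpy and scipy, and the ramus-length values are invented for the example; they are not the CT-database measurements.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        # Hypothetical ramus lengths in mm (illustrative only)
        rng = np.random.default_rng(0)
        ramus_mm = np.concatenate([rng.normal(52, 2, 300),
                                   rng.normal(58, 2, 400),
                                   rng.normal(64, 2, 300)])

        # Cluster into three size groups; the cluster means are the candidate
        # sizes for preformed reconstruction plates.
        centroids, labels = kmeans2(ramus_mm.reshape(-1, 1), 3, minit='++')
        for k in range(3):
            print(f"size group {k}: mean ramus length = {centroids[k, 0]:.1f} mm, "
                  f"n = {int(np.sum(labels == k))}")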

  12. What kind of noise is brain noise: anomalous scaling behavior of the resting brain activity fluctuations

    PubMed Central

    Fraiman, Daniel; Chialvo, Dante R.

    2012-01-01

    The study of spontaneous fluctuations of brain activity, often referred to as brain noise, is receiving increasing attention in functional magnetic resonance imaging (fMRI) studies. Despite important efforts, many of the statistical properties of such fluctuations remain largely unknown. This work scrutinizes these fluctuations, looking at specific statistical properties that are relevant to clarify their dynamical origins. Here, three statistical features that clearly differentiate brain data from naive expectations for random processes are uncovered: First, the variance of the fMRI mean signal as a function of the number of averaged voxels remains constant across a wide range of observed cluster sizes. Second, the anomalous behavior of the variance originates from bursts of synchronized activity across regions, regardless of their widely different sizes. Finally, the correlation length (i.e., the length at which the correlation strength between two regions vanishes) as well as the mutual information diverges with the cluster size considered, such that arbitrarily large clusters exhibit the same collective dynamics as smaller ones. These three properties are known to be exclusive to complex systems exhibiting critical dynamics, where the spatio-temporal dynamics show this peculiar type of fluctuation. Thus, these findings are fully consistent with previous reports of brain critical dynamics, and are relevant for the interpretation of the role of fluctuations and variability in brain function in health and disease. PMID:22934058
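
    The first of the three properties, the variance of the averaged signal staying roughly constant with cluster size instead of falling as 1/N, can be illustrated with synthetic signals. This is a toy sketch in Python/numpy under invented assumptions; it uses no fMRI data.

        import numpy as np

        rng = np.random.default_rng(0)
        n_time, n_vox = 500, 1024

        # Naive expectation: independent voxel noise, so Var(mean of N voxels) ~ 1/N.
        independent = rng.standard_normal((n_time, n_vox))

        # Toy correlated case: a shared burst-like component couples all voxels,
        # so averaging over more voxels barely reduces the variance.
        shared = rng.standard_normal((n_time, 1))
        correlated = 0.9 * shared + 0.45 * rng.standard_normal((n_time, n_vox))

        for n in (1, 4, 16, 64, 256, 1024):
            v_ind = independent[:, :n].mean(axis=1).var()
            v_cor = correlated[:, :n].mean(axis=1).var()
            print(f"N={n:5d}  Var(mean): independent={v_ind:.4f}  correlated={v_cor:.4f}")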

  13. Assessment of statistical significance and clinical relevance.

    PubMed

    Kieser, Meinhard; Friede, Tim; Gondan, Matthias

    2013-05-10

    In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and therefore have no real advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform well overall and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming to demonstrate both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach. Copyright © 2012 John Wiley & Sons, Ltd.
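
    The relative effect (probabilistic index) mentioned above can be estimated directly from the Mann-Whitney U statistic. The sketch below assumes Python with numpy and scipy and uses simulated outcome scores, not data from the paper.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(1)
        treatment = rng.normal(1.0, 2.0, 60)   # hypothetical outcome scores
        control = rng.normal(0.0, 2.0, 60)

        u_stat, p_value = mannwhitneyu(treatment, control, alternative='two-sided')

        # Relative effect (probabilistic index): estimated P(control < treatment),
        # obtained by rescaling the U statistic of the treatment sample.
        relative_effect = u_stat / (len(treatment) * len(control))
        print(f"U = {u_stat:.1f}, p = {p_value:.4f}, relative effect = {relative_effect:.3f}")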

  14. Adjusted scaling of FDG positron emission tomography images for statistical evaluation in patients with suspected Alzheimer's disease.

    PubMed

    Buchert, Ralph; Wilke, Florian; Chakrabarti, Bhismadev; Martin, Brigitte; Brenner, Winfried; Mester, Janos; Clausen, Malte

    2005-10-01

    Statistical parametric mapping (SPM) has gained increasing acceptance for the voxel-based statistical evaluation of brain positron emission tomography (PET) with the glucose analog 2-[18F]-fluoro-2-deoxy-d-glucose (FDG) in patients with suspected Alzheimer's disease (AD). To increase the sensitivity for detection of local changes, individual differences in total brain FDG uptake are usually compensated for by proportional scaling. However, in cases of extensive hypometabolic areas, proportional scaling overestimates scaled uptake. This may cause significant underestimation of the extent of hypometabolic areas by the statistical test. To detect this problem, the authors tested for hypermetabolism. In patients with no visual evidence of true focal hypermetabolism, significant clusters of hypermetabolism in the presence of extended hypometabolism were interpreted as false-positive findings, indicating relevant overestimation of scaled uptake. In this case, scaled uptake was reduced step by step until there were no more significant clusters of hypermetabolism. In 22 consecutive patients with suspected AD, proportional scaling resulted in relevant overestimation of scaled uptake in 9 patients. Scaled uptake had to be reduced by 11.1% +/- 5.3% in these cases to eliminate the artifacts. Adjusted scaling resulted in extension of existing and appearance of new clusters of hypometabolism. The total volume of the additional voxels with significant hypometabolism depended linearly on the extent of the additional scaling and was 202 +/- 118 mL on average. Adjusted scaling helps to identify characteristic metabolic patterns in patients with suspected AD. It is expected to increase the specificity of FDG PET in this group of patients.
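
    Why proportional scaling overestimates scaled uptake when a large hypometabolic region is present can be seen in a small numerical sketch (Python/numpy with invented uptake values; this is not the SPM procedure itself).

        import numpy as np

        rng = np.random.default_rng(2)
        n_vox = 10000
        healthy = rng.normal(100.0, 5.0, n_vox)   # hypothetical FDG uptake, arbitrary units

        # Patient with an extended hypometabolic region (30% of voxels reduced by 40%)
        patient = healthy.copy()
        hypo = np.zeros(n_vox, dtype=bool)
        hypo[:3000] = True
        patient[hypo] *= 0.6

        # Proportional scaling: divide by the global mean so both images have mean 100
        healthy_scaled = healthy / healthy.mean() * 100.0
        patient_scaled = patient / patient.mean() * 100.0

        # In the unaffected voxels the patient's scaled uptake is now spuriously high,
        # which is what produces apparent "hypermetabolism" in the statistical map.
        print("unaffected voxels, scaled uptake:",
              f"healthy {healthy_scaled[~hypo].mean():.1f}",
              f"patient {patient_scaled[~hypo].mean():.1f}")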

  15. Optical properties of mice skin for optical therapy relevant wavelengths: influence of gender and pigmentation

    NASA Astrophysics Data System (ADS)

    Sabino, C. P.; Deana, A. M.; Silva, D. F. T.; França, C. M.; Yoshimura, T. M.; Ribeiro, M. S.

    2015-03-01

    Red and near-infrared light have been widely employed in optical therapies. Skin is the most common optical barrier in non-invasive techniques, and in many cases it is the target tissue itself. Consequently, to optimize the outcomes brought by light-based therapies, the optical properties of skin tissue must be very well elucidated. In the present study, we evaluated the dorsal skin optical properties of albino (BALB/c) and pigmented (C57BL/6) mice using the Kubelka-Munk photon transport model. We evaluated samples from male and female young mice of both strains. Analysis was performed for wavelengths at 630, 660, 780, 810 and 905 nm due to their prevalent use in optical therapies, such as low-level light (or laser) and photodynamic therapies. Spectrophotometric measurements of diffuse transmittance and reflectance were performed using a single integrating sphere coupled to a spectrophotometer. Statistical analysis was performed by two-way ANOVA, with Tukey as post hoc test and Levene and Shapiro-Wilk as pre-tests. Statistical significance was considered when p<0.05. Our results show only a slight transmittance increment (<10%) as wavelengths are increased from 630 to 905 nm, and no statistical significance was observed. Albino male mice present reduced transmittance levels for all wavelengths. The organization and abundance of skin composing tissues significantly influence its scattering optical properties, although absorption remains constant. We conclude that factors such as subcutaneous adiposity and connective tissue structure can have a statistically significant influence on mice skin optical properties, and these factors have relevant variations among different genders and strains.
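
    The analysis pipeline described above (Shapiro-Wilk and Levene pre-tests followed by a two-way ANOVA) can be sketched as follows. The example assumes Python with scipy, pandas and statsmodels, and the transmittance values are fabricated for illustration only.

        import numpy as np
        import pandas as pd
        from scipy.stats import shapiro, levene
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        rng = np.random.default_rng(3)
        # Hypothetical diffuse-transmittance values for a strain x sex design
        rows = []
        for strain, base in (("BALB/c", 0.22), ("C57BL/6", 0.18)):
            for sex, shift in (("male", -0.02), ("female", 0.0)):
                for t in base + shift + rng.normal(0, 0.01, 10):
                    rows.append({"strain": strain, "sex": sex, "transmittance": t})
        df = pd.DataFrame(rows)

        # Pre-tests: normality (Shapiro-Wilk) and homogeneity of variance (Levene)
        print("Shapiro-Wilk p =", shapiro(df["transmittance"]).pvalue)
        groups = [g["transmittance"].to_numpy() for _, g in df.groupby(["strain", "sex"])]
        print("Levene p =", levene(*groups).pvalue)

        # Two-way ANOVA with interaction (a Tukey HSD post hoc test could follow)
        model = smf.ols("transmittance ~ C(strain) * C(sex)", data=df).fit()
        print(anova_lm(model, typ=2))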

  16. Natural variability of biochemical biomarkers in the macro-zoobenthos: Dependence on life stage and environmental factors.

    PubMed

    Scarduelli, Lucia; Giacchini, Roberto; Parenti, Paolo; Migliorati, Sonia; Di Brisco, Agnese Maria; Vighi, Marco

    2017-11-01

    Biomarkers are widely used in ecotoxicology as indicators of exposure to toxicants. However, their ability to provide ecologically relevant information remains controversial. One of the major problems is understanding whether the measured responses are determined by stress factors or lie within the natural variability range. In a previous work, the natural variability of enzymatic levels in invertebrates sampled in pristine rivers was proven to be relevant across both space and time. In the present study, the experimental design was improved by considering different life stages of the selected taxa and by measuring more environmental parameters. The experimental design considered sampling sites in 2 different rivers, 8 sampling dates covering the whole seasonal cycle, 4 species from 3 different taxonomic groups (Plecoptera, Perla grandis; Ephemeroptera, Baetis alpinus and Epeorus alpicula; Trichoptera, Hydropsyche pellucidula), different life stages for each species, and 4 enzymes (acetylcholinesterase, glutathione S-transferase, alkaline phosphatase, and catalase). Biomarker levels were related to environmental (physicochemical) parameters to verify any kind of dependence. Data were statistically analyzed using hierarchical multilevel Bayesian models. Natural variability was found to be relevant across both space and time. The results of the present study proved that care should be taken when interpreting biomarker results. Further research is needed to better understand the dependence of the natural variability on environmental parameters. Environ Toxicol Chem 2017;36:3158-3167. © 2017 SETAC.

  17. Inventory of DOT Statistical Information Systems

    DOT National Transportation Integrated Search

    1983-01-01

    The inventory represents an update of relevant systems described in the Transportation Statistical Reference File (TSRF), coordinated with the GAO update of Congressional Sources and Systems, and the Information Collection Budget. The inventory compi...

  18. Occlusal status and prevalence of occlusal malocclusion traits among 9-year-old schoolchildren.

    PubMed

    Lux, Christopher J; Dücker, Britta; Pritsch, Maria; Komposch, Gerda; Niekusch, Uwe

    2009-06-01

    The aim of this study was to provide detailed information concerning clinically relevant occlusal traits and the prevalence of occlusal anomalies in an orthodontically relevant period of dental development. Four hundred and ninety-four German schoolchildren (237 males and 257 females), median age 9 years, were orthodontically examined. Overjet and overbite were measured to the nearest 0.5 mm, and sagittal molar relationships were registered clinically to the nearest quarter unit. In addition, crossbites, scissor bites, and midline displacements were evaluated. Descriptive statistics were complemented by testing gender differences and differences between groups with Class I and Class II anomalies (Mann-Whitney U-test) as well as a statistical evaluation of differences between the three dental stages (Kruskal-Wallis test). Overjet exhibited an extreme range between -2 and 12 mm (median values 3-3.5 mm). An increased overjet was more prevalent than a reduced or reverse overjet, and a severely increased overjet greater than 6 mm was a common finding affecting around 5-10 per cent of the children. Similarly, overbite showed considerable variation between -1 and 9 mm (medians 3-3.5 mm), and males exhibited a significantly larger overbite than females. In Class II malocclusion subjects, overbite was significantly enlarged (on average between 0.5 and 1 mm) when compared with those with a Class I malocclusion. Traumatic contact of the gingiva affected every 14th child. A Class II molar relationship of three-quarter units or more was a frequent finding affecting more than one child in five. In addition, at 9 years of age, 3 per cent of the children exhibited a Class III molar relationship of at least a half unit. The wide range of orthodontically relevant occlusal traits found in the present study underlines the need for orthodontic screening at 9 years of age (or earlier).
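
    The two non-parametric tests used above translate directly into scipy calls. The sketch below uses simulated overbite values (not the study's measurements) to show the Mann-Whitney U test for a gender difference and the Kruskal-Wallis test across three dental stages.

        import numpy as np
        from scipy.stats import mannwhitneyu, kruskal

        rng = np.random.default_rng(4)
        # Hypothetical overbite values (mm); illustrative only
        overbite_male = rng.normal(3.5, 1.2, 237)
        overbite_female = rng.normal(3.1, 1.2, 257)

        u, p_gender = mannwhitneyu(overbite_male, overbite_female, alternative='two-sided')
        print(f"gender difference: U = {u:.0f}, p = {p_gender:.4f}")

        # Kruskal-Wallis test across three hypothetical dental stages
        stage1, stage2, stage3 = (rng.normal(m, 1.2, 160) for m in (3.0, 3.3, 3.5))
        h, p_stage = kruskal(stage1, stage2, stage3)
        print(f"dental stages: H = {h:.2f}, p = {p_stage:.4f}")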

  19. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    PubMed

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

    A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs): "Act in case of Depression" (AiD). The aim was to evaluate, before the effect analyses, AiD process data on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed), which can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on personal files of residents, interviews of nursing home professionals, and a research database were analyzed according to the following process evaluation components: sampling quality and intervention quality. The setting was the nursing home. The pattern of residents' informed consent rates differed for dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and was satisfied with the program content, individual AiD components may have different feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, our first-order process data evaluation confirmed the internal and external validity of the AiD trial, and this evaluation enabled further statistical fine-tuning. The importance of evaluating first-order process data before executing statistical effect analyses is thus underlined. Copyright © 2012 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  20. Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.

    PubMed

    Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas

    2011-12-16

    The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings that rely on p values, which may reflect chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. We simulate the therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of an effect size calculation for additive scales, which are proven mathematically. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.
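
    A short numerical sketch of the core point, how the same raw difference on a cumulative additive scale can yield different effect sizes once the dispersion of the global score changes with disease severity, is given below. It assumes Python/numpy and uses invented scores, not the simulation from the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        def cohens_d(x, y):
            """Effect size based on the pooled standard deviation."""
            nx, ny = len(x), len(y)
            pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
            return (x.mean() - y.mean()) / np.sqrt(pooled_var)

        # Hypothetical cumulative global scores from an additive scale (sum of subscores)
        mild_treated = rng.normal(22.0, 4.0, 100)
        mild_placebo = rng.normal(20.0, 4.0, 100)
        severe_treated = rng.normal(9.0, 7.0, 100)   # larger spread near the scale floor
        severe_placebo = rng.normal(7.0, 7.0, 100)

        # The same 2-point raw difference gives different effect sizes once the
        # dispersion of the cumulative score changes with disease severity.
        print("mild cohort   d =", round(cohens_d(mild_treated, mild_placebo), 2))
        print("severe cohort d =", round(cohens_d(severe_treated, severe_placebo), 2))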

  1. TIP: protein backtranslation aided by genetic algorithms.

    PubMed

    Moreira, Andrés; Maass, Alejandro

    2004-09-01

    Several applications require the backtranslation of a protein sequence into a nucleic acid sequence. The degeneracy of the genetic code makes this process ambiguous; moreover, not every translation is equally viable. The usual answer is to mimic the codon usage of the target species; however, this does not capture all the relevant features of the 'genomic styles' of different taxa. The program TIP ('Traducción Inversa de Proteínas') applies genetic algorithms to improve the backtranslation by minimizing the difference of some coding statistics with respect to their average value in the target. http://www.cmm.uchile.cl/genoma/tip/

  2. Transportable data from non-target arthropod field studies for the environmental risk assessment of genetically modified maize expressing an insecticidal double-stranded RNA.

    PubMed

    Ahmad, Aqeel; Negri, Ignacio; Oliveira, Wladecir; Brown, Christopher; Asiimwe, Peter; Sammons, Bernard; Horak, Michael; Jiang, Changjian; Carson, David

    2016-02-01

    As part of an environmental risk assessment, the potential impact of genetically modified (GM) maize MON 87411 on non-target arthropods (NTAs) was evaluated in the field. MON 87411 confers resistance to corn rootworm (CRW; Diabrotica spp.) by expressing an insecticidal double-stranded RNA (dsRNA) transcript and the Cry3Bb1 protein and tolerance to the herbicide glyphosate by producing the CP4 EPSPS protein. Field trials were conducted at 14 sites providing high geographic and environmental diversity within maize production areas from three geographic regions including the U.S., Argentina, and Brazil. MON 87411, the conventional control, and four commercial conventional reference hybrids were evaluated for NTA abundance and damage. Twenty arthropod taxa met minimum abundance criteria for valid statistical analysis. Nine of these taxa occurred in at least two of the three regions and in at least four sites across regions. These nine taxa included: aphid, predatory earwig, lacewing, ladybird beetle, leafhopper, minute pirate bug, parasitic wasp, sap beetle, and spider. In addition to wide regional distribution, these taxa encompass the ecological functions of herbivores, predators and parasitoids in maize agro-ecosystems. Thus, the nine arthropods may serve as representative taxa of maize agro-ecosystems, and thereby support that analysis of relevant data generated in one region can be transportable for the risk assessment of the same or similar GM crop products in another region. Across the 20 taxa analyzed, no statistically significant differences in abundance were detected between MON 87411 and the conventional control for 123 of the 128 individual-site comparisons (96.1%). For the nine widely distributed taxa, no statistically significant differences in abundance were detected between MON 87411 and the conventional control. Furthermore, no statistically significant differences were detected between MON 87411 and the conventional control for 53 out of 56 individual-site comparisons (94.6 %) of NTA pest damage to the crop. In each case where a significant difference was observed in arthropod abundance or damage, the mean value for MON 87411 was within the reference range and/or the difference was not consistently observed across collection methods and/or sites. Thus, the differences were not representative of an adverse effect unfamiliar to maize and/or were not indicative of a consistent plant response associated with the GM traits. Results from this study support a conclusion of no adverse environmental impact of MON 87411 on NTAs compared to conventional maize and demonstrate the utility of relevant transportable data across regions for the ERA of GM crops.

  3. A statistical approach to bioclimatic trend detection in the airborne pollen records of Catalonia (NE Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-Llamazares, Álvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción

    2014-04-01

    Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices for the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in time series data of the selected airborne pollen types, and we observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well modeled by a normal distribution, it is better to apply non-parametric statistical methods to aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities are being released into the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and survey the increasing levels of certain pollen types that could have an impact on public health.
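
    One of the rank-based trend tests commonly used for such annual pollen indices is the Mann-Kendall test, which can be implemented in a few lines. The sketch assumes Python with numpy and scipy, ignores tie corrections, and uses a made-up 18-year pollen index rather than the Catalan data.

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(series):
            """Basic Mann-Kendall monotonic trend test (no tie correction)."""
            x = np.asarray(series, dtype=float)
            n = len(x)
            s = int(sum(np.sign(x[j] - x[i])
                        for i in range(n - 1) for j in range(i + 1, n)))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p = 2 * (1 - norm.cdf(abs(z)))
            return s, z, p

        # Hypothetical annual pollen index for one taxon, 1994-2011 (18 values)
        pollen_index = [310, 295, 340, 360, 330, 410, 390, 450, 430, 470,
                        440, 520, 500, 560, 530, 610, 590, 640]
        s, z, p = mann_kendall(pollen_index)
        print(f"S = {s}, Z = {z:.2f}, p = {p:.4f}")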

  4. High-speed data search

    NASA Technical Reports Server (NTRS)

    Driscoll, James N.

    1994-01-01

    The high-speed data search system developed for KSC incorporates existing and emerging information retrieval technology to help a user intelligently and rapidly locate information found in large textual databases. This technology includes: natural language input; statistical ranking of retrieved information; an artificial intelligence concept called semantics, where 'surface level' knowledge found in text is used to improve the ranking of retrieved information; and relevance feedback, where user judgements about viewed information are used to automatically modify the search for further information. Semantics and relevance feedback are features of the system which are not available commercially. The system further demonstrates a focus on paragraphs of information to decide relevance, and it can be used (without modification) to intelligently search all kinds of document collections, such as collections of legal documents, medical documents, news stories, patents, and so forth. The purpose of this paper is to demonstrate the usefulness of statistical ranking, our semantic improvement, and relevance feedback.
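
    A minimal sketch of statistical ranking combined with Rocchio-style relevance feedback is shown below (Python/numpy, with a tiny invented corpus). It is only a schematic of these two ideas, not the KSC system or its semantics component.

        import numpy as np

        # Tiny illustrative corpus; term-document matrix built by hand for clarity
        docs = ["shuttle launch pad inspection report",
                "payload bay door actuator failure",
                "launch weather constraints and wind limits",
                "crew training schedule for launch week"]
        vocab = sorted({w for d in docs for w in d.split()})
        tf = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)
        idf = np.log(len(docs) / (1e-9 + (tf > 0).sum(axis=0)))
        doc_vecs = tf * idf

        def rank(query_vec):
            # Cosine-similarity ranking of documents against the query vector
            sims = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1)
                                           * np.linalg.norm(query_vec) + 1e-9)
            return np.argsort(-sims), sims

        query = np.array([(w == "launch") + (w == "failure") for w in vocab], float) * idf
        order, _ = rank(query)
        print("initial ranking:", order)

        # Rocchio-style relevance feedback: move the query toward documents the
        # user judged relevant and away from those judged non-relevant.
        relevant, nonrelevant = [1], [3]
        feedback_query = (1.0 * query
                          + 0.75 * doc_vecs[relevant].mean(axis=0)
                          - 0.25 * doc_vecs[nonrelevant].mean(axis=0))
        order_fb, _ = rank(feedback_query)
        print("after feedback:", order_fb)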

  5. Contour changes in human alveolar bone following tooth extraction of the maxillary central incisor.

    PubMed

    Li, Bei; Wang, Yao

    2014-12-01

    The purpose of this study was to apply cone-beam computed tomography (CBCT) to observe contour changes in human alveolar bone after tooth extraction of the maxillary central incisor and to provide original morphological evidence for aesthetic implant treatment in the maxillary anterior area. Forty patients were recruited into the study. Each patient had two CBCT scans (CBCT I and CBCT II), one taken before and one taken three months after tooth extraction of maxillary central incisor (test tooth T). A fixed anatomic reference point was used to orient the starting axial slice of the two scans. On three CBCT I axial slices, which represented the deep, middle, and shallow layers of the socket, labial and palatal alveolar bone widths of T were measured. The number of sagittal slices from the start point to the pulp centre of T was recorded. On three CBCT II axial slices, the pulp centres of extracted T were oriented according to the number of moved sagittal slices recorded in CBCT I. Labial and palatal alveolar bone widths at the oriented sites were measured. On the CBCT I axial slice which represented the middle layer of the socket, sagittal slices were reconstructed. Relevant distances of T on the sagittal slice were measured, as were the alveolar bone width and tooth length of the opposite central incisor. On the CBCT II axial slice, which represented the middle layer of the socket, relevant distances recorded in CBCT I were transferred on the sagittal slice. The height reduction of alveolar bone on labial and palatal sides was measured, as were the alveolar bone width and tooth length of the opposite central incisor at the oriented site. Intraobserver reliability assessed by intraclass correlation coefficients (ICCs) was high. Paired sample t-tests were performed. The alveolar bone width and tooth length of the opposite central incisor showed no statistical differences (P<0.05). The labial alveolar bone widths of T at the deep, middle, and shallow layers all showed statistical differences. However, no palatal alveolar bone widths showed any statistical differences. The width reduction of alveolar bone was 1.2, 1.6, and 2.7 mm at the deep, middle, and shallow layers, respectively. The height reduction of alveolar bone on labial and palatal sides of T both showed statistical differences, which was 1.9 and 1.1 mm, respectively.

  6. Switching from usual brand cigarettes to a tobacco-heating cigarette or snus: Part 3. Biomarkers of biological effect

    PubMed Central

    Ogden, Michael W.; Marano, Kristin M.; Jones, Bobbette A.; Morgan, Walter T.; Stiles, Mitchell F.

    2015-01-01

    A randomized, multi-center study of adult cigarette smokers switched to tobacco-heating cigarettes, snus or ultra-low machine yield tobacco-burning cigarettes (50/group) for 24 weeks was conducted. Evaluation of biomarkers of biological effect (e.g. inflammation, lipids, hypercoagulable state) indicated that the majority of consistent and statistically significant improvements over time within each group were observed in markers of inflammation. Consistent and statistically significant differences in pairwise comparisons between product groups were not observed. These findings are relevant to the understanding of biomarkers of biological effect related to cigarette smoking as well as the risk continuum across various tobacco products (ClinicalTrials.gov Identifier: NCT02061917). PMID:26525962

  7. Statistical Literacy in the Data Science Workplace

    ERIC Educational Resources Information Center

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  8. Which Type of Risk Information to Use for Whom? Moderating Role of Outcome-Relevant Involvement in the Effects of Statistical and Exemplified Risk Information on Risk Perceptions.

    PubMed

    So, Jiyeon; Jeong, Se-Hoon; Hwang, Yoori

    2017-04-01

    The extant empirical research examining the effectiveness of statistical and exemplar-based health information is largely inconsistent. Under the premise that the inconsistency may be due to an unacknowledged moderator (O'Keefe, 2002), this study examined a moderating role of outcome-relevant involvement (Johnson & Eagly, 1989) in the effects of statistical and exemplified risk information on risk perception. Consistent with predictions based on elaboration likelihood model (Petty & Cacioppo, 1984), findings from an experiment (N = 237) concerning alcohol consumption risks showed that statistical risk information predicted risk perceptions of individuals with high, rather than low, involvement, while exemplified risk information predicted risk perceptions of those with low, rather than high, involvement. Moreover, statistical risk information contributed to negative attitude toward drinking via increased risk perception only for highly involved individuals, while exemplified risk information influenced the attitude through the same mechanism only for individuals with low involvement. Theoretical and practical implications for health risk communication are discussed.

  9. Effect of reverse shoulder design philosophy on muscle moment arms.

    PubMed

    Hamilton, Matthew A; Diep, Phong; Roche, Chris; Flurin, Pierre Henri; Wright, Thomas W; Zuckerman, Joseph D; Routman, Howard

    2015-04-01

    This study analyzes the muscle moment arms of three different reverse shoulder design philosophies using a previously published method. Digital bone models of the shoulder were imported into a 3D modeling software and markers placed for the origin and insertion of relevant muscles. The anatomic model was used as a baseline for moment arm calculations. Subsequently, three different reverse shoulder designs were virtually implanted and moment arms were analyzed in abduction and external rotation. The results indicate that the lateral offset between the joint center and the axis of the humerus specific to one reverse shoulder design increased the external rotation moment arms of the posterior deltoid relative to the other reverse shoulder designs. The other muscles analyzed demonstrated differences in the moment arms, but none of the differences reached statistical significance. This study demonstrated how the combination of variables making up different reverse shoulder designs can affect the moment arms of the muscles in different and statistically significant ways. The role of humeral offset in reverse shoulder design has not been previously reported and could have an impact on external rotation and stability achieved post-operatively. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  10. [Donepezil in patients with Alzheimer's disease--a critical appraisal of the AD2000 study].

    PubMed

    Kaiser, Thomas; Florack, Christiane; Franz, Heinrich; Sawicki, Peter T

    2005-03-15

    The AD2000 study was a randomized placebo-controlled trial assessing the effects of donepezil, a cholinesterase inhibitor, in patients with Alzheimer's disease. It was the first long-term RCT not sponsored by the pharmaceutical industry. The study did not show any significant effect on patient-relevant outcomes. However, donepezil had a significant effect on cognitive scores. More patients taking donepezil stopped treatment due to adverse events, even when taking only 5 mg once daily. There are major concerns regarding the conduct of the AD2000 study as well as the presentation of the results. Far fewer patients than originally planned were recruited, resulting in a low statistical power to detect a significant difference between the two treatments. In addition, no true intention-to-treat analysis based on the first randomization is presented. The validity of the AD2000 trial has to be questioned. However, there is still insufficient evidence to support the claim that cholinesterase inhibitors have beneficial effects on patient-relevant outcomes in patients with Alzheimer's disease. The change in cognitive performance as measured by different scales does not necessarily correspond to substantial changes in patient-relevant outcomes. In conclusion, the widespread use of cholinesterase inhibitors in patients with Alzheimer's disease is not supported by current evidence. Long-term randomized controlled trials focusing on patient-relevant outcomes instead of cognitive scores are urgently needed.

  11. Patch test results in children and adolescents. Study from the Santa Casa de Belo Horizonte Dermatology Clinic, Brazil, from 2003 to 2010*

    PubMed Central

    Rodrigues, Dulcilea Ferraz; Goulart, Eugênio Marcos Andrade

    2015-01-01

    BACKGROUND Patch testing is an efficient method to identify the allergen responsible for allergic contact dermatitis. OBJECTIVE To evaluate the results of patch tests in children and adolescents comparing these two age groups' results. METHODS Cross-sectional study to assess patch test results of 125 children and adolescents aged 1-19 years, with suspected allergic contact dermatitis, in a dermatology clinic in Brazil. Two Brazilian standardized series were used. RESULTS Seventy four (59.2%) patients had "at least one positive reaction" to the patch test. Among these positive tests, 77.0% were deemed relevant. The most frequent allergens were nickel (36.8%), thimerosal (18.4%), tosylamide formaldehyde resin (6.8%), neomycin (6.4%), cobalt (4.0%) and fragrance mix I (4.0%). The most frequent positive tests came from adolescents (p=0.0014) and females (p=0.0002). There was no relevant statistical difference concerning contact sensitizations among patients with or without atopic history. However, there were significant differences regarding sensitization to nickel (p=0.029) and thimerosal (p=0.042) between the two age groups under study, while adolescents were the most affected. CONCLUSION Nickel and fragrances were the only positive (and relevant) allergens in children. Nickel and tosylamide formaldehyde resin were the most frequent and relevant allergens among adolescents. PMID:26560213

  12. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. The aim was to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages careful thinking, and diverts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistically significant or not). When reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
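
    The recommended style of reporting, an effect estimate with its confidence interval rather than a dichotomized P-value, is easy to compute. The sketch assumes Python with numpy and scipy and uses hypothetical cohort counts.

        import numpy as np
        from scipy.stats import norm

        # Hypothetical 2x2 cohort data: exposed vs unexposed, cases vs total
        cases_exp, n_exp = 30, 200
        cases_unexp, n_unexp = 18, 220

        risk_exp = cases_exp / n_exp
        risk_unexp = cases_unexp / n_unexp
        rr = risk_exp / risk_unexp

        # 95% confidence interval for the risk ratio, computed on the log scale
        se_log_rr = np.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
        z = norm.ppf(0.975)
        ci_low, ci_high = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)
        print(f"risk ratio = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")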

  13. Pilot study for the registry of complications in rheumatic diseases from the German Society of Surgery (DGORh): evaluation of methods and data from the first 1000 patients

    PubMed Central

    Kostuj, Tanja; Rehart, Stefan; Matta-Hurtado, Ronald; Biehl, Christoph; Willburger, Roland E; Schmidt, Klaus

    2017-01-01

    Objective Most patients suffering with rheumatic diseases who undergo surgical treatment are receiving immune-modulating therapy. To determine whether these medications affect their outcomes a national registry was established in Germany by the German Society of Surgery (DGORh). Data from the first 1000 patients were used in a pilot study to identify relevant corisk factors and to determine whether such a registry is suitable for developing accurate and relevant recommendations. Design and participants Data were collected from patients undergoing surgical treatments with their written consent. A second consent form was used, if complications occurred. During this pilot study, in order to obtain a quicker overview, risk factors were considered only in patients with complications. Only descriptive statistical analysis was employed in this pilot study due to limited number of observed complications and inhomogeneous data regarding the surgery and the medications the patients received. Analytical statistics will be performed to confirm the results in a future outcome study. Results Complications occurred in 26 patients and were distributed equally among the different types of surgeries. Twenty one of these patients were receiving immune-modulating therapy at the time, while five were not. Infections were observed in 2.3% of patients receiving and in 5.1% not receiving immunosuppression. Conclusions Due to the low number of cases, inhomogeneity in the diseases and the treatments received by the patients in this pilot study, it is not possible to develop standardised best-practice recommendations to optimise their care. Based on this observation we conclude that in order to be suitable to develop accurate and relevant recommendations a national registry must include the most important and relevant variables that impact the care and outcomes of these patients. PMID:29018066

  14. Acupuncture for peripheral joint osteoarthritis

    PubMed Central

    Manheimer, Eric; Cheng, Ke; Linde, Klaus; Lao, Lixing; Yoo, Junghee; Wieland, Susan; van der Windt, Daniëlle AWM; Berman, Brian M; Bouter, Lex M

    2011-01-01

    Background Peripheral joint osteoarthritis is a major cause of pain and functional limitation. Few treatments are safe and effective. Objectives To assess the effects of acupuncture for treating peripheral joint osteoarthritis. Search strategy We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2008, Issue 1), MEDLINE, and EMBASE (both through December 2007), and scanned reference lists of articles. Selection criteria Randomized controlled trials (RCTs) comparing needle acupuncture with a sham, another active treatment, or a waiting list control group in people with osteoarthritis of the knee, hip, or hand. Data collection and analysis Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We calculated standardized mean differences using the differences in improvements between groups. Main results Sixteen trials involving 3498 people were included. Twelve of the RCTs included only people with OA of the knee, 3 only OA of the hip, and 1 a mix of people with OA of the hip and/or knee. In comparison with a sham control, acupuncture showed statistically significant, short-term improvements in osteoarthritis pain (standardized mean difference -0.28, 95% confidence interval -0.45 to -0.11; 0.9 point greater improvement than sham on 20 point scale; absolute percent change 4.59%; relative percent change 10.32%; 9 trials; 1835 participants) and function (-0.28, -0.46 to -0.09; 2.7 point greater improvement on 68 point scale; absolute percent change 3.97%; relative percent change 8.63%); however, these pooled short-term benefits did not meet our predefined thresholds for clinical relevance (i.e. 1.3 points for pain; 3.57 points for function) and there was substantial statistical heterogeneity. Additionally, restriction to sham-controlled trials using shams judged most likely to adequately blind participants to treatment assignment (which were also the same shams judged most likely to have physiological activity), reduced heterogeneity and resulted in pooled short-term benefits of acupuncture that were smaller and non-significant. In comparison with sham acupuncture at the six-month follow-up, acupuncture showed borderline statistically significant, clinically irrelevant improvements in osteoarthritis pain (-0.10, -0.21 to 0.01; 0.4 point greater improvement than sham on 20 point scale; absolute percent change 1.81%; relative percent change 4.06%; 4 trials;1399 participants) and function (-0.11, -0.22 to 0.00; 1.2 point greater improvement than sham on 68 point scale; absolute percent change 1.79%; relative percent change 3.89%). In a secondary analysis versus a waiting list control, acupuncture was associated with statistically significant, clinically relevant short-term improvements in osteoarthritis pain (-0.96, -1.19 to -0.72; 14.5 point greater improvement than sham on 100 point scale; absolute percent change 14.5%; relative percent change 29.14%; 4 trials; 884 participants) and function (-0.89, -1.18 to -0.60; 13.0 point greater improvement than sham on 100 point scale; absolute percent change 13.0%; relative percent change 25.21%). In the head-on comparisons of acupuncture with the ‘supervised osteoarthritis education’ and the ‘physician consultation’ control groups, acupuncture was associated with clinically relevant short- and long-term improvements in pain and function. 
In the head-on comparisons of acupuncture with ‘home exercises/advice leaflet’ and ‘supervised exercise’, acupuncture was associated with similar treatment effects as the controls. Acupuncture as an adjuvant to an exercise-based physiotherapy program did not result in any greater improvements than the exercise program alone. Information on safety was reported in only 8 trials, and even in these trials there was limited reporting and heterogeneous methods. Authors' conclusions Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding. Waiting list-controlled trials of acupuncture for peripheral joint osteoarthritis suggest statistically significant and clinically relevant benefits, much of which may be due to expectation or placebo effects. PMID:20091527
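
    The standardized mean differences pooled in reviews like this one are computed from per-trial means, standard deviations and sample sizes. A minimal fixed-effect pooling sketch is given below (Python/numpy); the trial values are invented and are not those of the included RCTs.

        import numpy as np

        def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
            """Standardized mean difference (Hedges' g) and its variance."""
            sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
            d = (mean_t - mean_c) / sd_pooled
            j = 1 - 3 / (4 * (n_t + n_c) - 9)          # small-sample correction
            g = j * d
            var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
            return g, var_g

        # Hypothetical per-trial pain improvements (acupuncture vs sham), 0-20 scale
        trials = [(-4.1, 3.9, 120, -3.2, 4.0, 118),
                  (-5.0, 4.2, 210, -4.1, 4.3, 205),
                  (-3.6, 3.7, 95,  -3.3, 3.8, 97)]
        g_list, w_list = [], []
        for t in trials:
            g, var_g = smd(*t)
            g_list.append(g)
            w_list.append(1 / var_g)

        # Fixed-effect inverse-variance pooling of the per-trial effect sizes
        g_arr, w_arr = np.array(g_list), np.array(w_list)
        pooled = np.sum(w_arr * g_arr) / np.sum(w_arr)
        se = np.sqrt(1 / np.sum(w_arr))
        print(f"pooled SMD = {pooled:.2f} "
              f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")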

  15. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess the minimum ? required for relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain-length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  16. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
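
    The kind of analysis described, maximum likelihood fitting of a failure distribution with a goodness-of-fit check and confidence intervals, can be sketched with scipy. The times-to-failure below are simulated, not Air Force or Navy maintenance data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Hypothetical times-to-failure (flight hours) for one equipment family
        ttf = stats.weibull_min.rvs(c=1.6, scale=900.0, size=250, random_state=rng)

        # Maximum likelihood fit of a two-parameter Weibull (location fixed at zero)
        shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
        print(f"shape = {shape:.2f}, scale = {scale:.0f} h")

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution
        # (the p-value is optimistic because parameters were estimated from the data)
        ks_stat, ks_p = stats.kstest(ttf, 'weibull_min', args=(shape, loc, scale))
        print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")

        # Approximate normal-theory confidence interval for the mean time to failure
        mean_ttf = ttf.mean()
        ci = stats.t.interval(0.95, len(ttf) - 1, loc=mean_ttf, scale=stats.sem(ttf))
        print(f"mean TTF = {mean_ttf:.0f} h, 95% CI {ci[0]:.0f} to {ci[1]:.0f}")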

  17. Constructing and Modifying Sequence Statistics for relevent Using informR in 𝖱

    PubMed Central

    Marcum, Christopher Steven; Butts, Carter T.

    2015-01-01

    The informR package greatly simplifies the analysis of complex event histories in 𝖱 by providing user friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts’ (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed by the rem() model fitting for a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488

  18. Inferring general relations between network characteristics from specific network ensembles.

    PubMed

    Cardanobile, Stefano; Pernice, Volker; Deger, Moritz; Rotter, Stefan

    2012-01-01

    Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to which degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with the present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (C. elegans neuronal network, R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.

  19. Addressing the mischaracterization of extreme rainfall in regional climate model simulations - A synoptic pattern based bias correction approach

    NASA Astrophysics Data System (ADS)

    Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona

    2018-01-01

    Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that a change in the mix of different types of extreme rainfall events (i.e. convective and non-convective) in a warmer climate is of little relevance in the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach to account for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) using self-organizing maps classification. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Using over 600,000 synthetic cases, statistically significant differences are found to be present in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
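
    The contrast between conventional quantile mapping and a correction conditioned on synoptic patterns can be sketched as follows. This is a schematic in Python/numpy with synthetic gamma-distributed rainfall and a made-up two-class pattern label standing in for the SOM classes; it is not the WRF experiment.

        import numpy as np

        def quantile_map(model_hist, obs, model_future):
            """Empirical quantile mapping: transfer the observed distribution to the model."""
            quantiles = np.linspace(0.01, 0.99, 99)
            model_q = np.quantile(model_hist, quantiles)
            obs_q = np.quantile(obs, quantiles)
            return np.interp(model_future, model_q, obs_q)

        rng = np.random.default_rng(8)
        obs = rng.gamma(2.0, 8.0, 3000)            # hypothetical observed daily rainfall
        model_hist = rng.gamma(2.0, 6.0, 3000)     # model underestimates intensity
        model_future = rng.gamma(2.2, 6.5, 3000)

        # Unconditional correction vs correction done separately within each pattern class
        corrected_all = quantile_map(model_hist, obs, model_future)

        pattern_hist = rng.integers(0, 2, model_hist.size)
        pattern_future = rng.integers(0, 2, model_future.size)
        corrected_sp = np.empty_like(model_future)
        for sp in (0, 1):
            corrected_sp[pattern_future == sp] = quantile_map(
                model_hist[pattern_hist == sp], obs[pattern_hist == sp],
                model_future[pattern_future == sp])

        print("99th percentile of corrected future rainfall:",
              f"unconditional {np.percentile(corrected_all, 99):.1f}",
              f"pattern-conditioned {np.percentile(corrected_sp, 99):.1f}")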

  20. Prevalence of bacteria resistant to antibiotics and/or biocides on meat processing plant surfaces throughout meat chain production.

    PubMed

    Lavilla Lerma, Leyre; Benomar, Nabil; Gálvez, Antonio; Abriouel, Hikmate

    2013-02-01

    In order to investigate the prevalence of bacteria resistant to biocides and/or antibiotics throughout meat chain production, from sacrifice to the end of the production line, samples from various surfaces of a goat and lamb slaughterhouse representative of the region were analyzed by a culture-dependent approach. Resistant psychrotrophs (n=255 strains), Pseudomonas sp. (n=166 strains), E. coli (n=23 strains), Staphylococcus sp. (n=17 strains) and LAB (n=82, represented mainly by Lactobacillus sp.) were isolated. Psychrotrophs and pseudomonads resistant to different antimicrobials (47 and 29%, respectively) were frequently detected in almost all areas of the meat processing plant regardless of the antimicrobial used, although there was a clear shift in the spectrum of other bacterial groups; for this reason, resistance was determined according to several parameters: the antimicrobial tested, the sampling zone and the bacterial group. Correlation of the different parameters was examined using principal component analysis (PCA), which determined that quaternary ammonium compounds and hexadecylpyridinium were the most relevant biocides for resistance in Pseudomonas sp., while ciprofloxacin and hexachlorophene were more relevant for psychrotrophs, LAB, and to a lesser extent Staphylococcus sp. and Escherichia coli. On the other hand, PCA of the sampling zones determined that the sacrifice room (SR) and cutting room (CR), considered the main sources of antibiotic- and/or biocide-resistant bacteria, showed opposite behaviour concerning the relevance of antimicrobials in determining resistance, with hexadecylpyridinium, cetrimide and chlorhexidine being the most relevant in CR, while hexachlorophene, oxonia 6P and PHMG were the most relevant in SR. In conclusion, rotational use of the relevant biocides as disinfectants in CR and SR is recommended in an environment which is frequently disinfected. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. [Relevance between writing characteristic and therapeutic effect in schizophrenia].

    PubMed

    Li, Chun-Yan; Cai, Wei-Xiong

    2014-04-01

    To explore the relevance between writing characteristics and therapeutic effect in schizophrenia and to discuss the influence of aggressive behavior on writing characteristics. Casual and fixed writing samples were recorded at admission and at one, two, four, and eight weeks after treatment, and the Positive and Negative Syndrome Scale (PANSS) and Modified Overt Aggression Scale (MOAS) were rated. Two characteristics, "relationship between font and grid lines" and "having big strokes or not", were chosen and compared before and after treatment. Eight weeks after treatment, the score of the PANSS had decreased. The condition of the patients and the writing characteristics improved as well. The differences in writing characteristics were statistically significant in patients with aggressive behavior before and after treatment (P < 0.05). The writing characteristics are related to therapeutic effect and improved along with it in aggressive patients.

  2. Ceramic Defects in Metal-Ceramic Fixed Dental Prostheses Made from Co-Cr and Au-Pt Alloys: A Retrospective Study.

    PubMed

    Mikeli, Aikaterini; Boening, Klaus W; Lißke, Benjamin

    2015-01-01

    Ceramic defects in porcelain-fused-to-metal (PFM) restorations may depend on framework alloy type. This study assessed ceramic defects on cobalt-chromium- (Co-Cr-) and gold-platinum- (Au-Pt-) based PFM restorations. In this study, 147 Co-Cr-based and 168 Au-Pt-based PFM restorations inserted between 1998 and 2010 (139 patients) were examined for ceramic defects. Detected defects were assigned to three groups according to clinical defect relevance. Ceramic defect rates (Co-Cr-based: 12.9%; Au-Pt-based: 7.2%) revealed no significant difference but a strong statistical trend (U test, P = .082). Most defects were of little clinical relevance. Co-Cr PFM restorations may be at higher risk for ceramic defects compared to Au-Pt-based restorations.

  3. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogenous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of a model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
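
    A compact sketch of the two tools discussed, the c-statistic and a calibration check of predicted versus observed events over risk deciles, is given below (Python/numpy, with simulated risks and outcomes rather than NSQIP data).

        import numpy as np

        def c_statistic(y_true, y_prob):
            """Concordance (c-statistic / AUC) via the rank-sum formula."""
            order = np.argsort(y_prob)
            ranks = np.empty_like(order, dtype=float)
            ranks[order] = np.arange(1, len(y_prob) + 1)
            n_pos = y_true.sum()
            n_neg = len(y_true) - n_pos
            return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(9)
        n = 5000
        risk = rng.beta(1, 20, n)                  # hypothetical predicted mortality risk
        died = rng.binomial(1, risk)               # outcomes consistent with the model

        print("c-statistic:", round(c_statistic(died, risk), 3))

        # Calibration: predicted vs observed events across risk deciles
        deciles = np.quantile(risk, np.linspace(0, 1, 11))
        bins = np.digitize(risk, deciles[1:-1])
        for d in range(10):
            in_bin = bins == d
            print(f"decile {d + 1}: predicted {risk[in_bin].sum():6.1f}  "
                  f"observed {died[in_bin].sum():4d}")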

  4. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683
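
    The difference between the ordinary CoxPH model and its recurrent-event generalisations largely comes down to how the risk intervals are laid out. The sketch below builds an Andersen-Gill-style counting-process table from toy data and fits it with the lifelines package's time-varying Cox fitter; the column names, covariate and values are assumptions, and a frailty or PWP-GT analysis would require additional terms or stratification not shown here.

        # Sketch: Andersen-Gill (counting-process) layout for recurrent injury data, toy values only.
        # Each row is an at-risk interval (start, stop] ending in an injury (1) or censoring (0).
        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        rows = [
            # player, start, stop, injured, forward (hypothetical 0/1 covariate)
            (1,  0,  5, 1, 1),
            (1,  5, 12, 1, 1),
            (1, 12, 29, 0, 1),
            (2,  0, 20, 1, 0),
            (2, 20, 29, 0, 0),
            (3,  0,  9, 1, 1),
            (3,  9, 29, 0, 1),
            (4,  0, 29, 0, 0),
        ]
        df = pd.DataFrame(rows, columns=["player", "start", "stop", "injured", "forward"])

        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="player", event_col="injured", start_col="start", stop_col="stop")
        ctv.print_summary()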

  5. BTS guide to good statistical practice

    DOT National Transportation Integrated Search

    2002-09-01

    Quality of data has many faces. Primarily, it has to be relevant to its users. Relevance is an outcome that is achieved through a series of steps starting with a planning process that links user needs to data requirements. It continues through acq...

  6. Intensity invariance properties of auditory neurons compared to the statistics of relevant natural signals in grasshoppers.

    PubMed

    Clemens, Jan; Weschke, Gerroth; Vogel, Astrid; Ronacher, Bernhard

    2010-04-01

    The temporal pattern of amplitude modulations (AM) is often used to recognize acoustic objects. To identify objects reliably, intensity invariant representations have to be formed. We approached this problem within the auditory pathway of grasshoppers. We presented AM patterns modulated at different time scales and intensities. Metric space analysis of neuronal responses allowed us to determine how well, how invariantly, and at which time scales AM frequency is encoded. We find that in some neurons spike-count cues contribute substantially (20-60%) to the decoding of AM frequency at a single intensity. However, such cues are not robust when intensity varies. The general intensity invariance of the system is poor. However, there exists a range of AM frequencies around 83 Hz where intensity invariance of local interneurons is relatively high. In this range, natural communication signals exhibit much variation between species, suggesting an important behavioral role for this frequency band. We hypothesize, just as has been proposed for human speech, that the communication signals might have evolved to match the processing properties of the receivers. This contrasts with optimal coding theory, which postulates that neuronal systems are adapted to the statistics of the relevant signals.

  7. A New Approach for the Calculation of Total Corneal Astigmatism Considering the Magnitude and Orientation of Posterior Corneal Astigmatism and Thickness.

    PubMed

    Piñero, David P; Caballero, María T; Nicolás-Albujer, Juan M; de Fez, Dolores; Camps, Vicent J

    2018-06-01

    To evaluate a new method of calculation of total corneal astigmatism based on Gaussian optics and the power design of a spherocylindrical lens (C) in the healthy eye and to compare it with keratometric (K) and power vector (PV) methods. A total of 92 healthy eyes of 92 patients (age, 17-65 years) were enrolled. Corneal astigmatism was calculated in all cases using K, PV, and our new approach C that considers the contribution of corneal thickness. An evaluation of the interchangeability of our new approach with the other 2 methods was performed using Bland-Altman analysis. Statistically significant differences between methods were found in the magnitude of astigmatism (P < 0.001), with the highest values provided by K. These differences in the magnitude of astigmatism were clinically relevant when K and C were compared [limits of agreement (LoA), -0.40 to 0.62 D], but not for the comparison between PV and C (LoA, -0.03 to 0.01 D). Differences in the axis of astigmatism between methods did not reach statistical significance (P = 0.408). However, they were clinically relevant when comparing K and C (LoA, -5.48 to 15.68 degrees) but not for the comparison between PV and C (LoA, -1.68 to 1.42 degrees). The use of our new approach for the calculation of total corneal astigmatism provides astigmatic results comparable to the PV method, which suggests that the effect of pachymetry on total corneal astigmatism is minimal in healthy eyes.
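
    The Bland-Altman limits of agreement quoted above are simply the mean of the paired differences plus or minus 1.96 standard deviations of those differences. The sketch below shows the computation on invented astigmatism values.

        # Sketch: Bland-Altman 95% limits of agreement between two astigmatism methods (invented values, dioptres).
        import numpy as np

        method_k = np.array([1.10, 0.75, 1.40, 0.90, 1.25, 0.60])   # e.g., keratometric values
        method_c = np.array([0.95, 0.70, 1.20, 0.80, 1.05, 0.55])   # e.g., Gaussian-optics values

        diff = method_k - method_c
        bias = diff.mean()
        sd = diff.std(ddof=1)
        print(f"bias = {bias:.2f} D, 95% LoA = [{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] D")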

  8. Electron microscopic changes of detrusor in benign enlargement of prostate and its clinical correlation.

    PubMed

    Yadav, Sher Singh; Bhattar, Rohit; Sharma, Lokesh; Banga, Gautam; Sadasukhi, Trilok Chandra

    2017-01-01

    To study the ultrastructural changes in bladder musculature in cases of BPE and their clinical relevance. In this descriptive, longitudinal, controlled, observational study patients were enrolled into three groups: group 1, group 2A and group 2B. The control group (group 1) consisted of age-matched normal male patients who underwent surveillance or diagnostic cystoscopy for microscopic hematuria or irritative symptoms. The case group (group 2) comprised patients with BPE undergoing TURP. The case group (group 2) was further classified into category 2A (patients not on catheter) and category 2B (patients on catheter). All relevant clinical parameters, such as IPSS, prostate size, Qmax and PVR, were recorded. Cystoscopy and bladder biopsy were performed in all patients. Various ultrastructural parameters, such as myocytes, fascicular pattern, interstitial tissue, nerve hypertrophy and cell junction pattern, were analyzed under the electron microscope and clinically correlated using appropriate statistical tests. The control group differed significantly from the case group in baseline parameters such as IPSS, flow rate and prostate size, both preoperatively and postoperatively, except for PVR, for which a difference was seen only preoperatively. There was a statistically significant difference in ultrastructural patterns between the case and control groups in all five electron microscopic patterns. However, no significant difference was found between the subcategories of the case group. BPE is responsible for ultrastructural changes in detrusor muscle and these changes persist even after TURP. Nerve hypertrophy, which was not thoroughly discussed in previous studies, is also one of the salient features of this study. Copyright® by the International Brazilian Journal of Urology.

  9. Host Genes and Resistance/Sensitivity to Military Priority Pathogens

    DTIC Science & Technology

    2011-06-01

    tularensis (FT Schu S4) that yields a significantly different outcome to infection in B6 and D2 mice. Both strains succumb to infection at essentially the...Figure 2). Some of the group sizes are too small to yield statistically relevant findings, and additional studies will be performed with these strains as...generated approximately 100-fold coverage of the DBA/2J genome (Table 2) and sequenced 99.96% of the DBA/2J genome (excluding gaps in the reference

  10. On the Way to 2020: Data for Vocational Education and Training Policies. Country Statistical Overviews. Update 2013

    ERIC Educational Resources Information Center

    Cedefop - European Centre for the Development of Vocational Training, 2014

    2014-01-01

    This report provides an updated statistical overview of vocational education and training (VET) and lifelong learning in European countries. These country statistical snapshots illustrate progress on indicators selected for their policy relevance and contribution to Europe 2020 objectives. The indicators take 2010 as the baseline year and present…

  11. Methods to Approach Velocity Data Reduction and Their Effects on Conformation Statistics in Viscoelastic Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas

    2009-03-01

    Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps to reveal more information on the time-dependent dynamics of viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between results from DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of the polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed in the perspective of data reduction modeling.
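
    Karhunen-Loeve data reduction of the kind described above can be sketched generically as a truncated singular value decomposition: keep the leading modes, reconstruct the field, and compare statistics of the original and reconstructed data. The snapshot matrix below is a random stand-in, not DNS output, and the retained mode count is arbitrary.

        # Sketch: Karhunen-Loeve (POD) reconstruction from a truncated set of modes.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 1000))     # 200 snapshots x 1000 spatial points (toy stand-in)
        X_mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

        k = 20                               # number of KL modes retained
        X_k = X_mean + (U[:, :k] * s[:k]) @ Vt[:k]

        energy = (s[:k] ** 2).sum() / (s ** 2).sum()
        rms_error = np.sqrt(((X - X_k) ** 2).mean())
        print(f"energy captured by {k} modes: {energy:.3f}, RMS reconstruction error: {rms_error:.3f}")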

  12. Evaluating pictogram prediction in a location-aware augmentative and alternative communication system.

    PubMed

    Garcia, Luís Filipe; de Oliveira, Luís Caldas; de Matos, David Martins

    2016-01-01

    This study compared the performance of two statistical location-aware pictogram prediction mechanisms, with an all-purpose (All) pictogram prediction mechanism, having no location knowledge. The All approach had a unique language model under all locations. One of the location-aware alternatives, the location-specific (Spec) approach, made use of specific language models for pictogram prediction in each location of interest. The other location-aware approach resulted from combining the Spec and the All approaches, and was designated the mixed approach (Mix). In this approach, the language models acquired knowledge from all locations, but a higher relevance was assigned to the vocabulary from the associated location. Results from simulations showed that the Mix and Spec approaches could only outperform the baseline in a statistically significant way if pictogram users reuse more than 50% and 75% of their sentences, respectively. Under low sentence reuse conditions there were no statistically significant differences between the location-aware approaches and the All approach. Under these conditions, the Mix approach performed better than the Spec approach in a statistically significant way.

  13. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper aims to propose a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting and choosing relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance tests used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically proven that the ACO-KNN algorithm is significantly improved compared to the baseline algorithms. In addition, the experimental results have proven that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a high-quality, optimal feature subset that can represent the actual data in customer review data.
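
    The parametric significance testing referred to above can be sketched as a paired t-test over per-fold F-scores of two feature-selection pipelines evaluated on the same folds. The scores below are placeholders, not the paper's results.

        # Sketch: paired t-test on per-fold F-scores of two feature-selection methods (placeholder values).
        from scipy import stats

        f_aco_knn = [0.83, 0.86, 0.84, 0.88, 0.85, 0.87, 0.84, 0.86, 0.85, 0.88]
        f_baseline = [0.79, 0.82, 0.80, 0.84, 0.81, 0.83, 0.80, 0.82, 0.81, 0.84]

        t_stat, p_value = stats.ttest_rel(f_aco_knn, f_baseline)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 suggests the gap is unlikely to be chance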

  14. Statistics Poster Challenge for Schools

    ERIC Educational Resources Information Center

    Payne, Brad; Freeman, Jenny; Stillman, Eleanor

    2013-01-01

    The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.

  15. Research methodology in dentistry: Part II — The relevance of statistics in research

    PubMed Central

    Krithikadatta, Jogikalmat; Valarmathi, Srinivasan

    2012-01-01

    The lifeline of original research depends on adept statistical analysis. However, there have been reports of statistical misconduct in studies, which could arise from an inadequate understanding of the fundamentals of statistics. There have been several reports on this across the medical and dental literature. This article aims at encouraging the reader to approach statistics from its logic rather than its theoretical perspective. The article also provides information on statistical misuse in the Journal of Conservative Dentistry between the years 2008 and 2011. PMID:22876003

  16. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    NASA Astrophysics Data System (ADS)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Any improvement in learning will depend on the students' approaches to learning, which are related to factors such as the assessment methods used, with tasks consisting of quizzes, tests, assignments and a final examination. This study attempted an alternative approach to measuring students' ability in an undergraduate statistics course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class but eventually had a good understanding at the end of the 14-week class. In terms of students' performance in the final examination, their ability to understand the topics varied across probability values, given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic to be the most difficult to learn.
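
    In the dichotomous Rasch model, the probability that a student of ability theta answers an item of difficulty b correctly is exp(theta - b) / (1 + exp(theta - b)). The sketch below evaluates that curve for a few hypothetical ability and difficulty values; it illustrates the form of the model, not this study's estimates.

        # Sketch: dichotomous Rasch model, P(correct) as a function of ability and item difficulty (logits).
        import math

        def rasch_probability(theta, b):
            """Probability of a correct response given ability theta and item difficulty b."""
            return math.exp(theta - b) / (1 + math.exp(theta - b))

        for theta in (-1.0, 0.0, 1.0, 2.0):       # hypothetical student abilities
            for b in (-0.5, 0.5, 1.5):            # hypothetical item difficulties
                print(f"theta={theta:+.1f}, b={b:+.1f} -> P={rasch_probability(theta, b):.2f}")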

  17. Relevance and reliability of experimental data in human health risk assessment of pesticides.

    PubMed

    Kaltenhäuser, Johanna; Kneuer, Carsten; Marx-Stoelting, Philip; Niemann, Lars; Schubert, Jens; Stein, Bernd; Solecki, Roland

    2017-08-01

    Evaluation of data relevance, reliability and contribution to uncertainty is crucial in regulatory health risk assessment if robust conclusions are to be drawn. Whether a specific study is used as key study, as additional information or not accepted depends in part on the criteria according to which its relevance and reliability are judged. In addition to GLP-compliant regulatory studies following OECD Test Guidelines, data from peer-reviewed scientific literature have to be evaluated in regulatory risk assessment of pesticide active substances. Publications should be taken into account if they are of acceptable relevance and reliability. Their contribution to the overall weight of evidence is influenced by factors including test organism, study design and statistical methods, as well as test item identification, documentation and reporting of results. Various reports make recommendations for improving the quality of risk assessments and different criteria catalogues have been published to support evaluation of data relevance and reliability. Their intention was to guide transparent decision making on the integration of the respective information into the regulatory process. This article describes an approach to assess the relevance and reliability of experimental data from guideline-compliant studies as well as from non-guideline studies published in the scientific literature in the specific context of uncertainty and risk assessment of pesticides. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Effects of spatial frequency bands on perceptual decision: it is not the stimuli but the comparison.

    PubMed

    Rotshtein, Pia; Schofield, Andrew; Funes, María J; Humphreys, Glyn W

    2010-08-24

    Observers performed three between- and two within-category perceptual decisions with hybrid stimuli comprising low and high spatial frequency (SF) images. We manipulated (a) attention to, and (b) congruency of information in the two SF bands. Processing difficulty of the different SF bands varied across different categorization tasks: house-flower, face-house, and valence decisions were easier when based on high SF bands, while flower-face and gender categorizations were easier when based on low SF bands. Larger interference also arose from response relevant distracters that were presented in the "preferred" SF range of the task. Low SF effects were facilitated by short exposure durations. The results demonstrate that decisions are affected by an interaction of task and SF range and that the information from the non-attended SF range interfered at the decision level. A further analysis revealed that overall differences in the statistics of image features, in particular differences of orientation information between two categories, were associated with decision difficulty. We concluded that the advantage of using information from one SF range over another depends on the specific task requirements that built on the differences of the statistical properties between the compared categories.

  19. Improvement of Biocatalysts for Industrial and Environmental Purposes by Saturation Mutagenesis

    PubMed Central

    Valetti, Francesca; Gilardi, Gianfranco

    2013-01-01

    Laboratory evolution techniques are becoming increasingly widespread among protein engineers for the development of novel and designed biocatalysts. The palette of different approaches ranges from completely randomized strategies to rational and structure-guided mutagenesis, with a wide variety of costs, impacts, drawbacks and relevance to biotechnology. A technique that strikes a convincing compromise between the extremes of fully randomized and rational mutagenesis, with a high benefit/cost ratio, is saturation mutagenesis. Here we present and discuss this approach in its many facets, also tackling the issues of randomization, statistical evaluation of library completeness and throughput efficiency of screening methods. Successful recent applications covering different classes of enzymes are presented with reference to the literature and to research lines pursued in our group. The focus is on saturation mutagenesis as a tool for designing novel biocatalysts, specifically relevant to the production of fine chemicals, to improving bulk enzymes for industry, and to engineering technical enzymes involved in the treatment of waste, detoxification and the production of clean energy from renewable sources. PMID:24970191

  20. Cognitive capitalism: the effect of cognitive ability on wealth, as mediated through scientific achievement and economic freedom.

    PubMed

    Rindermann, Heiner; Thompson, James

    2011-06-01

    Traditional economic theories stress the relevance of political, institutional, geographic, and historical factors for economic growth. In contrast, human-capital theories suggest that people's competences, mediated by technological progress, are the deciding factor in a nation's wealth. Using three large-scale assessments, we calculated cognitive-competence sums for the mean and for upper- and lower-level groups for 90 countries and compared the influence of each group's intellectual ability on gross domestic product. In our cross-national analyses, we applied different statistical methods (path analyses, bootstrapping) and measures developed by different research groups to various country samples and historical periods. Our results underscore the decisive relevance of cognitive ability--particularly of an intellectual class with high cognitive ability and accomplishments in science, technology, engineering, and math--for national wealth. Furthermore, this group's cognitive ability predicts the quality of economic and political institutions, which further determines the economic affluence of the nation. Cognitive resources enable the evolution of capitalism and the rise of wealth.

  1. Rivastigmine for Alzheimer's disease: Canadian interpretation of intermediate outcome measures and cost implications.

    PubMed

    Baladi, J F; Bailey, P A; Black, S; Bouchard, R W; Farcnik, K D; Gauthier, S; Kertesz, A; Mohr, E; Robillard, A

    2000-12-01

    Clinical studies have shown that patients with Alzheimer's disease (AD) who are treated with rivastigmine have statistically significantly better scores on 5 scales used to assess AD than control patients receiving placebo. However, the clinical meaning and cost implications of these differences are not clear. The purpose of this study was to assess the clinical meaning and cost implications of statistically significant results obtained in clinical trials of rivastigmine for the treatment of AD. Potential cost implications for the health care system, caregivers, and society are considered. Data on clinical effects of rivastigmine were obtained from published North American and European clinical studies of patients with mild to moderately severe AD receiving rivastigmine 6 to 12 mg/d (n = 828) or placebo (n = 647). Differences in scores on the Alzheimer's Disease Assessment Scale-Cognitive Function, Clinician's Interview-Based Impression of Change with both clinical and caregiver information considered, Progressive Deterioration Scale, Mini-Mental State Examination (MMSE), and Global Deterioration Scale were assessed. A convenience panel of 9 Canadian specialists experienced in the treatment of AD provided their opinions on the clinical importance of the trial results. Chart review was performed to identify specific behaviors that improved, and cost implications of improvements were assessed. The panel determined that statistically significant differences in scores on all scales except the MMSE were likely associated with functional or cognitive differences that were clinically relevant for patients, reflecting stabilization that would have beneficial consequences for caregivers and health care resource use. Subsequent chart review showed that improvement on specific scale items confirmed the physician panel's opinion. Analysis of possible cost implications to society indicated that medication expenditures would be offset largely by delays in the need for paid home care and institutionalization, positive effects on caregiver health, and less time lost from work for the caregiver. From the perspective of a Canadian specialist panel, rivastigmine treatment for AD produces clinically relevant effects for patients that are beneficial to caregivers. These effects suggest decreased use of caregiver resources and delays in the need for institutionalization, both of which reduce societal costs.

  2. Guide to good statistical practice in the transportation field

    DOT National Transportation Integrated Search

    2003-05-01

    Quality of data has many faces. Primarily, it has to be relevant (i.e., useful) to its users. Relevance is achieved through a series of steps starting with a planning process that links user needs to data requirements. It continues through acquisitio...

  3. Web-Based Survey Application to Collect Contextually Relevant Geographic Data With Exposure Times: Application Development and Feasibility Testing

    PubMed Central

    Tobin, Karin; Rudolph, Jonathan; Latkin, Carl

    2018-01-01

    Background Although studies that characterize the risk environment by linking contextual factors with individual-level data have advanced infectious disease and substance use research, there are opportunities to refine how we define relevant neighborhood exposures; this can in turn reduce the potential for exposure misclassification. For example, for those who do not inject at home, injection risk behaviors may be more influenced by the environment where they inject than where they live. Similarly, among those who spend more time away from home, a measure that accounts for different neighborhood exposures by weighting each unique location proportional to the percentage of time spent there may be more correlated with health behaviors than one’s residential environment. Objective This study aimed to develop a Web-based application that interacts with Google Maps application program interfaces (APIs) to collect contextually relevant locations and the amount of time spent in each. Our analysis examined the extent of overlap across different location types and compared different approaches for classifying neighborhood exposure. Methods Between May 2014 and March 2017, 547 participants enrolled in a Baltimore HIV care and prevention study completed an interviewer-administered Web-based survey that collected information about where participants were recruited, worked, lived, socialized, injected drugs, and spent most of their time. For each location, participants gave an address or intersection which they confirmed using Google Map and Street views. Geographic coordinates (and hours spent in each location) were joined to neighborhood indicators by Community Statistical Area (CSA). We computed a weighted exposure based on the proportion of time spent in each unique location. We compared neighborhood exposures based on each of the different location types with one another and the weighted exposure using analysis of variance with Bonferroni corrections to account for multiple comparisons. Results Participants reported spending the most time at home, followed by the location where they injected drugs. Injection locations overlapped most frequently with locations where people reported socializing and living or sleeping. The least time was spent in the locations where participants reported earning money and being recruited for the study; these locations were also the least likely to overlap with other location types. We observed statistically significant differences in neighborhood exposures according to the approach used. Overall, people reported earning money in higher-income neighborhoods and being recruited for the study and injecting in neighborhoods with more violent crime, abandoned houses, and poverty. Conclusions This analysis revealed statistically significant differences in neighborhood exposures when defined by different locations or weighted based on exposure time. Future analyses are needed to determine which exposure measures are most strongly associated with health and risk behaviors and to explore whether associations between individual-level behaviors and neighborhood exposures are modified by exposure times. PMID:29351899
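
    The time-weighted exposure measure described above amounts to a weighted average of a neighborhood indicator over a participant's reported locations, with weights proportional to the hours spent in each. The hours and indicator values in the sketch below are invented, and the poverty indicator is only a hypothetical stand-in for the Community Statistical Area data.

        # Sketch: neighborhood exposure weighted by the share of time spent at each reported location.
        import numpy as np

        hours = np.array([70.0, 20.0, 8.0, 2.0])            # e.g., home, injecting, socializing, income location
        poverty_rate = np.array([0.32, 0.41, 0.38, 0.18])   # hypothetical indicator for each location's neighborhood

        weights = hours / hours.sum()
        print(f"residential exposure:   {poverty_rate[0]:.3f}")     # home neighborhood only
        print(f"time-weighted exposure: {weights @ poverty_rate:.3f}")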

  4. Dephasing in a 5/2 quantum Hall Mach-Zehnder interferometer due to the presence of neutral edge modes

    NASA Astrophysics Data System (ADS)

    Dinaii, Yehuda; Goldstein, Moshe; Gefen, Yuval

    Non-Abelian statistics is an intriguing feature predicted to characterize quasiparticles in certain topological phases of matter. This property is both fascinating on the theoretical side and the key ingredient for the implementation of future topological quantum computers. A smoking gun manifestation of non-Abelian statistics consists of demonstrating that braiding of quasiparticles leads to transitions among different states in the relevant degenerate Hilbert manifold. This can be achieved utilizing a Mach-Zehnder interferometer, where Coulomb effects can be neglected, and the electric current is expected to carry clear signatures of non-Abelianity. Here we argue that attempts to measure non-Abelian statistics in the prominent quantum Hall fraction of 5/2 may fail; this can be understood by studying the corresponding edge theory at finite temperatures and bias. We find that the presence of neutral modes imposes stronger limitations on the experimental conditions as compared to quantum Hall states that do not support neutral edge modes. We discuss how to overcome this hindrance. Interestingly, neutral-mode-induced dephasing can be quite different in the Pfaffian state as compared to the anti-Pfaffian state, if the neutral and charge velocities are comparable.

  5. IEA Bioenergy Countries' Report: Bioenergy policies and status of implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacovsky, Dina; Ludwiczek, Nikolaus; Pointner, Christian

    2016-08-05

    This report was prepared from IEA statistical data, information from IRENA, and IEA Bioenergy Tasks’ country reports, combined with data provided by the IEA Bioenergy Executive Committee. All individual country reports were reviewed by the national delegates to the IEA Bioenergy Executive Committee, who have approved the content. In the first section of each country report, national renewable energy targets are presented (first table in each country report), and the main pieces of national legislation are discussed. In the second section of each country report the total primary energy supply (TPES) by resources and the contribution of bioenergy are presented. All data is taken from IEA statistics for the year 2014. Where 2014 data was not available, 2013 data was used. It is worth noting that data reported in national statistics can differ from the IEA data presented, as the reporting categories and definitions are different. In the third section of each country report, the research focus related to bioenergy is discussed. Relevant funding programs, major research institutes and projects are described. In the fourth section, recent major bioenergy developments are described. Finally, in the fifth section, links to sources of information are provided.

  6. Quantitatively measured tremor in hand-arm vibration-exposed workers.

    PubMed

    Edlund, Maria; Burström, Lage; Hagberg, Mats; Lundström, Ronnie; Nilsson, Tohr; Sandén, Helena; Wastensson, Gunilla

    2015-04-01

    The aim of the present study was to investigate the possible increase in hand tremor in relation to hand-arm vibration (HAV) exposure in a cohort of exposed and unexposed workers. Participants were 178 male workers with or without exposure to HAV. The study is cross-sectional regarding the outcome of tremor and has a longitudinal design with respect to exposure. The dose of HAV exposure was collected via questionnaires and measurements at several follow-ups. The CATSYS Tremor Pen(®) was used for measuring postural tremor. Multiple linear regression methods were used to analyze associations between different tremor variables and HAV exposure, along with predictor variables with biological relevance. There were no statistically significant associations between the different tremor variables and cumulative HAV or current exposure. Age was a statistically significant predictor of variation in tremor outcomes for three of the four tremor variables, whereas nicotine use was a statistically significant predictor of either left or right hand or both hands for all four tremor variables. In the present study, there was no evidence of an exposure-response association between HAV exposure and measured postural tremor. Increase in age and nicotine use appeared to be the strongest predictors of tremor.

  7. 12 CFR 348.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... means a natural person, corporation, or other business entity. (m) Relevant metropolitan statistical... median family income for the metropolitan statistical area (MSA), if a depository organization is located... exclusively to the business of retail merchandising or manufacturing; (ii) A person whose management functions...

  8. K(3)EDTA Vacuum Tubes Validation for Routine Hematological Testing.

    PubMed

    Lima-Oliveira, Gabriel; Lippi, Giuseppe; Salvagno, Gian Luca; Montagnana, Martina; Poli, Giovanni; Solero, Giovanni Pietro; Picheth, Geraldo; Guidi, Gian Cesare

    2012-01-01

    Background and Objective. Some in vitro diagnostic devices (e.g., blood collection vacuum tubes and syringes for blood analyses) are not validated before quality laboratory managers decide to start using them or to change the brand. Frequently, laboratory or hospital managers select the vacuum tubes for blood collection based on cost considerations or on the relevance of a brand. The aim of this study was to validate two dry K(3)EDTA vacuum tubes of different brands for routine hematological testing. Methods. Blood specimens from 100 volunteers were collected into two different K(3)EDTA vacuum tubes by a single, expert phlebotomist. Routine hematological testing was done on the Advia 2120i hematology system. The significance of the differences between samples was assessed by paired Student's t-test after checking for normality. The level of statistical significance was set at P < 0.05. Results and Conclusions. The tubes of the different brands evaluated can represent a clinically relevant source of variation only for mean platelet volume (MPV) and platelet distribution width (PDW). Basically, our validation will permit laboratory or hospital managers to select among the validated brands of vacuum tubes according to their own technical or economic reasons for routine hematological tests.

  9. K3EDTA Vacuum Tubes Validation for Routine Hematological Testing

    PubMed Central

    Lima-Oliveira, Gabriel; Lippi, Giuseppe; Salvagno, Gian Luca; Montagnana, Martina; Poli, Giovanni; Solero, Giovanni Pietro; Picheth, Geraldo; Guidi, Gian Cesare

    2012-01-01

    Background and Objective. Some in vitro diagnostic devices (e.g., blood collection vacuum tubes and syringes for blood analyses) are not validated before quality laboratory managers decide to start using them or to change the brand. Frequently, laboratory or hospital managers select the vacuum tubes for blood collection based on cost considerations or on the relevance of a brand. The aim of this study was to validate two dry K3EDTA vacuum tubes of different brands for routine hematological testing. Methods. Blood specimens from 100 volunteers were collected into two different K3EDTA vacuum tubes by a single, expert phlebotomist. Routine hematological testing was done on the Advia 2120i hematology system. The significance of the differences between samples was assessed by paired Student's t-test after checking for normality. The level of statistical significance was set at P < 0.05. Results and Conclusions. The tubes of the different brands evaluated can represent a clinically relevant source of variation only for mean platelet volume (MPV) and platelet distribution width (PDW). Basically, our validation will permit laboratory or hospital managers to select among the validated brands of vacuum tubes according to their own technical or economic reasons for routine hematological tests. PMID:22888448

  10. External cooling methods for treatment of fever in adults: a systematic review.

    PubMed

    Chan, E Y; Chen, W T; Assam, P N

    It is unclear if the use of external cooling to treat fever contributes to better patient outcomes. Despite this, it is a common practice to treat febrile patients using external cooling methods alone or in combination with pharmacological antipyretics. The objective of this systematic review was to evaluate the effectiveness and complications of external cooling methods in febrile adults in acute care settings. We included adults admitted to acute care settings who developed elevated body temperature. We considered any external cooling method compared to no cooling. We considered randomised controlled trials (RCTs), quasi-randomised trials and controlled trials with concurrent control groups. SEARCH STRATEGY: We searched relevant published or unpublished studies up to October 2009 regardless of language. We searched major databases, reference lists, bibliographies of all relevant articles, and contacted experts in the field for additional studies. Two reviewers independently screened titles and abstracts, and retrieved all potentially relevant studies. Two reviewers independently conducted the assessment of methodological quality of included studies. The results of studies were quantitatively summarised where appropriate. Relative risks or weighted mean differences and their 95% confidence intervals were calculated using the random effects model in Review Manager 5. For each pooled comparison, heterogeneity was assessed using the chi-squared test at the 5% level of statistical significance, with the I² statistic used to assess the impact of statistical heterogeneity on study results. Where statistical summary was not appropriate or possible, the findings were summarised in narrative form. We found six RCTs that compared the effectiveness and complications of external cooling methods against no external cooling. There was wide variation in the outcome measures between the included trials. We performed meta-analyses on data from two RCTs totalling 356 patients testing external cooling combined with antipyretics versus antipyretics alone, for the resolution of fever. The results did not show a statistically significant reduction in fever (relative risk 1.12, 95% CI 0.95 to 1.31; P = 0.35; I² = 0%). The evidence from four trials suggested that there was no difference in the mean drop in body temperature after treatment initiation between the external cooling and no cooling groups. The results of most other outcomes also did not demonstrate a statistically significant difference. However, summarising the results of five trials consisting of 371 patients found that the external cooling group was more likely to shiver than the no cooling group (relative risk 6.37, 95% CI 2.01 to 20.11; P = 0.61; I² = 0%). Overall, this review suggested that external cooling methods (whether used alone or in combination with pharmacologic methods) were not effective in treating fever among adults admitted to acute care settings, yet they were associated with a higher incidence of shivering. These results should be interpreted in light of the methodological limitations of the available trials. Given the currently available evidence, the routine use of external cooling methods to treat fever in adults may not be warranted until further evidence is available. They could be considered for patients whose conditions cannot tolerate even a slight increase in temperature or who request them. Whenever they are used, shivering should be prevented. Well-designed, adequately powered, randomised trials comparing external cooling methods against no cooling are needed.
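
    The pooled estimates quoted above are relative risks with 95% confidence intervals. For a single trial the computation from a 2x2 table is straightforward, as sketched below with invented counts; a full random-effects pooling across trials would add inverse-variance weighting, which is not shown.

        # Sketch: relative risk and Wald-type 95% CI from one trial's 2x2 table (invented counts).
        import math

        events_cooling, n_cooling = 60, 180     # outcome events / total, external cooling arm
        events_control, n_control = 52, 176     # outcome events / total, antipyretics-only arm

        rr = (events_cooling / n_cooling) / (events_control / n_control)
        se_log_rr = math.sqrt(1 / events_cooling - 1 / n_cooling + 1 / events_control - 1 / n_control)
        ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
        ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
        print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")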

  11. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    PubMed

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  12. Regional projection of climate impact indices over the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Casanueva, Ana; Frías, M. Dolores; Herrera, Sixto; Bedia, Joaquín; San Martín, Daniel; Gutiérrez, José Manuel; Zaninovic, Ksenija

    2014-05-01

    Climate Impact Indices (CIIs) are being increasingly used in different socioeconomic sectors to transfer information about climate change impacts and risks to stakeholders. CIIs are typically based on different weather variables such as temperature, wind speed, precipitation or humidity and comprise, in a single index, the relevant meteorological information for the particular impact sector (in this study wildfires and tourism). This dependence on several climate variables poses important limitations to the application of statistical downscaling techniques, since physical consistency among variables is required in most cases to obtain reliable local projections. The present study assesses the suitability of the "direct" downscaling approach, in which the downscaling method is directly applied to the CII. In particular, for illustrative purposes, we consider two popular indices used in the wildfire and tourism sectors, the Fire Weather Index (FWI) and the Physiological Equivalent Temperature (PET), respectively. As an example, two case studies are analysed over two representative Mediterranean regions of interest for the EU CLIM-RUN project: continental Spain for the FWI and Croatia for the PET. Results obtained with this "direct" downscaling approach are similar to those found from the application of the statistical downscaling to the individual meteorological drivers prior to the index calculation ("component" downscaling) thus, a wider range of statistical downscaling methods could be used. As an illustration, future changes in both indices are projected by applying two direct statistical downscaling methods, analogs and linear regression, to the ECHAM5 model. Larger differences were found between the two direct statistical downscaling approaches than between the direct and the component approaches with a single downscaling method. While these examples focus on particular indices and Mediterranean regions of interest for CLIM-RUN stakeholders, the same study could be extended to other indices and regions.

  13. Impact of surgery for endometriomas on pregnancy outcomes following in vitro fertilization-intracytoplasmic sperm injection. Who should be the preferred laparoscopists: gynecologists or reproductive surgeons?

    PubMed

    Cai, He; Guan, Jing; Shen, Huan; Han, Hongjing; Yu, Xiaoming

    2017-08-01

    To investigate whether laparoscopic excision of ovarian endometriomas performed by gynecologists or by reproductive surgeons has different effects on in vitro fertilization-intracytoplasmic sperm injection results. Retrospective case control study. Relevant information was collected from the electronic records of women who underwent IVF/ICSI from 01/01/2013 to 30/12/2015 in our unit. The study group consisted of 35 women who previously had laparoscopic endometrioma excision by reproductive surgeons in our unit; the control group included 36 patients who underwent surgery for endometriomas by gynecologists in our hospital. There were slightly higher antral follicle counts (AFC) and a higher pregnancy rate in the study group, although the differences did not reach statistical significance. For patients over 35 years old, there were more oocytes retrieved, more mature oocytes and more two-pronuclear (2PN) zygotes in the study group than in the control group, although the observed differences did not reach statistical significance. Electrocautery is more deleterious to ovarian reserve than hemostatic suture. In procedures for patients who wish to conceive, surgeons should preferentially use a hemostatic suturing technique.

  14. Analysis of covariance as a remedy for demographic mismatch of research subject groups: some sobering simulations.

    PubMed

    Adams, K M; Brown, G G; Grant, I

    1985-08-01

    Analysis of Covariance (ANCOVA) is often used in neuropsychological studies to effect ex-post-facto adjustment of performance variables amongst groups of subjects mismatched on some relevant demographic variable. This paper reviews some of the statistical assumptions underlying this usage. In an attempt to illustrate the complexities of this statistical technique, three sham studies using actual patient data are presented. These staged simulations have varying relationships between group test performance differences and levels of covariate discrepancy. The results were robust and consistent in their nature, and were held to support the wisdom of previous cautions by statisticians concerning the employment of ANCOVA to justify comparisons between incomparable groups. ANCOVA should not be used in neuropsychological research to equate groups unequal on variables such as age and education or to exert statistical control whose objective is to eliminate consideration of the covariate as an explanation for results. Finally, the report advocates by example the use of simulation to further our understanding of neuropsychological variables.
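
    The ANCOVA adjustment discussed above corresponds to a linear model in which the group factor and the covariate are entered together. The sketch below shows that form with statsmodels on synthetic data; the variable names are assumptions, and the point made in the abstract is precisely that such adjustment cannot make systematically mismatched groups comparable.

        # Sketch: ANCOVA as a linear model with a group factor plus a covariate (synthetic data).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 120
        group = np.repeat(["patient", "control"], n // 2)
        age = np.where(group == "patient", rng.normal(55, 8, n), rng.normal(45, 8, n))  # mismatched covariate
        score = 100 - 0.5 * age + rng.normal(0, 5, n)    # performance driven by age, not by group

        df = pd.DataFrame({"score": score, "group": group, "age": age})
        model = smf.ols("score ~ C(group) + age", data=df).fit()
        print(model.summary().tables[1])                 # adjusted group effect and covariate slope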

  15. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  16. Test of the statistical model in ⁹⁶Mo with the BaF₂ γ calorimeter DANCE array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheets, S. A.; Mitchell, G. E.; Agvaanluvsan, U.

    2009-02-15

    The γ-ray cascades following the ⁹⁵Mo(n,γ)⁹⁶Mo reaction were studied with the γ calorimeter DANCE (Detector for Advanced Neutron Capture Experiments) consisting of 160 BaF₂ scintillation detectors at the Los Alamos Neutron Science Center. The γ-ray energy spectra for different multiplicities were measured for s- and p-wave resonances below 2 keV. The shapes of these spectra were found to be in very good agreement with simulations using the DICEBOX statistical model code. The relevant model parameters used for the level density and photon strength functions were identical with those that provided the best fit of the data from a recent measurement of the thermal ⁹⁵Mo(n,γ)⁹⁶Mo reaction with the two-step-cascade method. The reported results strongly suggest that the extreme statistical model works very well in the mass region near A=100.

  17. Extending Working Life: Which Competencies are Crucial in Near-Retirement Age?

    PubMed

    Wiktorowicz, Justyna

    2018-01-01

    Nowadays, one of the most important economic and social phenomena is population ageing. Due to the low activity rate of older people, one of the most important challenges is to take various actions involving active ageing, which is supposed to extend working life and, along with it, improve the competencies of older people. The aim of this paper is to evaluate the relevance of different competencies for extending working life, limiting the analysis to Poland. The paper also assesses the competencies of mature Polish people (aged 50+, but still of working age). In the statistical analysis, I used logistic regression as well as descriptive statistics and appropriate statistical tests. The results show that among the actions aimed at extending working life, the most important are those related to lifelong learning, targeted at improving the competencies of the older generation. The competencies (both soft and hard) of people aged 50+ are more important than their formal education.

  18. Automated MRI parcellation of the frontal lobe

    PubMed Central

    Ranta, Marin E.; Chen, Min; Crocetti, Deana; Prince, Jerry L.; Subramaniam, Krish; Fischl, Bruce; Kaufmann, Walter E.; Mostofsky, Stewart H.

    2014-01-01

    Examination of associations between specific disorders and physical properties of functionally relevant frontal lobe sub-regions is a fundamental goal in neuropsychiatry. Here we present and evaluate automated methods of frontal lobe parcellation with the programs FreeSurfer(FS) and TOADS-CRUISE(T-C), based on the manual method described in Ranta et al. (2009) in which sulcal-gyral landmarks were used to manually delimit functionally relevant regions within the frontal lobe: i.e., primary motor cortex, anterior cingulate, deep white matter, premotor cortex regions (supplementary motor complex, frontal eye field and lateral premotor cortex) and prefrontal cortex (PFC) regions (medial PFC, dorsolateral PFC, inferior PFC, lateral orbitofrontal cortex (OFC) and medial OFC). Dice's coefficient, a measure of overlap, and percent volume difference were used to measure the reliability between manual and automated delineations for each frontal lobe region. For FS, mean Dice's coefficient for all regions was 0.75 and percent volume difference was 21.2%. For T-C the mean Dice's coefficient was 0.77 and the mean percent volume difference for all regions was 20.2%. These results, along with a high degree of agreement between the two automated methods (mean Dice's coefficient = 0.81, percent volume difference = 12.4%) and a proof-of-principle group difference analysis that highlights the consistency and sensitivity of the automated methods, indicate that the automated methods are valid techniques for parcellation of the frontal lobe into functionally relevant sub-regions. Thus, the methodology has the potential to increase efficiency, statistical power and reproducibility for population analyses of neuropsychiatric disorders with hypothesized frontal lobe contributions. PMID:23897577
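
    Dice's coefficient and the percent volume difference between two segmentations of the same region can be computed directly from binary masks, as in the sketch below; the masks are small synthetic stand-ins for manual and automated parcellations.

        # Sketch: Dice overlap and percent volume difference between two binary segmentations.
        import numpy as np

        rng = np.random.default_rng(3)
        manual = rng.random((32, 32, 32)) > 0.6        # stand-in for a manually delimited region
        auto = manual.copy()
        auto[rng.random(auto.shape) < 0.05] ^= True    # flip ~5% of voxels to mimic disagreement

        intersection = np.logical_and(manual, auto).sum()
        dice = 2 * intersection / (manual.sum() + auto.sum())
        pct_vol_diff = abs(int(manual.sum()) - int(auto.sum())) / manual.sum() * 100
        print(f"Dice = {dice:.3f}, percent volume difference = {pct_vol_diff:.1f}%")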

  19. A Double-Blind Placebo-Controlled Randomized Clinical Trial With Magnesium Oxide to Reduce Intrafraction Prostate Motion for Prostate Cancer Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lips, Irene M., E-mail: i.m.lips@umcutrecht.nl; Gils, Carla H. van; Kotte, Alexis N.T.J.

    2012-06-01

    Purpose: To investigate whether magnesium oxide during external-beam radiotherapy for prostate cancer reduces intrafraction prostate motion in a double-blind, placebo-controlled randomized trial. Methods and Materials: At the Department of Radiotherapy, prostate cancer patients scheduled for intensity-modulated radiotherapy (77 Gy in 35 fractions) using fiducial marker-based position verification were randomly assigned to receive magnesium oxide (500 mg twice a day) or placebo during radiotherapy. The primary outcome was the proportion of patients with clinically relevant intrafraction prostate motion, defined as the proportion of patients who demonstrated in ≥50% of the fractions an intrafraction motion outside a range of 2 mm. Secondary outcome measures included quality of life and acute toxicity. Results: In total, 46 patients per treatment arm were enrolled. The primary endpoint did not show a statistically significant difference between the treatment arms, with a percentage of patients with clinically relevant intrafraction motion of 83% in the magnesium oxide arm as compared with 80% in the placebo arm (p = 1.00). Concerning the secondary endpoints, exploratory analyses demonstrated a trend towards worsened quality of life and slightly more toxicity in the magnesium oxide arm than in the placebo arm; however, these differences were not statistically significant. Conclusions: Magnesium oxide is not effective in reducing the intrafraction prostate motion during external-beam radiotherapy, and therefore there is no indication to use it in clinical practice for this purpose.

  20. Perception of randomness: On the time of streaks.

    PubMed

    Sun, Yanlong; Wang, Hongbin

    2010-12-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the most delayed pattern for its first occurrence. It is argued that when time is of essence, how often a pattern is to occur (mean time, or, frequency) and when a pattern is to first occur (waiting time) are different questions and bear different psychological relevance. The waiting time statistics may provide a quantitative measure to the psychological distance when people are expecting a probabilistic event, and such measure is consistent with both of the representativeness and availability heuristics in people's perception of randomness. We discuss some of the recent empirical findings and suggest that people's judgment and generation of random sequences may be guided by their actual experiences of the waiting time statistics. Published by Elsevier Inc.
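
    The distinction drawn here between how often a pattern occurs and when it first occurs is easy to check by simulation: in fair coin flips, HH and HT each occur on average once every 4 flips, yet the mean wait for the first HH is 6 flips versus 4 flips for HT. The sketch below reproduces those waiting times.

        # Sketch: mean waiting time until the first occurrence of a pattern in fair coin flips.
        import random

        def waiting_time(pattern, rng):
            """Number of flips until `pattern` (e.g. 'HH') first appears."""
            history = ""
            while not history.endswith(pattern):
                history += rng.choice("HT")
            return len(history)

        rng = random.Random(4)
        trials = 100_000
        for pattern in ("HH", "HT"):
            mean_wait = sum(waiting_time(pattern, rng) for _ in range(trials)) / trials
            print(f"mean waiting time for {pattern}: {mean_wait:.2f} flips")   # about 6.0 vs 4.0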

  1. Preferences of parents of children with autism spectrum disorders concerning oral health and dental treatment.

    PubMed

    Capozza, Lauren E; Bimstein, Enrique

    2012-01-01

    The purpose of this study was to describe the preferences of parents of children with or without autism spectrum disorders (ASDs) concerning oral health and dental treatment. A questionnaire that queried demographics, dental needs, perceptions of dental materials and treatments, and parental concerns regarding relevant ASD issues in medicine and dentistry was distributed in the waiting rooms of a pediatric dental clinic and an autism clinic to parents or legal guardians of children undergoing treatment. The responses for the children with or without ASDs were compared. Statistically significant differences between the ASD (N=23) and non-ASD (N=33) groups existed for: parental age; frequency of dental visits per year; supervision of tooth-brushing; and use of a fluoridated toothpaste. No statistically significant differences were found in attitudes toward amalgam, composite, fluoride products, or behavior guidance techniques. Parents or legal guardians of children with autism spectrum disorders are likely to have special beliefs and preferences regarding dental materials and dental behavior guidance.

  2. minet: A R/Bioconductor package for inferring large transcriptional networks using mutual information.

    PubMed

    Meyer, Patrick E; Lafitte, Frédéric; Bontempi, Gianluca

    2008-10-29

    This paper presents the R/Bioconductor package minet (version 1.1.6) which provides a set of functions to infer mutual information networks from a dataset. Once fed with a microarray dataset, the package returns a network where nodes denote genes, edges model statistical dependencies between genes and the weight of an edge quantifies the statistical evidence of a specific (e.g., transcriptional) gene-to-gene interaction. Four different entropy estimators are made available in the package minet (empirical, Miller-Madow, Schurmann-Grassberger and shrink) as well as four different inference methods, namely relevance networks, ARACNE, CLR and MRNET. Also, the package integrates accuracy assessment tools, like F-scores, PR-curves and ROC-curves, in order to compare the inferred network with a reference one. The package minet provides a series of tools for inferring transcriptional networks from microarray data. It is freely available from the Comprehensive R Archive Network (CRAN) as well as from the Bioconductor website.

  3. Statistical characterization of fluctuations of a laser beam transmitted through a random air-water interface: new results from a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Majumdar, Arun K.; Land, Phillip; Siegenthaler, John

    2014-10-01

    New results for characterizing laser intensity fluctuation statistics of a laser beam transmitted through a random air-water interface relevant to underwater communications are presented. A laboratory watertank experiment is described to investigate the beam wandering effects of the transmitted beam. Preliminary results from the experiment provide information about histograms of the probability density functions of intensity fluctuations for different wind speeds measured by a CMOS camera for the transmitted beam. Angular displacements of the centroids of the fluctuating laser beam generate the beam wander effects. This research develops a probabilistic model for optical propagation at the random air-water interface for a transmission case under different wind speed conditions. Preliminary results for bit-error-rate (BER) estimates as a function of fade margin for an on-off keying (OOK) optical communication through the air-water interface are presented for a communication system where a random air-water interface is a part of the communication channel.

  4. Stochastic Growth of Ion Cyclotron And Mirror Waves In Earth's Magnetosheath

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Grubits, K. A.

    2001-01-01

    Electromagnetic ion cyclotron and mirror waves in Earth's magnetosheath are bursty, have widely variable fields, and are unexpectedly persistent, properties difficult to reconcile with uniform secular growth. Here it is shown for specific periods that stochastic growth theory (SGT) quantitatively accounts for the functional form of the wave statistics and qualitatively explains the wave properties. The wave statistics are inconsistent with uniform secular growth or self-organized criticality, but nonlinear processes sometimes play a role at high fields. The results show SGT's relevance near marginal stability and suggest that it is widely relevant to space and astrophysical plasmas.

  5. Arthroscopic Debridement for Primary Degenerative Osteoarthritis of the Elbow Leads to Significant Improvement in Range of Motion and Clinical Outcomes: A Systematic Review.

    PubMed

    Sochacki, Kyle R; Jack, Robert A; Hirase, Takashi; McCulloch, Patrick C; Lintner, David M; Liberman, Shari R; Harris, Joshua D

    2017-12-01

    The purpose of this investigation was to determine whether arthroscopic debridement of primary elbow osteoarthritis results in statistically significant and clinically relevant improvement in (1) elbow range of motion and (2) clinical outcomes with (3) low complication and reoperation rates. A systematic review was registered with PROSPERO and performed using PRISMA guidelines. Databases were searched for studies that investigated the outcomes of arthroscopic debridement for the treatment of primary osteoarthritis of the elbow in adult human patients. Study methodological quality was analyzed. Studies that included post-traumatic arthritis were excluded. Elbow motion and all elbow-specific patient-reported outcome scores were eligible for analysis. Comparisons between preoperative and postoperative values from each study were made using 2-sample Z-tests (http://in-silico.net/tools/statistics/ztest) using a P value < .05. Nine articles (209 subjects, 213 elbows, 187 males, 22 females, mean age 45.7 ± 7.1 years, mean follow-up 41.7 ± 16.3 months; 75% right, 25% left; 79% dominant elbow, 21% nondominant) were analyzed. Elbow extension (23.4°-10.7°, Δ 12.7°), flexion (115.9°-128.7°, Δ 12.8°), and global arc of motion (94.5°-117.6°, Δ 23.1°) had statistically significant and clinically relevant improvement following arthroscopic debridement (P < .0001 for all). There was also a statistically significant (P < .0001) and clinically relevant improvement in the Mayo Elbow Performance Score (60.7-84.6, Δ 23.9) postoperatively. Six patients (2.8%) had postoperative complications. Nine (4.2%) underwent reoperation. Elbow arthroscopic debridement for primary degenerative osteoarthritis results in statistically significant and clinically relevant improvement in elbow range of motion and clinical outcomes with low complication and reoperation rates. Systematic review of level IV studies. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
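
    The comparison of aggregated preoperative and postoperative means described above can be reproduced with a standard two-sample Z-test on summary statistics. The sketch below is only an illustration of that calculation: the flexion means and elbow count are taken from the abstract, but the standard deviations are hypothetical placeholders.

    ```python
    # Two-sample Z-test from summary statistics (means, SDs, sample sizes).
    from math import sqrt
    from scipy.stats import norm

    def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
        """Two-sided z-test comparing two means given only summary statistics."""
        se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
        z = (mean1 - mean2) / se
        return z, 2 * norm.sf(abs(z))

    # Pre/post flexion means and elbow counts from the abstract; the SDs are hypothetical.
    z, p = two_sample_z(mean1=115.9, sd1=10.0, n1=213, mean2=128.7, sd2=9.0, n2=213)
    print(f"z = {z:.2f}, two-sided p = {p:.2e}")
    ```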

  6. An analysis of the positional distribution of DNA motifs in promoter regions and its biological relevance.

    PubMed

    Casimiro, Ana C; Vinga, Susana; Freitas, Ana T; Oliveira, Arlindo L

    2008-02-07

    Motif finding algorithms have developed in their ability to use computationally efficient methods to detect patterns in biological sequences. However, the posterior classification of the output still suffers from some limitations, which makes it difficult to assess the biological significance of the motifs found. Previous work has highlighted the existence of positional bias of motifs in the DNA sequences, which might indicate not only that the pattern is important, but also provide hints of the positions where these patterns occur preferentially. We propose to integrate position uniformity tests and over-representation tests to improve the accuracy of the classification of motifs. Using artificial data, we have compared three different statistical tests (Chi-Square, Kolmogorov-Smirnov and a Chi-Square bootstrap) to assess whether a given motif occurs uniformly in the promoter region of a gene. Using the test that performed best on this dataset, we proceeded to study the positional distribution of several well-known cis-regulatory elements in the promoter sequences of different organisms (S. cerevisiae, H. sapiens, D. melanogaster, E. coli and several dicotyledonous plants). The results show that position conservation is relevant for the transcriptional machinery. We conclude that many biologically relevant motifs appear heterogeneously distributed in the promoter region of genes, and therefore, that non-uniformity is a good indicator of biological relevance and can be used to complement over-representation tests commonly used. In this article we present the results obtained for the S. cerevisiae data sets.
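
    The uniformity tests mentioned above can be illustrated in a few lines (a sketch with simulated positions, not the authors' code): the chi-square test bins the motif start positions and compares the counts with a flat expectation, while the Kolmogorov-Smirnov test compares the empirical position distribution with a uniform distribution over the promoter length.

    ```python
    # Position-uniformity tests for hypothetical motif start positions in a 500-bp promoter.
    import numpy as np
    from scipy.stats import chisquare, kstest

    rng = np.random.default_rng(0)
    promoter_len = 500
    # Hypothetical, positionally biased occurrences clustered near the start of the promoter.
    positions = rng.normal(loc=80, scale=30, size=200).clip(0, promoter_len - 1)

    # Chi-square test: bin the positions and compare against equal expected counts.
    counts, _ = np.histogram(positions, bins=10, range=(0, promoter_len))
    chi2_stat, chi2_p = chisquare(counts)

    # Kolmogorov-Smirnov test against the uniform distribution on [0, promoter_len].
    ks_stat, ks_p = kstest(positions, "uniform", args=(0, promoter_len))

    print(f"chi-square p = {chi2_p:.3g}, KS p = {ks_p:.3g}")  # both should reject uniformity here
    ```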

  7. [The GIPSY-RECPAM model: a versatile approach for integrated evaluation in cardiologic care].

    PubMed

    Carinci, F

    2009-01-01

    The tree-structured methodology applied in the GISSI-PSICOLOGIA project, although performed in the framework of the earliest GISSI studies, represents a powerful tool to analyze different aspects of cardiologic care. The GISSI-PSICOLOGIA project has delivered a novel methodology based on the joint application of psychometric tools and sophisticated statistical techniques. Its prospective use could allow building effective epidemiological models relevant to the prognosis of the cardiologic patient. The various features of the RECPAM method allow a versatile use in the framework of modern e-health projects. The study used the Cognitive Behavioral Assessment H Form (CBA-H) psychometric scales. The potential for its future application in the framework of Italian cardiology is relevant and particularly indicated to assist planning of systems for integrated care and routine evaluation of the cardiologic patient.

  8. A bootstrap based Neyman-Pearson test for identifying variable importance.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-04-01

    Selection of the most informative features, leading to a small loss on future data, is arguably one of the most important steps in classification, data analysis and model selection. Several feature selection (FS) algorithms are available; however, due to noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining if a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.
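
    The general idea of wrapping a feature-selection criterion in a resampling-based relevance test can be sketched as follows. This is a loose illustration under simplifying assumptions, not the authors' Neyman-Pearson derivation: each feature's selection frequency over bootstrap resamples is compared with the chance level expected for an irrelevant feature.

    ```python
    # Bootstrap wrapper around a feature-selection criterion (mutual information here).
    import numpy as np
    from scipy.stats import binomtest
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n, d, k = 300, 20, 5                        # samples, features, features kept per resample
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

    n_boot = 100
    selected = np.zeros(d)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # bootstrap resample
        mi = mutual_info_classif(X[idx], y[idx], random_state=0)
        selected[np.argsort(mi)[-k:]] += 1      # top-k features under the FS criterion

    # For an irrelevant feature, landing in the top k is roughly Bernoulli(k / d).
    for j in range(d):
        p = binomtest(int(selected[j]), n_boot, k / d, alternative="greater").pvalue
        if p < 0.01:
            print(f"feature {j}: selected {int(selected[j])}/{n_boot} times, p = {p:.2e}")
    ```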

  9. Measuring medical students' motivation to learning anatomy by cadaveric dissection.

    PubMed

    Abdel Meguid, Eiman M; Khalil, Mohammed K

    2017-07-01

    Motivation and learning are inter-related. It is well known that motivating learners is clearly a complex endeavor, which can be influenced by the educational program and the learning environment. Limited research has been conducted to examine students' motivation as a method to assess the effectiveness of dissection in medical education. This study aimed to assess and analyze students' motivation following their dissection experience. A 29-item survey was developed based on the Attention, Relevance, Confidence, and Satisfaction model of motivation. Descriptive statistics were undertaken to describe students' motivation in response to the dissection experience. T-test and ANOVA were used to compare differences in motivational scores between gender and educational characteristics of students. Dissection activities appear to promote students' motivation. Gender difference was statistically significant as males were more motivated by the dissection experience than females. Scores also differed significantly between students with different knowledge of anatomy. The study is an important step in the motivational design to improve students' motivation to learn. The outcome of this study provides guidance to the selection of specific strategies to increase motivation by generating motivational strategies/tactics to facilitate learning. Anat Sci Educ 10: 363-371. © 2016 American Association of Anatomists.

  10. Some Tests of Randomness with Applications

    DTIC Science & Technology

    1981-02-01

    freedom. For further details, the reader is referred to Gnanadesikan (1977, p. 169) wherein other relevant tests are also given. Graphical tests, as...sample from a gamma distribution. J. Am. Statist. Assoc. 71, 480-7. Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate

  11. Computer-aided auditing of prescription drug claims.

    PubMed

    Iyengar, Vijay S; Hermiz, Keith B; Natarajan, Ramesh

    2014-09-01

    We describe a methodology for identifying and ranking candidate audit targets from a database of prescription drug claims. The relevant audit targets may include various entities such as prescribers, patients and pharmacies, who exhibit certain statistical behavior indicative of potential fraud and abuse over the prescription claims during a specified period of interest. Our overall approach is consistent with related work in statistical methods for detection of fraud and abuse, but has a relative emphasis on three specific aspects: first, based on the assessment of domain experts, certain focus areas are selected and data elements pertinent to the audit analysis in each focus area are identified; second, specialized statistical models are developed to characterize the normalized baseline behavior in each focus area; and third, statistical hypothesis testing is used to identify entities that diverge significantly from their expected behavior according to the relevant baseline model. The application of this overall methodology to a prescription claims database from a large health plan is considered in detail.
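
    A minimal sketch of the hypothesis-testing step described above, with hypothetical entities and a deliberately simplified baseline (a single claim rate rather than specialized per-focus-area models): each prescriber's claim count is scored against its expected value with a one-sided Poisson test, and the smallest p-values are ranked as candidate audit targets.

    ```python
    # Flag entities whose claim counts diverge from a baseline expectation.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    n_prescribers = 1000
    exposure = rng.uniform(50, 500, size=n_prescribers)   # e.g., patient-months covered
    baseline_rate = 0.02                                   # expected claims per unit of exposure
    claims = rng.poisson(baseline_rate * exposure)
    claims[:5] *= 4                                        # inject a few aberrant prescribers

    expected = baseline_rate * exposure
    pvals = poisson.sf(claims - 1, expected)               # P(X >= observed) under the baseline
    for j in np.argsort(pvals)[:10]:                       # rank candidate audit targets
        print(f"prescriber {j}: {claims[j]} claims vs {expected[j]:.1f} expected, p = {pvals[j]:.2e}")
    ```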

  12. Determinants of Judgments of Explanatory Power: Credibility, Generality, and Statistical Relevance.

    PubMed

    Colombo, Matteo; Bucher, Leandra; Sprenger, Jan

    2017-01-01

    Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability and causality. Our study advances this growing literature at the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by (i) the prior credibility of an explanatory hypothesis, (ii) the causal framing of the hypothesis, (iii) the perceived generalizability of the explanation, and (iv) the relation of statistical relevance between hypothesis and evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations are not susceptible to causal framing effects, but they are sensitive to the effects of normatively relevant factors: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and the interaction between these factors. Second, they show the close relationship between prior beliefs and explanatory power. Third, they elucidate the nature of abductive reasoning.

  13. Determinants of Judgments of Explanatory Power: Credibility, Generality, and Statistical Relevance

    PubMed Central

    Colombo, Matteo; Bucher, Leandra; Sprenger, Jan

    2017-01-01

    Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability and causality. Our study advances this growing literature at the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by (i) the prior credibility of an explanatory hypothesis, (ii) the causal framing of the hypothesis, (iii) the perceived generalizability of the explanation, and (iv) the relation of statistical relevance between hypothesis and evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations are not susceptible to causal framing effects, but they are sensitive to the effects of normatively relevant factors: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and the interaction between these factors. Second, they show the close relationship between prior beliefs and explanatory power. Third, they elucidate the nature of abductive reasoning. PMID:28928679

  14. A Response to White and Gorard: Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

    ERIC Educational Resources Information Center

    Nicholson, James; Ridgway, Jim

    2017-01-01

    White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis for inferential statistical tests. This paper comments briefly on matters on which we broadly agree with them, and more fully on matters where we disagree. We agree that too little…

  15. Randomized clinical trials in implant therapy: relationships among methodological, statistical, clinical, paratextual features and number of citations.

    PubMed

    Nieri, Michele; Clauser, Carlo; Franceschi, Debora; Pagliaro, Umberto; Saletta, Daniele; Pini-Prato, Giovanpaolo

    2007-08-01

    The aim of the present study was to investigate the relationships among reported methodological, statistical, clinical and paratextual variables of randomized clinical trials (RCTs) in implant therapy, and their influence on subsequent research. The material consisted of the RCTs in implant therapy published through the end of the year 2000. Methodological, statistical, clinical and paratextual features of the articles were assessed and recorded. The perceived clinical relevance was subjectively evaluated by an experienced clinician on anonymous abstracts. The impact on research was measured by the number of citations found in the Science Citation Index. A new statistical technique (Structural learning of Bayesian Networks) was used to assess the relationships among the considered variables. Descriptive statistics revealed that the reported methodology and statistics of RCTs in implant therapy were defective. Follow-up of the studies was generally short. The perceived clinical relevance appeared to be associated with the objectives of the studies and with the number of published images in the original articles. The impact on research was related to the nationality of the involved institutions and to the number of published images. RCTs in implant therapy (until 2000) show important methodological and statistical flaws and may not be appropriate for guiding clinicians in their practice. The methodological and statistical quality of the studies did not appear to affect their impact on practice and research. Bayesian Networks suggest new and unexpected relationships among the methodological, statistical, clinical and paratextual features of RCTs.

  16. Reference value sensitivity of measures of unfair health inequality

    PubMed Central

    García-Gómez, Pilar; Schokkaert, Erik; Van Ourti, Tom

    2014-01-01

    Most politicians and ethical observers are not interested in pure health inequalities, as they want to distinguish between different causes of health differences. Measures of “unfair” inequality - direct unfairness and the fairness gap, but also the popular standardized concentration index - therefore neutralize the effects of what are considered to be “legitimate” causes of inequality. This neutralization is performed by putting a subset of the explanatory variables at reference values, e.g. their means. We analyze how the inequality ranking of different policies depends on the specific choice of reference values. We show with mortality data from the Netherlands that the problem is empirically relevant and we suggest a statistical method for fixing the reference values. PMID:24954998

  17. Diagnostic potential of real-time elastography (RTE) and shear wave elastography (SWE) to differentiate benign and malignant thyroid nodules: A systematic review and meta-analysis.

    PubMed

    Hu, Xiangdong; Liu, Yujiang; Qian, Linxue

    2017-10-01

    Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure the tissue strain, and it has been reported that the sensitivity and the specificity of elastography were better in differentiating between benign and malignant thyroid nodules than conventional technologies. Relevant articles were searched in multiple databases; the comparison of elasticity index (EI) was conducted with the Review Manager 5.0. Forest plots of the sensitivity and specificity and SROC curve of RTE and SWE were performed with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of articles; and to estimate possible publication bias, funnel plot was used and the Egger test was conducted. Finally, 22 articles which eventually satisfied the inclusion criteria were included in this study. After exclusions, 2106 benign and 613 malignant nodules remained. The meta-analysis suggested that the difference of EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference of specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules.

  18. Diagnostic potential of real-time elastography (RTE) and shear wave elastography (SWE) to differentiate benign and malignant thyroid nodules

    PubMed Central

    Hu, Xiangdong; Liu, Yujiang; Qian, Linxue

    2017-01-01

    Abstract Background: Real-time elastography (RTE) and shear wave elastography (SWE) are noninvasive and easily available imaging techniques that measure the tissue strain, and it has been reported that the sensitivity and the specificity of elastography were better in differentiating between benign and malignant thyroid nodules than conventional technologies. Methods: Relevant articles were searched in multiple databases; the comparison of elasticity index (EI) was conducted with the Review Manager 5.0. Forest plots of the sensitivity and specificity and SROC curve of RTE and SWE were performed with STATA 10.0 software. In addition, sensitivity analysis and bias analysis of the studies were conducted to examine the quality of articles; and to estimate possible publication bias, funnel plot was used and the Egger test was conducted. Results: Finally, 22 articles which eventually satisfied the inclusion criteria were included in this study. After exclusions, 2106 benign and 613 malignant nodules remained. The meta-analysis suggested that the difference of EI between benign and malignant nodules was statistically significant (SMD = 2.11, 95% CI [1.67, 2.55], P < .00001). The overall sensitivities of RTE and SWE were roughly comparable, whereas the difference of specificities between these 2 methods was statistically significant. In addition, a statistically significant difference in AUC between RTE and SWE was observed (P < .01). Conclusion: The specificity of RTE was statistically higher than that of SWE, which suggests that, compared with SWE, RTE may be more accurate in differentiating benign and malignant thyroid nodules. PMID:29068996

  19. On the statistical assessment of classifiers using DNA microarray data

    PubMed Central

    Ancona, N; Maglietta, R; Piepoli, A; D'Addabbo, A; Cotugno, R; Savino, M; Liuni, S; Carella, M; Pesole, G; Perri, F

    2006-01-01

    Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate decreases as the training set size increases, reaching its best performances with 35 training examples. In this case, RLS and SVM have error rates of e = 14% (p = 0.027) and e = 11% (p = 0.019). Concerning the number of genes, we found about 6000 genes (p < 0.05) correlated with the pathology, resulting from the signal-to-noise statistic. Moreover, the performances of the RLS and SVM classifiers do not change when 74% of the genes are used; they progressively decline to e = 16% (p < 0.05) when only 2 genes are employed. The biological relevance of a set of genes determined by our statistical analysis and the major roles they play in colorectal tumorigenesis are discussed. Conclusions The method proposed provides statistically significant answers to precise questions relevant for the diagnosis and prognosis of cancer. We found that, with as few as 15 examples, it is possible to train statistically significant classifiers for colon cancer diagnosis. As for the definition of the number of genes sufficient for a reliable classification of colon cancer, our results suggest that it depends on the accuracy required. PMID:16919171
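
    The significance assessment described above, a cross-validated error rate paired with a label-permutation test, can be sketched with scikit-learn on synthetic data of comparable shape (an illustration only, not the study's pipeline or data).

    ```python
    # Cross-validated error rate with a label-permutation test of its significance.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, permutation_test_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples, n_genes = 47, 2000                   # small-sample, high-dimensional setting
    X = rng.normal(size=(n_samples, n_genes))
    y = np.array([0] * 22 + [1] * 25)               # e.g., normal vs. tumor labels
    X[y == 1, :20] += 0.8                           # a few informative "genes"

    score, perm_scores, p_value = permutation_test_score(
        SVC(kernel="linear"), X, y,
        cv=StratifiedKFold(n_splits=5), n_permutations=200, random_state=0)

    print(f"cross-validated error = {1 - score:.2f}, permutation p-value = {p_value:.3f}")
    ```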

  20. Meta-analysis of gene-level associations for rare variants based on single-variant statistics.

    PubMed

    Hu, Yi-Juan; Berndt, Sonja I; Gustafsson, Stefan; Ganna, Andrea; Hirschhorn, Joel; North, Kari E; Ingelsson, Erik; Lin, Dan-Yu

    2013-08-08

    Meta-analysis of genome-wide association studies (GWASs) has led to the discoveries of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges in the meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p values of association tests and effect estimates) from participating studies. This simple strategy is possible because of an insight that multivariate statistics can be recovered from single-variant statistics, together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
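
    The central insight, that a gene-level statistic can be recovered from single-variant results plus the correlation matrix of those statistics, can be sketched for a simple burden-style test (an illustration under the stated assumptions, not the paper's full estimator).

    ```python
    # Gene-level burden-style statistic assembled from single-variant z-scores.
    import numpy as np
    from scipy.stats import norm

    z = np.array([1.8, 0.6, 2.1, -0.3])         # per-variant z-scores from single-variant analysis
    w = np.ones_like(z)                          # variant weights (equal weights here)
    R = np.array([[1.0, 0.2, 0.1, 0.0],          # correlation matrix of the single-variant
                  [0.2, 1.0, 0.3, 0.1],          # statistics, estimable from one participating
                  [0.1, 0.3, 1.0, 0.2],          # study or a public reference panel
                  [0.0, 0.1, 0.2, 1.0]])

    t = w @ z / np.sqrt(w @ R @ w)               # gene-level statistic, ~ N(0, 1) under the null
    p = 2 * norm.sf(abs(t))
    print(f"gene-level z = {t:.2f}, p = {p:.3f}")
    ```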

  1. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  2. RAId_aPS: MS/MS Analysis with Multiple Scoring Functions and Spectrum-Specific Statistics

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2010-01-01

    Statistically meaningful comparison/combination of peptide identification results from various search methods is impeded by the lack of a universal statistical standard. Providing an E-value calibration protocol, we demonstrated earlier the feasibility of translating either the score or heuristic E-value reported by any method into the textbook-defined E-value, which may serve as the universal statistical standard. This protocol, although robust, may lose spectrum-specific statistics and might require a new calibration when changes in experimental setup occur. To mitigate these issues, we developed a new MS/MS search tool, RAId_aPS, that is able to provide spectrum-specific E-values for additive scoring functions. Given a selection of scoring functions out of RAId score, K-score, Hyperscore and XCorr, RAId_aPS generates the corresponding score histograms of all possible peptides using dynamic programming. Using these score histograms to assign E-values enables a calibration-free protocol for accurate significance assignment for each scoring function. RAId_aPS features four different modes: (i) compute the total number of possible peptides for a given molecular mass range, (ii) generate the score histogram given a MS/MS spectrum and a scoring function, (iii) reassign E-values for a list of candidate peptides given a MS/MS spectrum and the scoring functions chosen, and (iv) perform database searches using selected scoring functions. In modes (iii) and (iv), RAId_aPS is also capable of combining results from different scoring functions using spectrum-specific statistics. The web link is http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/raid_aps/index.html. Relevant binaries for Linux, Windows, and Mac OS X are available from the same page. PMID:21103371

  3. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, the statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix in terms of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
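
    The behavior of this statistic can also be explored by Monte Carlo simulation. The sketch below (illustrative only, not the paper's closed-form analysis) draws zero-mean circular complex Gaussian samples, forms the sample covariance matrix from a small number of looks, and records its maximum eigenvalue.

    ```python
    # Monte Carlo look at the maximum eigenvalue of a sample covariance matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, trials = 3, 9, 5000                  # channels, looks per estimate, Monte Carlo runs

    max_eigs = np.empty(trials)
    for t in range(trials):
        # Zero-mean circular complex Gaussian samples with identity true covariance.
        x = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
        sample_cov = x @ x.conj().T / n        # sample covariance estimate from n looks
        max_eigs[t] = np.linalg.eigvalsh(sample_cov).max()

    print(f"mean of the maximum eigenvalue ≈ {max_eigs.mean():.2f} (true maximum eigenvalue = 1.0)")
    ```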

  4. Nurse versus physician-led care for the management of asthma.

    PubMed

    Kuethe, Maarten C; Vaessen-Verberne, Anja A P H; Elbers, Roy G; Van Aalderen, Wim M C

    2013-02-28

    Asthma is the most common chronic disease in childhood and prevalence is also high in adulthood, thereby placing a considerable burden on healthcare resources. Therefore, effective asthma management is important to reduce morbidity and to optimise utilisation of healthcare facilities. To review the effectiveness of nurse-led asthma care provided by a specialised asthma nurse, a nurse practitioner, a physician assistant or an otherwise specifically trained nursing professional, working relatively independently from a physician, compared to traditional care provided by a physician. Our scope included all outpatient care for asthma, both in primary care and in hospital settings. We carried out a comprehensive search of databases including The Cochrane Library, MEDLINE and EMBASE to identify trials up to August 2012. Bibliographies of relevant papers were searched, and handsearching of relevant publications was undertaken to identify additional trials. Randomised controlled trials comparing nurse-led care versus physician-led care in asthma for the same aspect of asthma care. We used standard methodological procedures expected by The Cochrane Collaboration. Five studies on 588 adults and children were included concerning nurse-led care versus physician-led care. One study included 154 patients with uncontrolled asthma, while the other four studies included 434 patients with controlled or partly controlled asthma. The studies were of good methodological quality (although it is not possible to blind people giving or receiving the intervention to which group they are in). There was no statistically significant difference in the number of asthma exacerbations and asthma severity after treatment (duration of follow-up from six months to two years). Only one study had healthcare costs as an outcome parameter; no statistically significant differences were found. Although not a primary outcome, quality of life is a patient-important outcome and in the three trials on 380 subjects that reported on this outcome, there was no statistically significant difference (standardised mean difference (SMD) -0.03; 95% confidence interval (CI) -0.23 to 0.17). We found no significant difference between nurse-led care for patients with asthma compared to physician-led care for the outcomes assessed. Based on the relatively small number of studies in this review, nurse-led care may be appropriate in patients with well-controlled asthma. More studies in varied settings and among people with varying levels of asthma control are needed with data on adverse events and health-care costs.

  5. Statistically Validated Networks in Bipartite Complex Systems

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Lillo, Fabrizio; Piilo, Jyrki; Mantegna, Rosario N.

    2011-01-01

    Many complex systems present an intrinsic bipartite structure where elements of one set link to elements of the second set. In these complex systems, such as the system of actors and movies, elements of one set are qualitatively different than elements of the other set. The properties of these complex systems are typically investigated by constructing and analyzing a projected network on one of the two sets (for example the actor network or the movie network). Complex systems are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set, and this heterogeneity makes it very difficult to discriminate links of the projected network that are just reflecting system's heterogeneity from links relevant to unveil the properties of the system. Here we introduce an unsupervised method to statistically validate each link of a projected network against a null hypothesis that takes into account system heterogeneity. We apply the method to a biological, an economic and a social complex system. The method we propose is able to detect network structures which are very informative about the organization and specialization of the investigated systems, and identifies those relationships between elements of the projected network that cannot be explained simply by system heterogeneity. We also show that our method applies to bipartite systems in which different relationships might have different qualitative nature, generating statistically validated networks in which such difference is preserved. PMID:21483858
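
    The validation step can be illustrated with the hypergeometric null that underlies this kind of test (a minimal sketch with hypothetical numbers; the full method also handles heterogeneous relationships and applies a formal multiple-testing correction): a link between two actors in the projected network is kept only if their number of co-occurrences is unlikely under random matching given their individual degrees.

    ```python
    # Hypergeometric validation of one link in a projected bipartite network.
    from scipy.stats import hypergeom

    def link_pvalue(n_items, deg_i, deg_j, co_occurrences):
        """P(X >= co_occurrences) when deg_i draws are taken from n_items of which deg_j are 'marked'."""
        return hypergeom.sf(co_occurrences - 1, n_items, deg_j, deg_i)

    # Hypothetical actor-movie example: 5000 movies, actor A in 60, actor B in 40, 6 shared movies.
    p = link_pvalue(n_items=5000, deg_i=60, deg_j=40, co_occurrences=6)
    alpha, n_tests = 0.01, 10000               # e.g., Bonferroni correction over the tested links
    print(f"p = {p:.2e}; link validated: {p < alpha / n_tests}")
    ```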

  6. Rapid COJEC Induction Therapy for High-risk Neuroblastoma Patients - Cochrane Review.

    PubMed

    Peinemann, F; van Dalen, E C; Berthold, F

    2016-04-01

    Neuroblastoma is a rare malignant disease and patients with high-risk neuroblastoma have a poor prognosis. Rapid COJEC induction chemotherapy means (almost) the same total doses given within a shorter time period. In theory, rapid COJEC could reduce the risk of drug resistance and it has been considered as a potential candidate for improving the outcome. The objective was to evaluate effects of rapid COJEC compared to standard induction chemotherapy in patients with high-risk neuroblastoma. We searched the databases CENTRAL, MEDLINE, and EMBASE from inception to 11 November 2014 and included randomized controlled trials. We identified one relevant randomized controlled trial with 130 participants receiving rapid COJEC and 132 participants receiving standard OPEC/COJEC induction chemotherapy. There was no statistically significant difference between the treatment groups in complete response (risk ratio 0.99, 95% confidence interval 0.71 to 1.38, P=0.94) and treatment-related mortality (risk ratio 1.21, 95% confidence interval 0.33 to 4.39, P=0.77). A statistically significant difference in favor of the standard treatment arm was identified for the following early toxicities: febrile neutropenia, septicemia, and renal toxicity. The differences in complete response and treatment-related mortality between treatment alternatives were not statistically significantly different. Based on the currently available evidence, we are uncertain about the effects of rapid COJEC induction chemotherapy in patients with high-risk neuroblastoma. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Long-term variability of global statistical properties of epileptic brain networks

    NASA Astrophysics Data System (ADS)

    Kuhnert, Marie-Therese; Elger, Christian E.; Lehnertz, Klaus

    2010-12-01

    We investigate the influence of various pathophysiologic and physiologic processes on global statistical properties of epileptic brain networks. We construct binary functional networks from long-term, multichannel electroencephalographic data recorded from 13 epilepsy patients, and the average shortest path length and the clustering coefficient serve as global statistical network characteristics. For time-resolved estimates of these characteristics we observe large fluctuations over time, however, with some periodic temporal structure. These fluctuations can—to a large extent—be attributed to daily rhythms while relevant aspects of the epileptic process contribute only marginally. Particularly, we could not observe clear cut changes in network states that can be regarded as predictive of an impending seizure. Our findings are of particular relevance for studies aiming at an improved understanding of the epileptic process with graph-theoretical approaches.
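
    The two global characteristics used above can be computed with networkx on any binary (unweighted, undirected) functional network; the sketch below uses a random toy graph as a stand-in for one thresholded EEG-derived network.

    ```python
    # Global characteristics of a binary functional network.
    import networkx as nx

    # Random toy graph standing in for one thresholded functional network (20 channels).
    G = nx.erdos_renyi_graph(n=20, p=0.3, seed=0)

    if nx.is_connected(G):  # the average shortest path length requires a connected graph
        print(f"average shortest path length = {nx.average_shortest_path_length(G):.2f}")
        print(f"clustering coefficient       = {nx.average_clustering(G):.2f}")
    ```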

  8. Geomatic Methods for the Analysis of Data in the Earth Sciences: Lecture Notes in Earth Sciences, Vol. 95

    NASA Astrophysics Data System (ADS)

    Pavlis, Nikolaos K.

    Geomatics is a trendy term that has been used in recent years to describe academic departments that teach and research theories, methods, algorithms, and practices used in processing and analyzing data related to the Earth and other planets. Naming trends aside, geomatics could be considered as the mathematical and statistical “toolbox” that allows Earth scientists to extract information about physically relevant parameters from the available data and accompany such information with some measure of its reliability. This book is an attempt to present the mathematical-statistical methods used in data analysis within various disciplines—geodesy, geophysics, photogrammetry and remote sensing—from a unifying perspective that inverse problem formalism permits. At the same time, it allows us to stretch the relevance of statistical methods in achieving an optimal solution.

  9. The therapeutic effect of balneotherapy: evaluation of the evidence from randomised controlled trials.

    PubMed

    Falagas, M E; Zarkadoulia, E; Rafailidis, P I

    2009-07-01

    Systematic review. There is widespread popular belief that balneotherapy is effective in the treatment of various ailments. We searched PubMed (1950-2006), Scopus and the Cochrane library for randomised controlled trials (RCTs) examining the clinical effect of balneotherapy (both as a solitary approach and in the context of spa) on various diseases. A total of 203 potentially relevant articles were identified. In all, 29 RCTs were further evaluated; 22 of them (75.8%) investigated the use of balneotherapy in rheumatological diseases (eight in osteoarthritis, six in fibromyalgia, four in ankylosing spondylitis and four in rheumatoid arthritis) and three RCTs (10.3%) in other musculoskeletal system diseases (chronic low back pain). In addition, three relevant studies focused on psoriasis and one on Parkinson's disease. A total of 1720 patients with rheumatological and other musculoskeletal diseases were evaluated in these studies. Balneotherapy resulted in statistically significantly greater pain improvement than the control intervention in patients with rheumatological diseases and chronic low back pain in 17 (68%) of the 25 RCTs examined. In the remaining eight studies, pain was improved in the balneotherapy treatment arm, but this improvement was not statistically different from that of the comparator treatment arm(s). This beneficial effect lasted for different periods of time: 10 days in one study, 2 weeks in one study, 3 weeks in one study, 12 weeks in 2 studies, 3 months in 11 studies, 16-20 weeks in one study, 24 weeks in three studies, 6 months in three studies, 40 weeks in one study and 1 year in one study. The available data suggest that balneotherapy may be truly associated with improvement in several rheumatological diseases. However, existing research is not sufficiently strong to draw firm conclusions.

  10. Measuring the food and built environments in urban centres: reliability and validity of the EURO-PREVOB Community Questionnaire.

    PubMed

    Pomerleau, J; Knai, C; Foster, C; Rutter, H; Darmon, N; Derflerova Brazdova, Z; Hadziomeragic, A F; Pekcan, G; Pudule, I; Robertson, A; Brunner, E; Suhrcke, M; Gabrijelcic Blenkus, M; Lhotska, L; Maiani, G; Mistura, L; Lobstein, T; Martin, B W; Elinder, L S; Logstrup, S; Racioppi, F; McKee, M

    2013-03-01

    The authors designed an instrument to measure objectively aspects of the built and food environments in urban areas, the EURO-PREVOB Community Questionnaire, within the EU-funded project 'Tackling the social and economic determinants of nutrition and physical activity for the prevention of obesity across Europe' (EURO-PREVOB). This paper describes its development, reliability, validity, feasibility and relevance to public health and obesity research. The Community Questionnaire is designed to measure key aspects of the food and built environments in urban areas of varying levels of affluence or deprivation, within different countries. The questionnaire assesses (1) the food environment and (2) the built environment. Pilot tests of the EURO-PREVOB Community Questionnaire were conducted in five to 10 purposively sampled urban areas of different socio-economic status in each of Ankara, Brno, Marseille, Riga, and Sarajevo. Inter-rater reliability was compared between two pairs of fieldworkers in each city centre using three methods: inter-observer agreement (IOA), kappa statistics, and intraclass correlation coefficients (ICCs). Data were collected successfully in all five cities. Overall reliability of the EURO-PREVOB Community Questionnaire was excellent (inter-observer agreement (IOA) > 0.87, intraclass correlation coefficients (ICCs) > 0.91, and kappa statistics > 0.7). However, assessment of certain aspects of the quality of the built environment yielded slightly lower IOA coefficients than the quantitative aspects. The EURO-PREVOB Community Questionnaire was found to be a reliable and practical observational tool for measuring differences in community-level data on environmental factors that can impact on dietary intake and physical activity. The next step is to evaluate its predictive power by collecting behavioural and anthropometric data relevant to obesity and its determinants. Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  11. Exploring non-stationarity patterns in schizophrenia: neural reorganization abnormalities in the alpha band

    NASA Astrophysics Data System (ADS)

    Núñez, Pablo; Poza, Jesús; Bachiller, Alejandro; Gomez-Pilar, Javier; Lubeiro, Alba; Molina, Vicente; Hornero, Roberto

    2017-08-01

    Objective. The aim of this paper was to characterize brain non-stationarity during an auditory oddball task in schizophrenia (SCH). The level of non-stationarity was measured in the baseline and response windows of relevant tones in SCH patients and healthy controls. Approach. Event-related potentials were recorded from 28 SCH patients and 51 controls. Non-stationarity was estimated in the conventional electroencephalography frequency bands by means of Kullback-Leibler divergence (KLD). Relative power (RP) was also computed to assess a possible complementarity with KLD. Main results. Results showed a widespread statistically significant increase in the level of non-stationarity from baseline to response in all frequency bands for both groups. Statistically significant differences in non-stationarity were found between SCH patients and controls in beta-2 and in the alpha band. SCH patients showed more non-stationarity in the left parieto-occipital region during the baseline window in the beta-2 band. A leave-one-out cross validation classification study with feature selection based on binary stepwise logistic regression to discriminate between SCH patients and controls provided a positive predictive value of 72.73% and negative predictive value of 78.95%. Significance. KLD can characterize transient neural reorganization during an attentional task in response to novelty and relevance. Our findings suggest anomalous reorganization of neural dynamics in SCH during an oddball task. The abnormal frequency-dependent modulation found in SCH patients during relevant tones is in agreement with the hypothesis of aberrant salience detection in SCH. The increase in non-stationarity in the alpha band during the active task supports the notion that this band is involved in top-down processing. The baseline differences in the beta-2 band suggest that hyperactivation of the default mode network during attention tasks may be related to SCH symptoms. Furthermore, the classification improved when features from both KLD and RP were used, supporting the idea that these measures can be complementary.
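
    The divergence measure used above can be sketched in a few lines (hypothetical normalized distributions, not the study's EEG data): the Kullback-Leibler divergence quantifies how far the distribution estimated in the response window departs from that estimated in the baseline window.

    ```python
    # Kullback-Leibler divergence between baseline and response distributions.
    import numpy as np
    from scipy.stats import entropy

    # Hypothetical normalized power distributions over five frequency bins in the two windows.
    baseline = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
    response = np.array([0.15, 0.20, 0.30, 0.25, 0.10])

    kld = entropy(response, baseline)   # KL(response || baseline), in nats
    print(f"KLD = {kld:.3f} nats")
    ```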

  12. Pitfalls of national routine death statistics for maternal mortality study.

    PubMed

    Saucedo, Monica; Bouvier-Colle, Marie-Hélène; Chantry, Anne A; Lamarche-Vadel, Agathe; Rey, Grégoire; Deneux-Tharaux, Catherine

    2014-11-01

    The lessons learned from the study of maternal deaths depend on the accuracy of data. Our objective was to assess time trends in the underestimation of maternal mortality (MM) in the national routine death statistics in France and to evaluate their current accuracy for the selection and causes of maternal deaths. National data obtained by enhanced methods in 1989, 1999, and 2007-09 were used as the gold standard to assess time trends in the underestimation of MM ratios (MMRs) in death statistics. Enhanced data and death statistics for 2007-09 were further compared by characterising false negatives (FNs) and false positives (FPs). The distribution of cause-specific MMRs, as assessed by each system, was described. Underestimation of MM in death statistics decreased from 55.6% in 1989 to 11.4% in 2007-09 (P < 0.001). In 2007-09, of 787 pregnancy-associated deaths, 254 were classified as maternal by the enhanced system and 211 by the death statistics; 34% of maternal deaths in the enhanced system were FNs in the death statistics, and 20% of maternal deaths in the death statistics were FPs. The hierarchy of causes of MM differed between the two systems. The discordances were mainly explained by the lack of precision in the drafting of death certificates by clinicians. Although the underestimation of MM in routine death statistics has decreased substantially over time, one third of maternal deaths remain unidentified, and the main causes of death are incorrectly identified in these data. Defining relevant priorities in maternal health requires the use of enhanced methods for MM study. © 2014 John Wiley & Sons Ltd.

  13. Statistical learning algorithms for identifying contrasting tillage practices with landsat thematic mapper data

    USDA-ARS?s Scientific Manuscript database

    Tillage management practices have direct impact on water holding capacity, evaporation, carbon sequestration, and water quality. This study examines the feasibility of two statistical learning algorithms, such as Least Square Support Vector Machine (LSSVM) and Relevance Vector Machine (RVM), for cla...

  14. 50 CFR 600.133 - Scientific and Statistical Committee (SSC).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... information as is relevant to such Council's development and amendment of any fishery management plan. (b...). 600.133 Section 600.133 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC... Fishery Management Councils § 600.133 Scientific and Statistical Committee (SSC). (a) Each Council shall...

  15. 50 CFR 600.133 - Scientific and Statistical Committee (SSC).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... information as is relevant to such Council's development and amendment of any fishery management plan. (b...). 600.133 Section 600.133 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC... Fishery Management Councils § 600.133 Scientific and Statistical Committee (SSC). (a) Each Council shall...

  16. In pursuit of a science of agriculture: the role of statistics in field experiments.

    PubMed

    Parolini, Giuditta

    2015-09-01

    Since the beginning of the twentieth century statistics has reshaped the experimental cultures of agricultural research taking part in the subtle dialectic between the epistemic and the material that is proper to experimental systems. This transformation has become especially relevant in field trials and the paper will examine the British agricultural institution, Rothamsted Experimental Station, where statistical methods nowadays popular in the planning and analysis of field experiments were developed in the 1920s. At Rothamsted statistics promoted randomisation over systematic arrangements, factorisation over one-question trials, and emphasised the importance of the experimental error in assessing field trials. These changes in methodology transformed also the material culture of agricultural science, and a new body, the Field Plots Committee, was created to manage the field research of the agricultural institution. Although successful, the vision of field experimentation proposed by the Rothamsted statisticians was not unproblematic. Experimental scientists closely linked to the farming community questioned it in favour of a field research that could be more easily understood by farmers. The clash between the two agendas reveals how the role attributed to statistics in field experimentation defined different pursuits of agricultural research, alternately conceived of as a scientists' science or as a farmers' science.

  17. Chiral Symmetry Breaking in Crystal Growth: Is Hydrodynamic Convection Relevant?

    NASA Technical Reports Server (NTRS)

    Martin, B.; Tharrington, A.; Wu, Xiao-Lun

    1996-01-01

    The effects of mechanical stirring on nucleation and chiral symmetry breaking have been investigated for a simple inorganic molecule, sodium chlorate (NaClO3). In contrast to earlier findings, our experiment suggests that the symmetry breaking may have little to do with hydrodynamic convection. Rather the effect can be reasonably accounted for by mechanical damage to incipient crystals. The catastrophic events, creating numerous small 'secondary' crystals, produce statistical domination of one chiral species over the other. Our conclusion is supported by a number of observations using different mixing mechanisms.

  18. Machine Learning Prediction of the Energy Gap of Graphene Nanoflakes Using Topological Autocorrelation Vectors.

    PubMed

    Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S

    2016-11-14

    The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.

  19. Characterization of exopolymers of aquatic bacteria by pyrolysis-mass spectrometry

    NASA Technical Reports Server (NTRS)

    Ford, T.; Sacco, E.; Black, J.; Kelley, T.; Goodacre, R.; Berkeley, R. C.; Mitchell, R.

    1991-01-01

    Exopolymers from a diverse collection of marine and freshwater bacteria were characterized by pyrolysis-mass spectrometry (Py-MS). Py-MS provides spectra of pyrolysis fragments that are characteristic of the original material. Analysis of the spectra by multivariate statistical techniques (principal component and canonical variate analysis) separated these exopolymers into distinct groups. Py-MS clearly distinguished characteristic fragments, which may be derived from components responsible for functional differences between polymers. The importance of these distinctions and the relevance of pyrolysis information to exopolysaccharide function in aquatic bacteria are discussed.
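
    A minimal sketch of the multivariate step mentioned above, assuming nothing about the original spectra: principal component analysis applied to synthetic pyrolysis mass spectra to show how group structure can emerge in the score space.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n_samples, n_masses = 30, 200                   # e.g. intensities over a range of m/z values
        group = rng.integers(0, 3, n_samples)           # three hypothetical polymer groups
        spectra = rng.poisson(5, (n_samples, n_masses)).astype(float)
        spectra += group[:, None] * rng.random(n_masses) * 10   # group-specific fragment pattern

        spectra /= spectra.sum(axis=1, keepdims=True)   # normalise to total ion count
        scores = PCA(n_components=2).fit_transform(spectra)
        for g in range(3):
            print(f"group {g} mean PC1/PC2:", scores[group == g].mean(axis=0))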

  20. Keyword extraction by nonextensivity measure.

    PubMed

    Mehri, Ali; Darooneh, Amir H

    2011-05-01

    The presence of a long-range correlation in the spatial distribution of a relevant word type, in spite of the random occurrence of irrelevant word types, is an important feature of human-written texts. We classify the correlation between the occurrences of words by nonextensive statistical mechanics for the word-ranking process. In particular, we consider the nonextensivity parameter as an alternative metric for the spatial correlation of a word in the text, and rank the words by this measure. Finally, we compare different methods for keyword extraction. © 2011 American Physical Society
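
    The sketch below is not the paper's nonextensivity parameter; it is a simplified stand-in that exploits the same signature, namely that relevant words occur in bursts, by ranking word types by the coefficient of variation of their inter-occurrence gaps. The synthetic "text" is invented for illustration.

        import numpy as np

        def gap_cv(positions, text_length):
            """Coefficient of variation of gaps between successive occurrences of a word."""
            pos = np.sort(np.asarray(positions, dtype=float))
            gaps = np.diff(np.concatenate(([0.0], pos, [float(text_length)])))
            return gaps.std() / gaps.mean()

        def rank_words(tokens, min_count=10):
            scores = {}
            for w in set(tokens):
                pos = [i for i, t in enumerate(tokens) if t == w]
                if len(pos) >= min_count:               # need enough occurrences to score
                    scores[w] = gap_cv(pos, len(tokens))
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        rng = np.random.default_rng(1)
        tokens = list(rng.choice(["the", "of", "and", "data", "model"], size=5000))
        for start in (500, 2600, 4100):                 # a topic word occurring in three bursts
            for k in range(25):
                tokens[start + 4 * k] = "entropy"

        print(rank_words(tokens)[:3])                   # the clustered word should score highest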

  1. Metrology in health: a pilot study

    NASA Astrophysics Data System (ADS)

    Ferreira, M.; Matos, A.

    2015-02-01

    The purpose of this paper is to identify and analyze some relevant issues which arise when the concept of metrological traceability is applied to health care facilities. The discussion is structured around results obtained through a characterization and comparative description of the practices applied in 45 different Portuguese health entities. Following a qualitative exploratory approach, the information collected supported the initial research hypotheses and the development of the questionnaire survey. A quantitative methodology, including descriptive and inferential statistical analysis of the experimental data set, was also applied.

  2. A multimodal wave spectrum-based approach for statistical downscaling of local wave climate

    USGS Publications Warehouse

    Hegermiller, Christie; Antolinez, Jose A A; Rueda, Ana C.; Camus, Paula; Perez, Jorge; Erikson, Li; Barnard, Patrick; Mendez, Fernando J.

    2017-01-01

    Characterization of wave climate by bulk wave parameters is insufficient for many coastal studies, including those focused on assessing coastal hazards and long-term wave climate influences on coastal evolution. This issue is particularly relevant for studies using statistical downscaling of atmospheric fields to local wave conditions, which are often multimodal in large ocean basins (e.g. the Pacific). Swell may be generated in vastly different wave generation regions, yielding complex wave spectra that are inadequately represented by a single set of bulk wave parameters. Furthermore, the relationship between atmospheric systems and local wave conditions is complicated by variations in arrival time of wave groups from different parts of the basin. Here, we address these two challenges by improving upon the spatiotemporal definition of the atmospheric predictor used in statistical downscaling of local wave climate. The improved methodology separates the local wave spectrum into “wave families,” defined by spectral peaks and discrete generation regions, and relates atmospheric conditions in distant regions of the ocean basin to local wave conditions by incorporating travel times computed from effective energy flux across the ocean basin. When applied to locations with multimodal wave spectra, including Southern California and Trujillo, Peru, the new methodology improves the ability of the statistical model to project significant wave height, peak period, and direction for each wave family, retaining more information from the full wave spectrum. This work is the base of statistical downscaling by weather types, which has recently been applied to coastal flooding and morphodynamic applications.
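
    One small, self-contained piece of the approach described above can be illustrated directly: the travel time of swell from a distant generation region follows from the deep-water group velocity c_g = gT/(4*pi). The peak period and distance below are hypothetical.

        import numpy as np

        def swell_travel_time_days(peak_period_s, distance_km):
            g = 9.81
            c_g = g * peak_period_s / (4.0 * np.pi)        # deep-water group velocity (m/s)
            return distance_km * 1000.0 / c_g / 86400.0    # days in transit

        # e.g. 16 s Southern Ocean swell travelling roughly 9000 km to Southern California
        print(f"{swell_travel_time_days(16.0, 9000.0):.1f} days")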

  3. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450
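
    The toy sketch below is not the authors' hierarchical Bayesian model; it only illustrates the ingredients the abstract mentions (a common trend, county effects, and temporal autocorrelation) and how simulated yields translate into a premium rate via the expected shortfall below a coverage level. All numbers are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n_counties, n_years = 20, 13
        trend = 0.03 * np.arange(n_years)                      # common technology trend
        county_eff = rng.normal(0, 0.2, n_counties)            # spatial (county) random effects
        rho, sigma = 0.5, 0.15                                 # AR(1) parameters

        yields = np.zeros((n_counties, n_years))
        for t in range(n_years):
            shock = rng.normal(0, sigma, n_counties)
            prev = yields[:, t - 1] - trend[t - 1] - county_eff if t else 0.0
            yields[:, t] = 1.0 + trend[t] + county_eff + rho * np.atleast_1d(prev) + shock

        coverage = 0.75 * yields.mean(axis=1, keepdims=True)   # 75% of expected yield
        premium_rate = np.maximum(coverage - yields, 0).mean(axis=1) / coverage.ravel()
        print(premium_rate.round(3))                           # one rate per simulated county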

  4. Medical Emergency Exceptions in State Abortion Statutes: The Statistical Record.

    PubMed

    Linton, Paul Benjamin

    2016-01-01

    This article attempts to determine, first, whether emergency exceptions in statutes regulating abortion have been abused and, second, whether the standard used in such an exception--subjective or objective--makes a difference in the reported incidence of such emergencies. A review of the statistical data supports two conclusions. First, physicians who perform abortions and have complied with state reporting requirements have not relied upon the medical emergency exceptions in state abortion statutes to evade the requirements of those statutes. Second, the use of an objective standard for evaluating medical emergencies ("reasonable medical judgment") has not been associated with fewer reported emergencies (per number of abortions performed) than the use of a subjective standard ("good faith clinical judgment"). Both of these conclusions may be relevant in drafting other abortion statutes including prohibitions (e.g., post-viability abortions).

  5. Privacy enhancing techniques - the key to secure communication and management of clinical and genomic data.

    PubMed

    De Moor, G J E; Claerhout, B; De Meyer, F

    2003-01-01

    To introduce some of the privacy protection problems related to genomics based medicine and to highlight the relevance of Trusted Third Parties (TTPs) and of Privacy Enhancing Techniques (PETs) in the restricted context of clinical research and statistics. Practical approaches based on two different pseudonymisation models, both for batch and interactive data collection and exchange, are described and analysed. The growing need of managing both clinical and genetic data raises important legal and ethical challenges. Protecting human rights in the realm of privacy, while optimising research potential and other statistical activities is a challenge that can easily be overcome with the assistance of a trust service provider offering advanced privacy enabling/enhancing solutions. As such, the use of pseudonymisation and other innovative Privacy Enhancing Techniques can unlock valuable data sources.
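
    As a minimal sketch of one building block of such services (assuming a keyed-hash pseudonymisation scheme, which is only one of several techniques the paper covers), the snippet below replaces a patient identifier with a deterministic pseudonym while leaving the clinical payload intact. The key and identifiers are illustrative.

        import hmac, hashlib

        SECRET_KEY = b"held-by-the-trusted-third-party"   # illustrative; never hard-code in practice

        def pseudonymise(patient_id: str) -> str:
            """Deterministic pseudonym: the same input always maps to the same opaque token."""
            return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

        record = {"patient_id": "NHS-1234567", "diagnosis": "E11.9"}
        record["patient_id"] = pseudonymise(record["patient_id"])
        print(record)   # clinical payload kept, identifier replaced by a pseudonym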

  6. [Pitfalls in informed consent: a statistical analysis of malpractice law suits].

    PubMed

    Echigo, Junko

    2014-05-01

    In medical malpractice law suits, the notion of informed consent is often relevant in assessing whether negligence can be attributed to the medical practitioner who has caused injury to a patient. Furthermore, it is not rare that courts award damages for a lack of appropriate informed consent alone. In this study, two results were arrived at from a statistical analysis of medical malpractice law suits. One, unexpectedly, was that the severity of a patient's illness made no significant difference to whether damages were awarded. The other was that cases of typical medical treatment that national medical insurance does not cover were involved significantly more often than insured treatment cases. In cases where damages were awarded, the courts required more disclosure and written documents of information by medical practitioners, especially about complications and adverse effects that the patient might suffer.

  7. Medical literature searches: a comparison of PubMed and Google Scholar.

    PubMed

    Nourbakhsh, Eva; Nugent, Rebecca; Wang, Helen; Cevik, Cihan; Nugent, Kenneth

    2012-09-01

    Medical literature searches provide critical information for clinicians. However, the best strategy for identifying relevant high-quality literature is unknown. We compared search results using PubMed and Google Scholar on four clinical questions and analysed these results with respect to article relevance and quality. Abstracts from the first 20 citations for each search were classified into three relevance categories. We used the weighted kappa statistic to analyse reviewer agreement and nonparametric rank tests to compare the number of citations for each article and the corresponding journals' impact factors. Reviewers ranked 67.6% of PubMed articles and 80% of Google Scholar articles as at least possibly relevant (P = 0.116) with high agreement (all kappa P-values < 0.01). Google Scholar articles had a higher median number of citations (34 vs. 1.5, P < 0.0001) and came from higher impact factor journals (5.17 vs. 3.55, P = 0.036). PubMed searches and Google Scholar searches often identify different articles. In this study, Google Scholar articles were more likely to be classified as relevant, had higher numbers of citations and were published in higher impact factor journals. The identification of frequently cited articles using Google Scholar for searches probably has value for initial literature searches. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.
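
    For readers unfamiliar with the agreement statistic used here, the following short example (with invented ratings) computes a linearly weighted kappa between two reviewers' three-level relevance ratings using scikit-learn.

        from sklearn.metrics import cohen_kappa_score

        # 0 = not relevant, 1 = possibly relevant, 2 = relevant (made-up ratings)
        reviewer_a = [2, 2, 1, 0, 2, 1, 0, 2, 1, 2, 0, 1, 2, 2, 0, 1, 2, 0, 1, 2]
        reviewer_b = [2, 1, 1, 0, 2, 1, 0, 2, 2, 2, 0, 1, 2, 1, 0, 1, 2, 0, 0, 2]
        print(cohen_kappa_score(reviewer_a, reviewer_b, weights="linear"))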

  8. Application of statistical downscaling technique for the production of wine grapes (Vitis vinifera L.) in Spain

    NASA Astrophysics Data System (ADS)

    Gaitán Fernández, E.; García Moreno, R.; Pino Otín, M. R.; Ribalaygua Batalla, J.

    2012-04-01

    Climate and soil are two of the most important limiting factors for agricultural production. Climate change has now been documented in many geographical locations, affecting different cropping systems. General Circulation Models (GCMs) have become important tools for simulating the most relevant aspects of the climate expected for the twenty-first century under climate change. These models reproduce the general features of atmospheric dynamics, but their low resolution (about 200 km) prevents proper simulation of lower-scale meteorological effects. Downscaling techniques overcome this problem by adapting the model outcomes to the local scale. In this context, FIC (Fundación para la Investigación del Clima) has developed a statistical downscaling technique based on a two-step analogue method. This methodology has been broadly tested in national and international settings, leading to excellent results for future climate scenarios. In a collaborative project, this statistical downscaling technique was applied to predict future scenarios for grape-growing systems in Spain. Such a model is very important for predicting the expected climate for different crops, especially grapes, where the success of different varieties is highly related to climate and soil. The model allowed the implementation of agricultural conservation practices in crop production, the detection of areas highly sensitive to negative impacts of climatic change in the different regions, mainly those with a protected designation of origin, and the definition of new production areas with optimal edaphoclimatic conditions for the different varieties.
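
    A greatly simplified sketch of the analogue idea (not FIC's actual two-step method) is shown below: for a target large-scale field, find the most similar historical fields and use the local observations on those analogue days as the downscaled estimate. All arrays are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        n_hist, n_grid = 5000, 300
        hist_fields = rng.normal(size=(n_hist, n_grid))     # archive of large-scale predictor fields
        hist_local_t = 15 + 8 * hist_fields[:, 0] + rng.normal(0, 2, n_hist)   # local observations

        def downscale(target_field, k=30):
            d = np.linalg.norm(hist_fields - target_field, axis=1)   # similarity in predictor space
            analogues = np.argsort(d)[:k]                            # k most similar historical days
            return hist_local_t[analogues].mean()

        gcm_day = rng.normal(size=n_grid)        # a GCM field for one future day (synthetic)
        print(f"downscaled local estimate: {downscale(gcm_day):.1f} °C")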

  9. Prediction of phenotypes of missense mutations in human proteins from biological assemblies.

    PubMed

    Wei, Qiong; Xu, Qifang; Dunbrack, Roland L

    2013-02-01

    Single nucleotide polymorphisms (SNPs) are the most frequent variation in the human genome. Nonsynonymous SNPs that lead to missense mutations can be neutral or deleterious, and several computational methods have been presented that predict the phenotype of human missense mutations. These methods use sequence-based and structure-based features in various combinations, relying on different statistical distributions of these features for deleterious and neutral mutations. One structure-based feature that has not been studied significantly is the accessible surface area within biologically relevant oligomeric assemblies. These assemblies are different from the crystallographic asymmetric unit for more than half of X-ray crystal structures. We find that mutations in the core of proteins or in the interfaces in biological assemblies are significantly more likely to be disease-associated than those on the surface of the biological assemblies. For structures with more than one protein in the biological assembly (whether the same sequence or different), we find the accessible surface area from biological assemblies provides a statistically significant improvement in prediction over the accessible surface area of monomers from protein crystal structures (P = 6e-5). When adding this information to sequence-based features such as the difference between wildtype and mutant position-specific profile scores, the improvement from biological assemblies is statistically significant but much smaller (P = 0.018). Combining this information with sequence-based features in a support vector machine leads to 82% accuracy on a balanced dataset of 50% disease-associated mutations from SwissVar and 50% neutral mutations from human/primate sequence differences in orthologous proteins. Copyright © 2012 Wiley Periodicals, Inc.
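
    The sketch below is schematic only: a support vector machine combining one structure-based feature (relative accessible surface area in the biological assembly) with one sequence-based feature (profile score difference) on simulated data, to show the shape of the classification step described above. It is not the authors' trained model or dataset.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n = 400
        rsa_assembly = rng.uniform(0, 1, n)                 # buried (0) to exposed (1) in the assembly
        profile_diff = rng.normal(0, 1, n)                  # wildtype vs. mutant profile score difference
        # simulated rule: deleterious mutations tend to be buried and have large profile differences
        p = 1 / (1 + np.exp(-(2.5 * (0.5 - rsa_assembly) + 1.5 * profile_diff)))
        y = rng.random(n) < p                               # True = disease-associated

        X = np.column_stack([rsa_assembly, profile_diff])
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))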

  10. Technical Note: Higher-order statistical moments and a procedure that detects potentially anomalous years as two alternative methods describing alterations in continuous environmental data

    Treesearch

    I. Arismendi; S. L. Johnson; J. B. Dunham

    2015-01-01

    Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical...
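
    A minimal sketch of the first approach, assuming a synthetic daily series in place of a real environmental record: compute the mean and dispersion together with the higher-order moments (skewness and kurtosis) year by year.

        import numpy as np
        from scipy.stats import skew, kurtosis

        rng = np.random.default_rng(5)
        years = {yr: rng.gamma(shape=2 + 0.05 * i, scale=3, size=365)   # slowly shifting regime
                 for i, yr in enumerate(range(1990, 2000))}

        for yr, series in years.items():
            print(yr, round(series.mean(), 2), round(series.std(), 2),
                  round(skew(series), 2), round(kurtosis(series), 2))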

  11. Learning predictive statistics from temporal sequences: Dynamics and strategies.

    PubMed

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-10-01

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
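
    To make the sequence construction concrete, the sketch below generates symbol streams with either simple frequency statistics or first-order (context-based) Markov statistics; the alphabet and transition matrix are invented and only illustrate the kind of structure described.

        import numpy as np

        rng = np.random.default_rng(6)
        symbols = ["A", "B", "C", "D"]

        def frequency_sequence(probs, length=200):
            # zeroth-order structure: some symbols are simply more probable than others
            return rng.choice(symbols, size=length, p=probs)

        def markov_sequence(transition, length=200):
            # first-order structure: the next symbol depends on the current one
            seq = [rng.integers(len(symbols))]
            for _ in range(length - 1):
                seq.append(rng.choice(len(symbols), p=transition[seq[-1]]))
            return [symbols[i] for i in seq]

        T = np.array([[0.70, 0.10, 0.10, 0.10],
                      [0.10, 0.70, 0.10, 0.10],
                      [0.10, 0.10, 0.10, 0.70],
                      [0.25, 0.25, 0.25, 0.25]])
        print("".join(frequency_sequence([0.4, 0.3, 0.2, 0.1])[:40]))
        print("".join(markov_sequence(T)[:40]))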

  12. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data

    PubMed Central

    Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J.; Yanes, Oscar

    2012-01-01

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper provides a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples. PMID:24957762

  13. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data.

    PubMed

    Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J; Yanes, Oscar

    2012-10-18

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper provides a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.
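
    As a hedged illustration of one of the issues listed above (correction for multiple testing), the sketch below runs a Welch t-test per feature on a simulated intensity matrix and then applies a hand-rolled Benjamini-Hochberg step-up procedure; it is not taken from the guideline itself.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(7)
        n_features, n_per_group = 1000, 12
        control = rng.lognormal(0, 1, (n_per_group, n_features))
        case = rng.lognormal(0, 1, (n_per_group, n_features))
        case[:, :50] *= 2.0                                   # only the first 50 features truly differ

        p = ttest_ind(case, control, equal_var=False).pvalue  # one Welch t-test p-value per feature

        def benjamini_hochberg(pvals, alpha=0.05):
            """Step-up FDR control: keep all hypotheses up to the largest passing rank."""
            order = np.argsort(pvals)
            ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
            passed = ranked <= alpha
            keep = np.zeros(len(pvals), dtype=bool)
            if passed.any():
                keep[order[: passed.nonzero()[0].max() + 1]] = True
            return keep

        print("features kept after FDR control:", benjamini_hochberg(p).sum())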

  14. Constructing a Reward-Related Quality of Life Statistic in Daily Life—a Proof of Concept Study Using Positive Affect

    PubMed Central

    Verhagen, Simone J. W.; Simons, Claudia J. P.; van Zelst, Catherine; Delespaul, Philippe A. E. G.

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a ‘behavior setting’) with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low, average, and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant in clinical implementation, where often only small datasets are available. PMID:29163294

  15. Constructing a Reward-Related Quality of Life Statistic in Daily Life-a Proof of Concept Study Using Positive Affect.

    PubMed

    Verhagen, Simone J W; Simons, Claudia J P; van Zelst, Catherine; Delespaul, Philippe A E G

    2017-01-01

    Background: Mental healthcare needs person-tailored interventions. Experience Sampling Method (ESM) can provide daily life monitoring of personal experiences. This study aims to operationalize and test a measure of momentary reward-related Quality of Life (rQoL). Intuitively, quality of life improves by spending more time on rewarding experiences. ESM clinical interventions can use this information to coach patients to find a realistic, optimal balance of positive experiences (maximize reward) in daily life. rQoL combines the frequency of engaging in a relevant context (a 'behavior setting') with concurrent (positive) affect. High rQoL occurs when the most frequent behavior settings are combined with positive affect or infrequent behavior settings co-occur with low positive affect. Methods: Resampling procedures (Monte Carlo experiments) were applied to assess the reliability of rQoL using various behavior setting definitions under different sampling circumstances, for real or virtual subjects with low, average, and high contextual variability. Furthermore, resampling was used to assess whether rQoL is a distinct concept from positive affect. Virtual ESM beep datasets were extracted from 1,058 valid ESM observations for virtual and real subjects. Results: Behavior settings defined by Who-What contextual information were most informative. Simulations of at least 100 ESM observations are needed for reliable assessment. Virtual ESM beep datasets of a real subject can be defined by Who-What-Where behavior setting combinations. Large sample sizes are necessary for reliable rQoL assessments, except for subjects with low contextual variability. rQoL is distinct from positive affect. Conclusion: rQoL is a feasible concept. Monte Carlo experiments should be used to assess the reliable implementation of an ESM statistic. Future research in ESM should assess the behavior of summary statistics under different sampling situations. This exploration is especially relevant in clinical implementation, where often only small datasets are available.
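
    The following rough Monte Carlo sketch is in the spirit of the reliability check described above, but uses an invented frequency-weighted positive-affect statistic as a stand-in for rQoL: virtual ESM datasets of increasing size are drawn from a synthetic pool of beeps and the stability of the statistic is tracked.

        import numpy as np

        rng = np.random.default_rng(8)
        n_pool = 1058
        setting = rng.integers(0, 8, n_pool)                     # 8 hypothetical behavior settings
        pa = rng.normal(5, 1, n_pool) + 0.2 * setting            # positive affect per beep (synthetic)

        def rqol_like(idx):
            s, a = setting[idx], pa[idx]
            freq = np.bincount(s, minlength=8) / len(s)           # how often each setting occurs
            mean_pa = np.array([a[s == k].mean() if (s == k).any() else 0.0 for k in range(8)])
            return float(freq @ mean_pa)                          # frequency-weighted positive affect

        for n_beeps in (30, 60, 100, 200):
            stats = [rqol_like(rng.choice(n_pool, n_beeps, replace=False)) for _ in range(500)]
            print(n_beeps, "beeps -> SD of statistic:", round(np.std(stats), 3))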

  16. Library Statistical Data Base Formats and Definitions.

    ERIC Educational Resources Information Center

    Jones, Dennis; And Others

    Represented are the detailed set of data structures relevant to the categorization of information, terminology, and definitions employed in the design of the library statistical data base. The data base, or management information system, provides administrators with a framework of information and standardized data for library management, planning,…

  17. Understanding Broadscale Wildfire Risks in a Human-Dominated Landscape

    Treesearch

    Jeffrey P. Prestemon; John M. Pye; David T. Butry; Thomas P. Holmes; D. Evan Mercer

    2002-01-01

    Broadscale statistical evaluations of wildfire incidence can answer policy relevant questions about the effectiveness of microlevel vegetation management and can identify subjects needing further study. A dynamic time series cross-sectional model was used to evaluate the statistical links between forest wildfire and vegetation management, human land use, and climatic...

  18. A Statistical Decision Model for Periodical Selection for a Specialized Information Center

    ERIC Educational Resources Information Center

    Dym, Eleanor D.; Shirey, Donald L.

    1973-01-01

    An experiment is described which attempts to define a quantitative methodology for the identification and evaluation of all possibly relevant periodical titles containing toxicological-biological information. A statistical decision model was designed and employed, along with yes/no criteria questions, a training technique and a quality control…

  19. Introductory Statistics and Fish Management.

    ERIC Educational Resources Information Center

    Jardine, Dick

    2002-01-01

    Describes how fisheries research and management data (available on a website) have been incorporated into an Introductory Statistics course. In addition to the motivation gained from seeing the practical relevance of the course, some students have participated in the data collection and analysis for the New Hampshire Fish and Game Department. (MM)

  20. Teaching Primary School Mathematics and Statistics: Evidence-Based Practice

    ERIC Educational Resources Information Center

    Averill, Robin; Harvey, Roger

    2010-01-01

    Here is the only reference book you will ever need for teaching primary school mathematics and statistics. It is full of exciting and engaging snapshots of excellent classroom practice relevant to "The New Zealand Curriculum" and national mathematics standards. There are many fascinating examples of investigative learning experiences,…

  1. Forecast in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Baviera, R.; Pasquini, M.; Serva, M.; Vergni, D.; Vulpiani, A.

    2001-04-01

    We perform a statistical study of weak efficiency in Deutschemark/US dollar exchange rates using high frequency data. The presence of correlations in the returns sequence implies the possibility of a statistical forecast of market behavior. We show the existence of correlations and how information theory can be relevant in this context.
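
    A simple illustration of the kind of check implied by weak-form efficiency is shown below: estimating the autocorrelation of a simulated high-frequency return series at several lags; the series is synthetic, not the Deutschemark/US dollar data.

        import numpy as np

        rng = np.random.default_rng(9)
        n = 20000
        returns = rng.normal(0, 1e-4, n)
        returns[1:] += 0.05 * returns[:-1]        # inject a small lag-1 correlation

        def autocorr(x, lag):
            x = x - x.mean()
            return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

        for lag in (1, 2, 5, 10):
            print(f"lag {lag}: r = {autocorr(returns, lag):+.3f}")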

  2. Price Analysis of Railway Freight Transport under Marketing Mechanism

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Fang, Xiaoping; Chen, Zhiya

    Addressing problems in the reform of the railway tariff system and the pricing of transport, and drawing on an analysis of how price elasticity affects pricing practice, this article proposes a multiple regression model that quantifies price elasticity. The model includes several factors that influence price elasticity, such as the average railway freight charge, the average charge of the nearest substitute transport mode, GDP per capita at the point of origin, and a series of dummy variables reflecting the characteristics of particular production and consumption regions. It can estimate the price elasticity of different freight classes in different regions and predict freight traffic volume at different rate levels. It can also compute confidence levels and evaluate the relevance of each parameter so that irrelevant or weakly relevant variables can be removed. The model provides a sound theoretical basis for guiding the pricing decisions of transport enterprises under market conditions and is suitable for railway freight, passenger traffic, and other transport modes as well. SPSS (Statistical Package for the Social Sciences) software was used to compute and analyze the example. The calculations were implemented in the HYFX system (funded by the Ministry of Railways).
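
    The sketch below is a loose reconstruction of the kind of model described (a log-log demand regression with dummy variables, in which the coefficient on log price is the own-price elasticity); the data are fabricated and the code uses plain least squares rather than SPSS.

        import numpy as np

        rng = np.random.default_rng(10)
        n = 500
        log_price = rng.normal(3.0, 0.3, n)             # average railway freight charge (log)
        log_rival = rng.normal(3.2, 0.3, n)             # charge of the nearest substitute mode (log)
        log_gdp = rng.normal(9.0, 0.5, n)               # GDP per capita at the point of origin (log)
        region = rng.integers(0, 3, n)                  # production/consumption region categories
        dummies = np.eye(3)[region][:, 1:]              # dummy-coded, one level dropped as baseline

        log_volume = 5 - 1.2 * log_price + 0.6 * log_rival + 0.4 * log_gdp \
                     + dummies @ np.array([0.3, -0.2]) + rng.normal(0, 0.2, n)

        X = np.column_stack([np.ones(n), log_price, log_rival, log_gdp, dummies])
        beta, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
        print("estimated own-price elasticity:", round(beta[1], 2))   # close to the true -1.2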

  3. Improving secondary prevention screening in clinical encounters using mhealth among prelicensure master's entry clinical nursing students.

    PubMed

    FitzGerald, Leah Z; Rorie, Anne; Salem, Benissa E

    2015-04-01

    To determine the feasibility and acceptability of an mHealth application among nursing students for health promotion and secondary prevention health recommendations for hospitalized adult patients. A pretest-posttest design was used with a convenience sample of 169 prelicensure master's entry clinical nursing students in a large urban public university. Survey questions assessed intention to use, perceived usefulness, perceived ease of use, subjective norm, voluntariness, clinical area relevance, output quality, and result demonstrability of the United States Preventive Services Task Force (USPSTF) evidence-based practice guidelines via the mHealth application. Descriptive statistics and frequencies were used to explore sociodemographics; paired t-tests were used to evaluate pre- and posttest differences. Significant pre- to posttest differences (p < .01) were found for intention to use, perceived usefulness, subjective norm, voluntariness, image, clinical relevance, result demonstrability, and output quality (p < .02). Ease of use of the mHealth application was not significantly different. These findings highlight the need to integrate evidence-based practice tools using mHealth technology among prelicensure master's entry clinical nursing students in order to engage and foster translational learning and improve dissemination of secondary prevention screening guidelines among hospitalized patients. © 2015 Sigma Theta Tau International.

  4. Modeling of pharmacokinetics, efficacy, and hemodynamic effects of macitentan in patients with pulmonary arterial hypertension.

    PubMed

    Krause, Andreas; Zisowsky, Jochen; Dingemanse, Jasper

    2018-04-01

    Macitentan is the first endothelin receptor antagonist with demonstrated efficacy on morbidity and mortality in pulmonary arterial hypertension (PAH) in the pivotal study SERAPHIN. The pharmacokinetics (PK) of macitentan and its active metabolite, ACT-132577, were characterized in a population model. Efficacy and hemodynamics (pharmacodynamics, PD) were related to PK based on PK/PD modeling. Sex, age, and body weight influenced the PK to a statistically significant extent. Model-based simulations showed that these variables are not clinically relevant. Concomitant use of PAH medication (PDE-5 inhibitors) did not influence macitentan trough concentration to a relevant extent. Efficacy and hemodynamics showed clear differences from placebo at macitentan doses of 3 and 10 mg, with consistently superior effects for 10 mg. After 6 months, PAH patients showed model-predicted 6-min walk distance (6-MWD) improvements of 1.0 m on placebo compared to 29.8 and 34.1 m on 3 and 10 mg of macitentan, respectively. Higher macitentan concentrations were associated with reductions in pulmonary vascular resistance (PVR), mean right atrial and pulmonary arterial pressure, and total pulmonary resistance (TPR) and increases in cardiac index (CI) and mixed venous oxygen saturation. Statistical significance was determined for PVR, TPR, and CI but not for 6-MWD. In addition, PVR showed more pronounced differences between active treatment and placebo than 6-MWD. Modeling identified statistically significant inter-patient differences; simulations to assess the magnitude of the effects permitted clinical judgment. The same approach will allow for extrapolation to children. Hemodynamic markers might be better markers of treatment effects than 6-MWD. The SERAPHIN study and its open-label extension are registered with ClinicalTrials.gov with identifiers NCT00660179 (https://www.clinicaltrials.gov/ct2/show/NCT00660179) and NCT00667823 (https://clinicaltrials.gov/ct2/show/NCT00667823) and with EudraCT with identifiers 2007-002440-14 (https://www.clinicaltrialsregister.eu/ctr-search/search?query=2007-002440-14) and 2007-003694-27 (https://www.clinicaltrialsregister.eu/ctr-search/search?query=2007-003694-27). Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Creating Near-Term Climate Scenarios for AgMIP

    NASA Astrophysics Data System (ADS)

    Goddard, L.; Greene, A. M.; Baethgen, W.

    2012-12-01

    For the next assessment report of the IPCC (AR5), attention is being given to development of climate information that is appropriate for adaptation, such as decadal-scale and near-term predictions intended to capture the combined effects of natural climate variability and the emerging climate change signal. While the science and practice evolve for the production and use of dynamic decadal prediction, information relevant to agricultural decision-makers can be gained from analysis of past decadal-scale trends and variability. Statistical approaches that mimic the characteristics of observed year-to-year variability can indicate the range of possibilities and their likelihood. In this talk we present work towards development of near-term climate scenarios, which are needed to engage decision-makers and stakeholders in the regions in current decision-making. The work includes analyses of decadal-scale variability and trends in the AgMIP regions, and statistical approaches that capture year-to-year variability and the associated persistence of wet and dry years. We will outline the general methodology and some of the specific considerations in the regional application of the methodology for different AgMIP regions, such as those for Western Africa versus southern Africa. We will also show some examples of quality checks and informational summaries of the generated data, including (1) metrics of information quality such as probabilistic reliability for a suite of relevant climate variables and indices important for agriculture; (2) quality checks relative to the use of this climate data in crop models; and (3) summary statistics (e.g., for 5-10-year periods or across given spatial scales).

  6. Statistical Study in the mid-altitude cusp region: wave and particle data comparison using a normalized cusp crossing duration

    NASA Astrophysics Data System (ADS)

    Grison, B.; Escoubet, C. P.; Pitout, F.; Cornilleau-Wehrlin, N.; Dandouras, I.; Lucek, E.

    2009-04-01

    In the mid-altitude cusp region the DC magnetic field presents a diamagnetic cavity caused by the intense earthward ion flux coming from the magnetosheath. Strong ultra-low-frequency (ULF) magnetic activity is also commonly observed in this region. Most statistical studies of the mid-altitude cusp have focused on the location of the cusp and its dependence on, and response to, solar wind, interplanetary magnetic field, and dipole tilt angle parameters. In our study we use the database built by Pitout et al. (2006) to study the link between wave power in the ULF range (0.35-10 Hz) measured by the STAFF-SC instrument, the ion plasma properties measured by the CIS (and CODIF) instrument, and the diamagnetic cavity seen in FGM data in the mid-altitude cusp region. To compare the different crossings we do not use the cusp position and dynamics; instead we use a normalized cusp crossing duration, which makes it easy to average the properties over a large number of crossings. As is usual in the cusp, it is particularly relevant to sort the crossings by the corresponding interplanetary magnetic field (IMF) orientation when analysing the results. In particular, we try to find out which parameter is most relevant to link to the strong wave activity. The global statistics confirm previous single-case observations that noted simultaneity between ion injections and wave activity enhancements. We also present results concerning other ion parameters and the diamagnetic cavity observed in the mid-altitude cusp region.
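
    A minimal sketch of the normalization idea, with synthetic wave-power profiles standing in for the STAFF-SC measurements: each crossing is resampled onto a common 0-1 "fraction of crossing" axis so that crossings of different durations can be averaged.

        import numpy as np

        rng = np.random.default_rng(11)
        norm_axis = np.linspace(0.0, 1.0, 101)

        def normalize_crossing(t, signal):
            frac = (t - t[0]) / (t[-1] - t[0])          # 0 at cusp entry, 1 at cusp exit
            return np.interp(norm_axis, frac, signal)

        profiles = []
        for _ in range(40):                             # 40 crossings of different durations
            n = rng.integers(50, 400)
            t = np.sort(rng.uniform(0, rng.uniform(60, 600), n))
            signal = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.05) + rng.normal(0, 0.1, n)
            profiles.append(normalize_crossing(t, signal))

        mean_profile = np.mean(profiles, axis=0)        # average power vs. fraction of crossing
        print(mean_profile[::20].round(2))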

  7. Using Algal Metrics and Biomass to Evaluate Multiple Ways of Defining Concentration-Based Nutrient Criteria in Streams and their Ecological Relevance

    EPA Science Inventory

    We examined the utility of nutrient criteria derived solely from total phosphorus (TP) concentrations in streams (regression models and percentile distributions) and evaluated their ecological relevance to diatom and algal biomass responses. We used a variety of statistics to cha...

  8. Biometrics in the Medical School Curriculum: Making the Necessary Relevant.

    ERIC Educational Resources Information Center

    Murphy, James R.

    1980-01-01

    Because a student is more likely to learn and retain course content perceived as relevant, an attempt was made to change medical students' perceptions of a biometrics course by introducing statistical methods as a means of solving problems in the interpretation of clinical lab data. Retrospective analysis of student course evaluations indicates a…

  9. Spatially characterizing visitor use and its association with informal trails in Yosemite Valley meadows.

    PubMed

    Walden-Schreiner, Chelsey; Leung, Yu-Fai

    2013-07-01

    Ecological impacts associated with nature-based recreation and tourism can compromise park and protected area goals if left unrestricted. Protected area agencies are increasingly incorporating indicator-based management frameworks into their management plans to address visitor impacts. Development of indicators requires empirical evaluation of indicator measures and examining their ecological and social relevance. This study addresses the development of the informal trail indicator in Yosemite National Park by spatially characterizing visitor use in open landscapes and integrating use patterns with informal trail condition data to examine their spatial association. Informal trail and visitor use data were collected concurrently during July and August of 2011 in three, high-use meadows of Yosemite Valley. Visitor use was clustered at statistically significant levels in all three study meadows. Spatial data integration found no statistically significant differences between use patterns and trail condition class. However, statistically significant differences were found between the distance visitors were observed from informal trails and visitor activity type with active activities occurring closer to trail corridors. Gender was also found to be significant with male visitors observed further from trail corridors. Results highlight the utility of integrated spatial analysis in supporting indicator-based monitoring and informing management of open landscapes. Additional variables for future analysis and methodological improvements are discussed.

  10. Spatially Characterizing Visitor Use and Its Association with Informal Trails in Yosemite Valley Meadows

    NASA Astrophysics Data System (ADS)

    Walden-Schreiner, Chelsey; Leung, Yu-Fai

    2013-07-01

    Ecological impacts associated with nature-based recreation and tourism can compromise park and protected area goals if left unrestricted. Protected area agencies are increasingly incorporating indicator-based management frameworks into their management plans to address visitor impacts. Development of indicators requires empirical evaluation of indicator measures and examining their ecological and social relevance. This study addresses the development of the informal trail indicator in Yosemite National Park by spatially characterizing visitor use in open landscapes and integrating use patterns with informal trail condition data to examine their spatial association. Informal trail and visitor use data were collected concurrently during July and August of 2011 in three, high-use meadows of Yosemite Valley. Visitor use was clustered at statistically significant levels in all three study meadows. Spatial data integration found no statistically significant differences between use patterns and trail condition class. However, statistically significant differences were found between the distance visitors were observed from informal trails and visitor activity type with active activities occurring closer to trail corridors. Gender was also found to be significant with male visitors observed further from trail corridors. Results highlight the utility of integrated spatial analysis in supporting indicator-based monitoring and informing management of open landscapes. Additional variables for future analysis and methodological improvements are discussed.

  11. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    NASA Astrophysics Data System (ADS)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and microstructure and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations were carried out as a basis for a statistical analysis of the particle configurations. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different arrangements of the particles inside the matrix were prepared. The X-μCT results were processed with image analysis software to determine the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
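
    For illustration, the snippet below computes a radial pair correlation function g(r) for random particle centroids (edge effects ignored); it stands in for the statistic mentioned above but uses no X-μCT data.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(12)
        L, n = 100.0, 400                                   # box size and particle count (arbitrary units)
        pos = rng.uniform(0, L, (n, 3))

        d = pdist(pos)                                      # all pairwise centre-to-centre distances
        bins = np.linspace(0.5, 25, 50)
        counts, edges = np.histogram(d, bins=bins)
        r = 0.5 * (edges[1:] + edges[:-1])
        shell_vol = 4 * np.pi * r**2 * np.diff(edges)
        density = n / L**3
        g_r = counts / (0.5 * n * density * shell_vol)      # normalise by the ideal-gas expectation
        print(g_r.round(2))                                 # close to 1 for an uncorrelated distribution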

  12. Statistical physics of crime: a review.

    PubMed

    D'Orsogna, Maria R; Perc, Matjaž

    2015-03-01

    Containing the spread of crime in urban societies remains a major challenge. Empirical evidence suggests that, if left unchecked, crimes may be recurrent and proliferate. On the other hand, eradicating a culture of crime may be difficult, especially under extreme social circumstances that impair the creation of a shared sense of social responsibility. Although our understanding of the mechanisms that drive the emergence and diffusion of crime is still incomplete, recent research highlights applied mathematics and methods of statistical physics as valuable theoretical resources that may help us better understand criminal activity. We review different approaches aimed at modeling and improving our understanding of crime, focusing on the nucleation of crime hotspots using partial differential equations, self-exciting point process and agent-based modeling, adversarial evolutionary games, and the network science behind the formation of gangs and large-scale organized crime. We emphasize that statistical physics of crime can relevantly inform the design of successful crime prevention strategies, as well as improve the accuracy of expectations about how different policing interventions should impact malicious human activity that deviates from social norms. We also outline possible directions for future research, related to the effects of social and coevolving networks and to the hierarchical growth of criminal structures due to self-organization. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Velopharyngeal function of patients with cleft palate after primary palatoplasty: relevance of sex, age, and cleft type.

    PubMed

    Yang, Yunqiang; Li, Yang; Wu, Yeke; Gu, Yifei; Yin, Heng; Long, Hu; Shi, Bing; Zheng, Qian

    2013-05-01

    The aim of this study was to investigate the relevance of sex, age, and cleft type to velopharyngeal function after primary Sommerlad palatoplasty so as to improve velopharyngeal function after the procedure. Records of 503 patients with nonsyndromic cleft palate after primary Sommerlad palatoplasty were included in the retrospective study. The relevance of sex, age, and cleft type to their velopharyngeal function was analyzed. Statistical analysis was performed using SPSS 13.0 (SPSS Inc., Chicago, IL). There were no significant differences in velopharyngeal competence (VPC) rates between sexes (P = 0.635). Specifically, VPC rates were significantly higher in the younger-than-2-years groups than in older age groups (P < 0.05) and significantly lower in the 6-years-or-older group (P < 0.05). No differences were found among the 2- to 6-year-old groups (P > 0.05). The VPC rates were significantly lower in the bilateral complete cleft palate and the unilateral complete cleft palate than in the incomplete cleft palate before 2 years of age (P < 0.05), whereas there were no significant differences overall (P = 0.875). Results showed that the disparity of the VPC rate among different cleft types would decrease with age. Moreover, results of multivariate logistic regression also indicated that operation age and cleft type are factors influencing velopharyngeal function. Primary palatoplasty should be completed before 2 years of age, and postoperative velopharyngeal function decreases greatly after 6 years of age. The influence of cleft type on velopharyngeal function is limited to young patients. For those who have missed the best surgical timing, appropriate delay of operation age is reasonable, especially for patients with complete cleft palate. For patients 4 to 6 years old, the first choice is still simple palatoplasty no matter which cleft type they are classified into.

  14. Automated MRI parcellation of the frontal lobe.

    PubMed

    Ranta, Marin E; Chen, Min; Crocetti, Deana; Prince, Jerry L; Subramaniam, Krish; Fischl, Bruce; Kaufmann, Walter E; Mostofsky, Stewart H

    2014-05-01

    Examination of associations between specific disorders and physical properties of functionally relevant frontal lobe sub-regions is a fundamental goal in neuropsychiatry. Here, we present and evaluate automated methods of frontal lobe parcellation with the programs FreeSurfer(FS) and TOADS-CRUISE(T-C), based on the manual method described in Ranta et al. [2009]: Psychiatry Res 172:147-154 in which sulcal-gyral landmarks were used to manually delimit functionally relevant regions within the frontal lobe: i.e., primary motor cortex, anterior cingulate, deep white matter, premotor cortex regions (supplementary motor complex, frontal eye field, and lateral premotor cortex) and prefrontal cortex (PFC) regions (medial PFC, dorsolateral PFC, inferior PFC, lateral orbitofrontal cortex [OFC] and medial OFC). Dice's coefficient, a measure of overlap, and percent volume difference were used to measure the reliability between manual and automated delineations for each frontal lobe region. For FS, mean Dice's coefficient for all regions was 0.75 and percent volume difference was 21.2%. For T-C the mean Dice's coefficient was 0.77 and the mean percent volume difference for all regions was 20.2%. These results, along with a high degree of agreement between the two automated methods (mean Dice's coefficient = 0.81, percent volume difference = 12.4%) and a proof-of-principle group difference analysis that highlights the consistency and sensitivity of the automated methods, indicate that the automated methods are valid techniques for parcellation of the frontal lobe into functionally relevant sub-regions. Thus, the methodology has the potential to increase efficiency, statistical power and reproducibility for population analyses of neuropsychiatric disorders with hypothesized frontal lobe contributions. Copyright © 2013 Wiley Periodicals, Inc.
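
    The two agreement measures used in this record are easy to state in code; the toy example below computes Dice's coefficient and the percent volume difference between a "manual" and a perturbed "automated" binary label volume. The volumes are synthetic.

        import numpy as np

        rng = np.random.default_rng(13)
        manual = rng.random((40, 40, 40)) < 0.1
        automated = manual.copy()
        flip = rng.random(manual.shape) < 0.02          # perturb a few voxels to mimic disagreement
        automated[flip] = ~automated[flip]

        intersection = np.logical_and(manual, automated).sum()
        dice = 2.0 * intersection / (manual.sum() + automated.sum())
        pct_vol_diff = 100.0 * abs(int(manual.sum()) - int(automated.sum())) / manual.sum()
        print(f"Dice = {dice:.3f}, % volume difference = {pct_vol_diff:.1f}")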

  15. Development of ecological indicator guilds for land management

    USGS Publications Warehouse

    Krzysik, A.J.; Balbach, H.E.; Duda, J.J.; Emlen, J.M.; Freeman, D.C.; Graham, J.H.; Kovacic, D.A.; Smith, L.M.; Zak, J.C.

    2005-01-01

    Agency land-use must be efficiently and cost-effectively monitored to assess conditions and trends in ecosystem processes and natural resources relevant to mission requirements and legal mandates. Ecological Indicators represent important land management tools for tracking ecological changes and preventing irreversible environmental damage in disturbed landscapes. The overall objective of the research was to develop both individual and integrated sets (i.e., statistically derived guilds) of Ecological Indicators to: quantify habitat conditions and trends, track and monitor ecological changes, provide early warning or threshold detection, and provide guidance for land managers. The derivation of Ecological Indicators was based on statistical criteria, ecosystem relevance, reliability and robustness, economy and ease of use for land managers, multi-scale performance, and stress response criteria. The basis for the development of statistically based Ecological Indicators was the identification of ecosystem metrics that analytically tracked a landscape disturbance gradient.

  16. Successful classification of cocaine dependence using brain imaging: a generalizable machine learning approach.

    PubMed

    Mete, Mutlu; Sakoglu, Unal; Spence, Jeffrey S; Devous, Michael D; Harris, Thomas S; Adinoff, Bryon

    2016-10-06

    Neuroimaging studies have yielded significant advances in the understanding of neural processes relevant to the development and persistence of addiction. However, these advances have not been explored extensively for diagnostic accuracy in human subjects. The aim of this study was to develop a statistical approach, using a machine learning framework, to correctly classify brain images of cocaine-dependent participants and healthy controls. In this study, a framework suitable for educing potential brain regions that differed between the two groups was developed and implemented. Single Photon Emission Computerized Tomography (SPECT) images obtained during rest or a saline infusion in three cohorts of 2-4 week abstinent cocaine-dependent participants (n = 93) and healthy controls (n = 69) were used to develop a classification model. An information theoretic-based feature selection algorithm was first conducted to reduce the number of voxels. A density-based clustering algorithm was then used to form spatially connected voxel clouds in three-dimensional space. A statistical classifier, Support Vector Machine (SVM), was then used for participant classification. Statistically insignificant voxels of spatially connected brain regions were removed iteratively and classification accuracy was reported through the iterations. The voxel-based analysis identified 1,500 spatially connected voxels in 30 distinct clusters after a grid search in SVM parameters. Participants were successfully classified with 0.88 and 0.89 F-measure accuracies in 10-fold cross validation (10xCV) and leave-one-out (LOO) approaches, respectively. Sensitivity and specificity were 0.90 and 0.89 for LOO; 0.83 and 0.83 for 10xCV. Many of the 30 selected clusters are highly relevant to the addictive process, including regions relevant to cognitive control, default mode network related self-referential thought, behavioral inhibition, and contextual memories. Relative hyperactivity and hypoactivity of regional cerebral blood flow in brain regions in cocaine-dependent participants are presented with corresponding levels of significance. The SVM-based approach successfully classified cocaine-dependent and healthy control participants using voxels selected with information theoretic-based and statistical methods from participants' SPECT data. The regions found in this study align with brain regions reported in the literature. These findings support the future use of brain imaging and SVM-based classifiers in the diagnosis of substance use disorders and furthering an understanding of their underlying pathology.
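
    As a hedged sketch of the first stage of such a pipeline (information-theoretic voxel ranking before classification), the example below uses scikit-learn's mutual-information scorer inside a cross-validated pipeline on random data standing in for SPECT images; it omits the density-based clustering step and is not the authors' implementation.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(15)
        n_voxels = 2000
        X = rng.normal(size=(162, n_voxels))              # 93 patients + 69 controls, synthetic "SPECT"
        y = np.array([1] * 93 + [0] * 69)
        X[y == 1, :40] += 0.8                             # a few genuinely informative voxels

        pipe = make_pipeline(SelectKBest(mutual_info_classif, k=500), SVC(kernel="linear"))
        acc = cross_val_score(pipe, X, y, cv=10)          # selection is refit inside each fold
        print("10-fold CV accuracy:", round(acc.mean(), 2))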

  17. Applying a multiobjective metaheuristic inspired by honey bees to phylogenetic inference.

    PubMed

    Santander-Jiménez, Sergio; Vega-Rodríguez, Miguel A

    2013-10-01

    The development of increasingly popular multiobjective metaheuristics has allowed bioinformaticians to deal with optimization problems in computational biology where multiple objective functions must be taken into account. One of the most relevant research topics that can benefit from these techniques is phylogenetic inference. Throughout the years, different researchers have proposed their own view about the reconstruction of ancestral evolutionary relationships among species. As a result, biologists often report different phylogenetic trees from a same dataset when considering distinct optimality principles. In this work, we detail a multiobjective swarm intelligence approach based on the novel Artificial Bee Colony algorithm for inferring phylogenies. The aim of this paper is to propose a complementary view of phylogenetics according to the maximum parsimony and maximum likelihood criteria, in order to generate a set of phylogenetic trees that represent a compromise between these principles. Experimental results on a variety of nucleotide data sets and statistical studies highlight the relevance of the proposal with regard to other multiobjective algorithms and state-of-the-art biological methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
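
    A tiny illustration of the multiobjective idea, with invented scores: given a parsimony score and a negative log-likelihood for each candidate tree (both to be minimised), keep the non-dominated (Pareto) set that represents the compromise between the two criteria.

        import numpy as np

        rng = np.random.default_rng(16)
        scores = rng.random((30, 2))            # columns: parsimony score, negative log-likelihood

        def pareto_front(points):
            keep = []
            for i, p in enumerate(points):
                # p is dominated if some other point is no worse on both criteria and better on one
                dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        front = pareto_front(scores)
        print(f"{len(front)} non-dominated candidate trees out of {len(scores)}")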

  18. Stakeholder opinion of functional communication activities following traumatic brain injury.

    PubMed

    Larkins, B M; Worrall, L E; Hickson, L M

    2004-07-01

    To establish a process whereby assessment of functional communication reflects the authentic communication of the target population. The major functional communication assessments available from the USA may not be as relevant to those who reside elsewhere, nor may assessments developed primarily for persons who have had a stroke be as relevant for traumatic brain injury rehabilitation. The investigation used the Nominal Group Technique to elicit free opinion and to support individuals who have compromised communication ability. A mailed survey sampled a larger number of stakeholders to test for differences among groups. Five stakeholder groups generated items and the survey determined their relative 'importance'. The stakeholder groups in both studies comprised individuals with traumatic brain injury and their families, health professionals, third-party payers, employers, and Maori, the indigenous population of New Zealand. There was no statistically significant difference found between groups for 19 of the 31 items. Only half of the items appear explicitly on a well-known USA functional communication assessment. The present study has implications for whether functional communication assessments are valid across cultures and the type of impairment.

  19. Modularity and the spread of perturbations in complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Kolchinsky, Artemy; Gates, Alexander J.; Rocha, Luis M.

    2015-12-01

    We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.

  20. Comparison of Selected Weather Translation Products

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    2017-01-01

    Weather is a primary contributor to air traffic delays within the National Airspace System (NAS). At present, it is the individual decision makers who use weather information and assess its operational impact in creating effective air traffic management solutions. As a result, the estimation of the impact of forecast weather and the quality of the ATM response rely on the skill and experience level of the decision maker. FAA Weather-ATM working groups have developed a Weather-ATM integration framework that consists of weather collection, weather translation, ATM impact conversion and ATM decision support. Some weather translation measures have been developed for hypothetical operations such as decentralized free flight, whereas others are meant to be relevant in current operations. This paper presents a comparative study of two different weather translation products relevant to current operations and finds that these products are strongly correlated with each other. Given inaccuracies in weather prediction, the remaining differences would not be expected to be significant in a statistical study of a large number of decisions made with a look-ahead time of two hours or more.

  1. Modularity and the spread of perturbations in complex dynamical systems.

    PubMed

    Kolchinsky, Artemy; Gates, Alexander J; Rocha, Luis M

    2015-12-01

    We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.

  2. A possible parameter for gait clinimetric evaluation in Parkinson’s disease patients

    NASA Astrophysics Data System (ADS)

    Lescano, C. N.; Rodrigo, S. E.; Christian, D. A.

    2016-04-01

    The strength and usefulness of a rating scale for describing disease evolution rely on the accurate determination of variations representing clinically relevant changes. In this sense, the widely used Hoehn-Yahr (HY) Scale for Parkinson's Disease (PD), in its modified version, distinguishes between stages 2 and 2.5 to indicate whether bilateral involvement is accompanied by impaired body balance. Nevertheless, this scaling does not allow the symptoms and signs associated with each stage to be differentiated accurately. Considering this difference, this work analyzes several gait parameters (stance and swing phase times and the magnitude of the vertical component of the ground reaction force during the gait cycle) of PD patients classified as HY=2 and HY=2.5, in contrast with healthy subjects (HY=0), to assess whether there is a statistically significant difference among these HY categories. For all gait parameters evaluated, the results indicated significant differences between HY=0 and HY=2.5. However, only the magnitude of the vertical component of the ground reaction force presented relevant differences between HY=2 and 2.5. These results therefore show the potential of this parameter to identify clinimetrically the level of gait impairment/disability in PD patients on the Hoehn-Yahr Scale.

  3. Determining Primary Care Physician Information Needs to Inform Ambulatory Visit Note Display

    PubMed Central

    Clarke, M.A.; Steege, L.M.; Moore, J.L.; Koopman, R.J.; Belden, J.L.; Kim, M.S.

    2014-01-01

    Summary Background With the increase in the adoption of electronic health records (EHR) across the US, primary care physicians are experiencing information overload. The purpose of this pilot study was to determine the information needs of primary care physicians (PCPs) as they review clinic visit notes to inform EHR display. Method Data collection was conducted with 15 primary care physicians during semi-structured interviews, including a third party observer to control bias. Physicians reviewed major sections of an artificial but typical acute and chronic care visit note to identify the note sections that were relevant to their information needs. Statistical methods used were McNemar-Mosteller’s and Cochran Q. Results Physicians identified History of Present Illness (HPI), Assessment, and Plan (A&P) as the most important sections of a visit note. In contrast, they largely judged the Review of Systems (ROS) to be superfluous. There was also a statistical difference in physicians’ highlighting among all seven major note sections in acute (p = 0.00) and chronic (p = 0.00) care visit notes. Conclusion A&P and HPI sections were most frequently identified as important which suggests that physicians may have to identify a few key sections out of a long, unnecessarily verbose visit note. ROS is viewed by doctors as mostly “not needed,” but can have relevant information. The ROS can contain information needed for patient care when other sections of the Visit note, such as the HPI, lack the relevant information. Future studies should include producing a display that provides only relevant information to increase physician efficiency at the point of care. Also, research on moving A&P to the top of visit notes instead of having A&P at the bottom of the page is needed, since those are usually the first sections physicians refer to and reviewing from top to bottom may cause cognitive load. PMID:24734131
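
    A hedged sketch of the kind of test named above: Cochran's Q for related binary judgments (section relevant / not relevant) across several note sections. The physician count, section names and highlighting rates below are invented for the example; only the test itself follows the standard formula.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical binary matrix: rows = physicians, columns = note sections,
# 1 = section judged relevant, 0 = not (all data invented for illustration).
rng = np.random.default_rng(1)
sections = ["HPI", "ROS", "Exam", "Assessment", "Plan"]
p_relevant = [0.90, 0.20, 0.50, 0.95, 0.90]        # assumed per-section rates
X = (rng.random((15, len(sections))) < p_relevant).astype(int)

def cochrans_q(x):
    """Cochran's Q test for equal 'relevance' proportions across related binary measures."""
    b, k = x.shape
    col = x.sum(axis=0)                             # per-section totals
    row = x.sum(axis=1)                             # per-physician totals
    N = x.sum()
    q = k * (k - 1) * ((col - N / k) ** 2).sum() / (k * N - (row ** 2).sum())
    return q, chi2.sf(q, k - 1)

q, p = cochrans_q(X)
print(f"Cochran's Q = {q:.2f}, p = {p:.4f}")        # a small p suggests sections differ
```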

  4. Pilot study for the registry of complications in rheumatic diseases from the German Society of Surgery (DGORh): evaluation of methods and data from the first 1000 patients.

    PubMed

    Kostuj, Tanja; Rehart, Stefan; Matta-Hurtado, Ronald; Biehl, Christoph; Willburger, Roland E; Schmidt, Klaus

    2017-10-10

    Most patients suffering from rheumatic diseases who undergo surgical treatment are receiving immune-modulating therapy. To determine whether these medications affect their outcomes, a national registry was established in Germany by the German Society of Surgery (DGORh). Data from the first 1000 patients were used in a pilot study to identify relevant co-risk factors and to determine whether such a registry is suitable for developing accurate and relevant recommendations. Data were collected from patients undergoing surgical treatments with their written consent. A second consent form was used if complications occurred. During this pilot study, in order to obtain a quicker overview, risk factors were considered only in patients with complications. Only descriptive statistical analysis was employed in this pilot study due to the limited number of observed complications and the inhomogeneous data regarding the surgeries and medications the patients received. Analytical statistics will be performed to confirm the results in a future outcome study. Complications occurred in 26 patients and were distributed equally among the different types of surgeries. Twenty-one of these patients were receiving immune-modulating therapy at the time, while five were not. Infections were observed in 2.3% of patients receiving and in 5.1% of patients not receiving immunosuppression. Due to the low number of cases and the inhomogeneity of the diseases and treatments received by the patients in this pilot study, it is not possible to develop standardised best-practice recommendations to optimise their care. Based on this observation, we conclude that, to be suitable for developing accurate and relevant recommendations, a national registry must include the most important and relevant variables that impact the care and outcomes of these patients. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Biomechanical influence of crown-to-implant ratio on stress distribution over internal hexagon short implant: 3-D finite element analysis with statistical test.

    PubMed

    Ramos Verri, Fellippo; Santiago Junior, Joel Ferreira; de Faria Almeida, Daniel Augusto; de Oliveira, Guilherme Bérgamo Brandão; de Souza Batista, Victor Eduardo; Marques Honório, Heitor; Noritomi, Pedro Yoshito; Pellizzer, Eduardo Piza

    2015-01-02

    The study of short implants is relevant to the biomechanics of dental implants, and research on increased crown height has implications for daily clinical practice. The aim of this study was to analyze the biomechanical interactions of a single implant-supported prosthesis with different crown heights under vertical and oblique forces, using the 3-D finite element method. Six 3-D models were designed with Invesalius 3.0, Rhinoceros 3D 4.0, and Solidworks 2010 software. Each model was constructed with a mandibular segment of bone block, including an implant supporting a screwed metal-ceramic crown. The crown height was set at 10, 12.5, and 15 mm. The applied forces were 200 N (axial) and 100 N (oblique). We performed ANOVA and Tukey tests; p<0.05 was considered statistically significant. The increase in crown height did not influence the stress distribution on the prosthetic screw (p>0.05) under axial load. However, crown heights of 12.5 and 15 mm significantly worsened the stress distribution on the screws and the cortical bone (p<0.001) under oblique load. A high crown-to-implant (C/I) ratio worsened the microstrain distribution in bone tissue under axial and oblique loads (p<0.001). Increased crown height was a possible deleterious factor for the screws and for the different regions of bone tissue. Copyright © 2014 Elsevier Ltd. All rights reserved.
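
    The statistical step described above (one-way ANOVA followed by Tukey's post-hoc comparisons) can be sketched as follows; the peak-stress numbers are invented placeholders, not the study's finite-element results.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Hypothetical peak microstrain values for crown heights of 10, 12.5 and 15 mm (invented).
stress_10  = rng.normal(1500, 80, 12)
stress_125 = rng.normal(1750, 80, 12)
stress_15  = rng.normal(1950, 80, 12)

f_stat, p_val = f_oneway(stress_10, stress_125, stress_15)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate([stress_10, stress_125, stress_15])
groups = ["10 mm"] * 12 + ["12.5 mm"] * 12 + ["15 mm"] * 12
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise Tukey HSD comparisons
```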

  6. Uniting statistical and individual-based approaches for animal movement modelling.

    PubMed

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through field work and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method that combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the individual-based model (IBM) has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and validation against emergent patterns, and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.

  7. On the Statistical Properties of Cospectra

    NASA Astrophysics Data System (ADS)

    Huppenkothen, D.; Bachetti, M.

    2018-05-01

    In recent years, the cross-spectrum has received considerable attention as a means of characterizing the variability of astronomical sources as a function of wavelength. The cospectrum has only recently been understood as a means of mitigating instrumental effects dependent on temporal frequency in astronomical detectors, as well as a method of characterizing the coherent variability in two wavelength ranges on different timescales. In this paper, we lay out the statistical foundations of the cospectrum, starting with the simplest case of detecting a periodic signal in the presence of white noise, under the assumption that the same source is observed simultaneously in independent detectors in the same energy range. This case is especially relevant for detecting faint X-ray pulsars in detectors heavily affected by instrumental effects, including NuSTAR, Astrosat, and IXPE, which allow for even sampling and where the cospectrum can act as an effective way to mitigate dead time. We show that the statistical distributions of both single and averaged cospectra differ considerably from those for standard periodograms. While a single cospectrum follows a Laplace distribution exactly, averaged cospectra are approximated by a Gaussian distribution only for more than ∼30 averaged segments, dependent on the number of trials. We provide an instructive example of a quasi-periodic oscillation in NuSTAR and show that applying standard periodogram statistics leads to underestimated tail probabilities for period detection. We also demonstrate the application of these distributions to a NuSTAR observation of the X-ray pulsar Hercules X-1.
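
    The Laplace behaviour of single cospectra can be reproduced in a few lines: for two independent white-noise series (the no-signal null case), the real part of the cross-spectrum at each frequency is a sum of two products of independent Gaussians, which follows a Laplace distribution. The simulation below is an illustrative sketch with arbitrary segment length and normalization, not the paper's analysis.

```python
import numpy as np
from scipy.stats import laplace, kstest

rng = np.random.default_rng(3)
n, n_seg = 1024, 500
cospec = []
for _ in range(n_seg):
    # Two independent white-noise light curves (null case: no shared signal).
    x1 = rng.normal(0.0, 1.0, n)
    x2 = rng.normal(0.0, 1.0, n)
    f1 = np.fft.rfft(x1)[1:-1]            # drop the DC and Nyquist bins
    f2 = np.fft.rfft(x2)[1:-1]
    cospec.append(np.real(f1 * np.conj(f2)))
cospec = np.concatenate(cospec) / (n / 2)  # normalize so the null scale is ~1

# Single cospectrum values are well described by a Laplace distribution,
# unlike the chi-squared-distributed powers of an ordinary periodogram.
loc, scale = laplace.fit(cospec)
print(f"Laplace fit: loc = {loc:.3f}, scale = {scale:.3f}")
print(kstest(cospec, "laplace", args=(loc, scale)))
```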

  8. Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling

    PubMed Central

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through field work and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method that combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the individual-based model (IBM) has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and validation against emergent patterns, and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems. PMID:24979047

  9. Attitudes towards statistics of graduate entry medical students: the role of prior learning experiences

    PubMed Central

    2014-01-01

    Background While statistics is increasingly taught as part of the medical curriculum, it can be an unpopular subject and feedback from students indicates that some find it more difficult than other subjects. Understanding attitudes towards statistics on entry to graduate entry medical programmes is particularly important, given that many students may have been exposed to quantitative courses in their previous degree and hence bring preconceptions of their ability and interest to their medical education programme. The aim of this study therefore is to explore, for the first time, attitudes towards statistics of graduate entry medical students from a variety of backgrounds and focus on understanding the role of prior learning experiences. Methods 121 first year graduate entry medical students completed the Survey of Attitudes toward Statistics instrument together with information on demographics and prior learning experiences. Results Students tended to appreciate the relevance of statistics in their professional life and be prepared to put effort into learning statistics. They had neutral to positive attitudes about their interest in statistics and their intellectual knowledge and skills when applied to it. Their feelings towards statistics were slightly less positive e.g. feelings of insecurity, stress, fear and frustration and they tended to view statistics as difficult. Even though 85% of students had taken a quantitative course in the past, only 24% of students described it as likely that they would take any course in statistics if the choice was theirs. How well students felt they had performed in mathematics in the past was a strong predictor of many of the components of attitudes. Conclusion The teaching of statistics to medical students should start with addressing the association between students’ past experiences in mathematics and their attitudes towards statistics and encouraging students to recognise the difference between the two disciplines. Addressing these issues may reduce students’ anxiety and perception of difficulty at the start of their learning experience and encourage students to engage with statistics in their future careers. PMID:24708762

  10. Attitudes towards statistics of graduate entry medical students: the role of prior learning experiences.

    PubMed

    Hannigan, Ailish; Hegarty, Avril C; McGrath, Deirdre

    2014-04-04

    While statistics is increasingly taught as part of the medical curriculum, it can be an unpopular subject and feedback from students indicates that some find it more difficult than other subjects. Understanding attitudes towards statistics on entry to graduate entry medical programmes is particularly important, given that many students may have been exposed to quantitative courses in their previous degree and hence bring preconceptions of their ability and interest to their medical education programme. The aim of this study therefore is to explore, for the first time, attitudes towards statistics of graduate entry medical students from a variety of backgrounds and focus on understanding the role of prior learning experiences. 121 first year graduate entry medical students completed the Survey of Attitudes toward Statistics instrument together with information on demographics and prior learning experiences. Students tended to appreciate the relevance of statistics in their professional life and be prepared to put effort into learning statistics. They had neutral to positive attitudes about their interest in statistics and their intellectual knowledge and skills when applied to it. Their feelings towards statistics were slightly less positive e.g. feelings of insecurity, stress, fear and frustration and they tended to view statistics as difficult. Even though 85% of students had taken a quantitative course in the past, only 24% of students described it as likely that they would take any course in statistics if the choice was theirs. How well students felt they had performed in mathematics in the past was a strong predictor of many of the components of attitudes. The teaching of statistics to medical students should start with addressing the association between students' past experiences in mathematics and their attitudes towards statistics and encouraging students to recognise the difference between the two disciplines. Addressing these issues may reduce students' anxiety and perception of difficulty at the start of their learning experience and encourage students to engage with statistics in their future careers.

  11. Process-informed extreme value statistics- Why and how?

    NASA Astrophysics Data System (ADS)

    Schumann, Andreas; Fischer, Svenja

    2017-04-01

    In many parts of the world, annual maximum series (AMS) of runoff consist of flood peaks that differ in their genesis. There are several reasons why these differences should be considered. Often, multivariate flood characteristics (volumes, shapes) are of interest, and these characteristics depend on the flood types. For regionalization, the main impacts on the flood regime have to be specified; if this regime depends on different flood types, type-specific hydro-meteorological and/or watershed characteristics are relevant. Moreover, the ratios between event types often change over the range of observations: if a majority of events belonging to a certain flood type dominates the extrapolation of a probability distribution function (pdf), this is a problem when that more frequent type is not typical of the extraordinarily large extremes that determine the right tail of the pdf. To account for differences in flood origin, several problems have to be solved. The events have to be separated into groups according to their genesis, which can be difficult for events far in the past where, e.g., precipitation data are not available. Another problem lies in the flood type-specific statistics: if block maxima are used, the sample of floods belonging to a certain type is often incomplete because larger events of other types mask smaller ones. Some practical, usable statistical tools to solve these problems are presented in a case study. Seasonal models were developed that differentiate between winter and summer floods as well as between events with long and short timescales. The pdfs of the two groups of summer floods are combined via a new mixing model. The application to German watersheds demonstrates the advantages of the new model, which gives specific weight to flood types.
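
    One standard way to make flood-frequency statistics type-aware, sketched below with invented seasonal samples, is to fit a separate distribution to each flood type and combine them under an independence assumption (the annual-maximum CDF is then the product of the seasonal CDFs). This is a generic illustration, not the specific mixing model developed in the study.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
# Invented seasonal annual-maximum samples (m^3/s): winter floods smaller but regular,
# summer (convective) floods rarer but heavier-tailed.
winter_max = gumbel_r.rvs(loc=120, scale=30, size=60, random_state=rng)
summer_max = gumbel_r.rvs(loc=90,  scale=55, size=60, random_state=rng)

# Fit a distribution to each flood type separately.
w_loc, w_scale = gumbel_r.fit(winter_max)
s_loc, s_scale = gumbel_r.fit(summer_max)

def annual_cdf(x):
    """Annual-maximum CDF assuming independent seasonal maxima:
    P(annual max <= x) = P(winter max <= x) * P(summer max <= x)."""
    return gumbel_r.cdf(x, w_loc, w_scale) * gumbel_r.cdf(x, s_loc, s_scale)

# 100-year flood from the type-aware model vs. a single fit to pooled annual maxima.
grid = np.linspace(50, 800, 5000)
q100_mixed = grid[np.searchsorted(annual_cdf(grid), 1 - 1 / 100)]
pooled = np.maximum(winter_max, summer_max)          # annual maxima, types pooled
p_loc, p_scale = gumbel_r.fit(pooled)
q100_pooled = gumbel_r.ppf(1 - 1 / 100, p_loc, p_scale)
print(f"100-year flood: type-aware {q100_mixed:.0f} vs pooled {q100_pooled:.0f} m^3/s")
```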

  12. Characterization of Noise Signatures of Involuntary Head Motion in the Autism Brain Imaging Data Exchange Repository

    PubMed Central

    Caballero, Carla; Mistry, Sejal; Vero, Joe; Torres, Elizabeth B

    2018-01-01

    The variability inherently present in biophysical data is partly contributed by disparate sampling resolutions across instrumentations. This poses a potential problem for statistical inference using pooled data in open access repositories. Such repositories combine data collected from multiple research sites using variable sampling resolutions. One example is the Autism Brain Imaging Data Exchange repository containing thousands of imaging and demographic records from participants in the spectrum of autism and age-matched neurotypical controls. Further, statistical analyses of groups from different diagnoses and demographics may be challenging, owing to the disparate number of participants across different clinical subgroups. In this paper, we examine the noise signatures of head motion data extracted from resting state fMRI data harnessed under different sampling resolutions. We characterize the quality of the noise in the variability of the raw linear and angular speeds for different clinical phenotypes in relation to age-matched controls. Further, we use bootstrapping methods to ensure compatible group sizes for statistical comparison and report the ranges of physical involuntary head excursions of these groups. We conclude that different sampling rates do affect the quality of noise in the variability of head motion data and, consequently, the type of random process appropriate to characterize the time series data. Further, given a qualitative range of noise, from pink to brown noise, it is possible to characterize different clinical subtypes and distinguish them in relation to ranges of neurotypical controls. These results may be of relevance to the pre-processing stages of the pipeline of analyses of resting state fMRI data, whereby head motion enters the criteria to clean imaging data from motion artifacts. PMID:29556179

  13. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

    of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve the fault frequency detection. For future investigations, one can further explore the

  14. Accurate and robust genomic prediction of celiac disease using statistical learning.

    PubMed

    Abraham, Gad; Tye-Din, Jason A; Bhalala, Oneil G; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael

    2014-02-01

    Practical application of genomic-based risk stratification to clinical diagnosis is appealing yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity was replicated both in cross-validation within each cohort (AUC of 0.87-0.89) and in independent replication across cohorts (AUC of 0.86-0.9), despite differences in ethnicity. The models explained 30-35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those who would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate that a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
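
    The core mechanics of a genomic risk score (a weighted sum of SNP dosages, thresholded to rule disease out with high negative predictive value) can be sketched as follows. All data, effect sizes and cut-offs are simulated; the authors' actual statistical-learning model is not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, n_snps = 2000, 200
# Simulated SNP dosages (0/1/2) and effect sizes for a toy disease model (all invented).
geno = rng.binomial(2, 0.3, size=(n, n_snps))
beta = np.zeros(n_snps)
beta[:20] = rng.normal(0.4, 0.1, 20)                         # 20 causal SNPs
liability = geno @ beta + rng.normal(0, 1.5, n)
disease = (liability > np.quantile(liability, 0.9)).astype(int)   # ~10% prevalence

grs = geno @ beta                                            # risk score (weights assumed known here)
print("AUC:", round(roc_auc_score(disease, grs), 3))

# Choose a low cut-off to rule out disease: negative predictive value below the threshold.
cut = np.quantile(grs, 0.3)
below = grs < cut
npv = (disease[below] == 0).mean()
print(f"NPV below cut-off: {npv:.3f} ({below.sum()} individuals ruled out)")
```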

  15. Effects of pH, lactate, hematocrit and potassium level on the accuracy of continuous glucose monitoring (CGM) in pediatric intensive care unit.

    PubMed

    Marics, Gábor; Koncz, Levente; Eitler, Katalin; Vatai, Barbara; Szénási, Boglárka; Zakariás, David; Mikos, Borbála; Körner, Anna; Tóth-Heyn, Péter

    2015-03-19

    Continuous glucose monitoring (CGM) was originally developed for diabetic patients and may be a useful tool for monitoring glucose changes in the pediatric intensive care unit (PICU). Its use is, however, limited by the lack of sufficient data on its reliability under conditions of insufficient peripheral perfusion. We aimed to correlate the accuracy of CGM with laboratory markers relevant to disturbed tissue perfusion. In 38 pediatric patients (age range, 0-18 years) requiring intensive care, we tested the effect of pH, lactate, hematocrit and serum potassium on the difference between CGM and meter glucose measurements. Guardian® (Medtronic®) CGM results were compared to GEM 3000 (Instrumentation Laboratory®) and point-of-care measurements. The clinical accuracy of CGM was evaluated by Clarke Error Grid analysis, Bland-Altman analysis and Pearson's correlation. We used the Friedman test for statistical analysis (statistical significance was established as p < 0.05). CGM values exhibited considerable variability without any correlation with the examined laboratory parameters. Clarke and Bland-Altman analyses and Pearson's correlation coefficient demonstrated good clinical accuracy of CGM (zones A and B = 96%; the mean difference between reference and CGM glucose was 1.3 mg/dL, with 48 of the 780 calibration pairs exceeding 2 standard deviations; Pearson's correlation coefficient: 0.83). The accuracy of CGM measurements is independent of laboratory parameters relevant to tissue hypoperfusion. CGM may prove a reliable tool for continuous monitoring of glucose changes in PICUs, not much influenced by tissue perfusion, but still not appropriate as the basis for clinical decisions.
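
    A minimal sketch of the Bland-Altman part of such an accuracy analysis, on invented paired glucose readings (the bias, noise level and sample size are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(6)
# Invented paired glucose values (mg/dL): laboratory reference vs. CGM sensor.
reference = rng.normal(110, 30, 200)
cgm = reference + rng.normal(1.3, 9.0, 200)      # small bias plus sensor noise (assumed)

diff = cgm - reference
mean_pair = (cgm + reference) / 2                # x-axis of a Bland-Altman plot
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                    # half-width of the limits of agreement

print(f"Bland-Altman bias = {bias:.1f} mg/dL, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}]")
outside = np.abs(diff - bias) > 2 * diff.std(ddof=1)
print(f"{outside.sum()} of {len(diff)} pairs fall outside +/-2 SD of the bias")
print("Pearson r =", round(np.corrcoef(reference, cgm)[0, 1], 3))
```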

  16. Perspectives on statistics education: observations from statistical consulting in an academic nursing environment.

    PubMed

    Hayat, Matthew J; Schmiege, Sarah J; Cook, Paul F

    2014-04-01

    Statistics knowledge is essential for understanding the nursing and health care literature, as well as for applying rigorous science in nursing research. Statistical consultants providing services to faculty and students in an academic nursing program have the opportunity to identify gaps and challenges in statistics education for nursing students. This information may be useful to curriculum committees and statistics educators. This article aims to provide perspective on statistics education stemming from the experiences of three experienced statistics educators who regularly collaborate and consult with nurse investigators. The authors share their knowledge and express their views about data management, data screening and manipulation, statistical software, types of scientific investigation, and advanced statistical topics not covered in the usual coursework. The suggestions provided promote a call for data to study these topics. Relevant data about statistics education can assist educators in developing comprehensive statistics coursework for nursing students. Copyright 2014, SLACK Incorporated.

  17. A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series.

    PubMed

    Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel

    2015-01-01

    Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that the DLPFC strongly differentiated between task stages associated with different memory loads but not between different visual-spatial aspects, whereas the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in multivariate patterns of voxel activity.
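
    The decoding step can be illustrated with a simplified sketch: a linear classifier applied to simulated multivoxel patterns labelled by task stage, with a shuffled-label baseline standing in for the paper's time-series bootstraps. Stage names, voxel counts and signal strengths are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
stages = ["encoding", "choice", "reward", "delay"]
n_per_stage, n_voxels = 80, 50

# Simulated multivoxel patterns: each task stage gets its own mean activity pattern (invented).
X = np.vstack([rng.normal(rng.normal(0, 0.5, n_voxels), 1.0, (n_per_stage, n_voxels))
               for _ in stages])
y = np.repeat(np.arange(len(stages)), n_per_stage)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()

# Crude chance baseline by label shuffling (the study used time-series bootstraps instead).
null = [cross_val_score(clf, X, rng.permutation(y), cv=5).mean() for _ in range(20)]
print(f"stage decoding accuracy: {acc:.2f}, shuffled-label baseline: {np.mean(null):.2f}")
```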

  18. Bayesian Statistics in Educational Research: A Look at the Current State of Affairs

    ERIC Educational Resources Information Center

    König, Christoph; van de Schoot, Rens

    2018-01-01

    The ability of a scientific discipline to build cumulative knowledge depends on its predominant method of data analysis. A steady accumulation of knowledge requires approaches which allow researchers to consider results from comparable prior research. Bayesian statistics is especially relevant for establishing a cumulative scientific discipline,…

  19. Teaching Biology through Statistics: Application of Statistical Methods in Genetics and Zoology Courses

    ERIC Educational Resources Information Center

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A.

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the…

  20. Different characteristics of circular staplers make the difference in anastomotic tensile strength.

    PubMed

    Giaccaglia, V; Antonelli, M S; Franceschilli, L; Salvi, P F; Gaspari, A L; Sileri, P

    2016-01-01

    Anastomotic leak after gastrointestinal surgery is a severe complication associated with relevant short- and long-term sequelae. Most anastomoses are currently performed with a surgical stapler, which is required to have appropriate characteristics in order to guarantee good performance. The aim of our study was to evaluate, ex vivo, the pressure resistance and tensile strength of anastomoses performed with different circular staplers available on the market. We studied 7 circular staplers from 3 different companies, 3 of them used for gastrointestinal anastomosis and 4 staplers for hemorrhoidal prolapse excision. A total of 350 anastomoses, 50 for each of the 7 staplers, were performed using fresh healthy pig intestine; saline solution was then injected and the leaking pressure recorded. There were no statistically significant differences among the instruments in the mean pressure necessary to induce an anastomotic leak (p>0.05). To study tensile strength, we performed a total of 350 anastomoses with the 7 different circular staplers on a special strong paper (Tyvek), and then recorded the maximal tensile force that could open the anastomosis. There were statistically significant differences between one brand's staplers and those of the other 2 companies in the force necessary to open the staple line (p<0.05). In conclusion, we demonstrated that different circular staplers from the three companies available on the market give comparable anastomotic pressure resistance but different tensile strengths. This is probably due to different technical characteristics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
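
    For reference, the quantity being predicted is straightforward to compute from a gauged record; the sketch below builds an empirical flow duration curve and scores a modelled curve with the Nash-Sutcliffe coefficient, using invented streamflow series.

```python
import numpy as np

rng = np.random.default_rng(8)
# Invented daily streamflow series (m^3/s): an observed record and a modelled one.
q_obs = np.exp(rng.normal(1.0, 0.9, 3650))
q_mod = q_obs * np.exp(rng.normal(0.0, 0.25, 3650))   # model with multiplicative error (assumed)

def flow_duration_curve(q):
    """Empirical FDC: flow quantiles against exceedance probability."""
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting positions
    return exceedance, q_sorted

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

p, fdc_obs = flow_duration_curve(q_obs)
_, fdc_mod = flow_duration_curve(q_mod)
print("Q5 / Q50 / Q95 (observed):", np.interp([0.05, 0.5, 0.95], p, fdc_obs).round(2))
print("NSE between the two FDCs:", round(nash_sutcliffe(fdc_obs, fdc_mod), 3))
```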

  2. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.

  3. Recovery-promoting professional competencies: perspectives of mental health consumers, consumer-providers and providers.

    PubMed

    Russinova, Zlatka; Rogers, E Sally; Ellison, Marsha Langer; Lyass, Asya

    2011-01-01

    The purpose of this study was to empirically validate a set of conceptually derived recovery-promoting competencies from the perspectives of mental health consumers, consumer-providers and providers. A national sample of 603 consumers, 153 consumer-providers and 239 providers completed an anonymous survey via the Internet. The survey evaluated respondents' perceptions about a set of 37 competencies hypothesized to enhance clients' hope and empowerment and inquired about interactions with providers that enhanced clients' recovery process. We used descriptive statistics and ranking to establish the relevance of each competency and generalized linear models and post-hoc tests to examine differences in the consumers', consumer-providers' and providers' assessments of these competencies. Analyses confirmed the recovery relevance of several competencies and their relative importance within each group of study participants. They also revealed that while most competencies tended to have universal significance, others depended more strongly on the client's preferences. Finally, differences in the perceptions of consumers, consumer-providers and providers about the recovery relevance of these competencies were established. The study highlighted the crucial role practitioners play in enhancing recovery from serious mental illnesses through specific strategies and attitudes that acknowledge clients' personhood and foster their hopefulness, empowerment and illness management. It informed the development of a new instrument measuring providers' recovery-promoting competence and provides guidelines for sharpening the recovery focus of a wide range of mental health and rehabilitation services.

  4. Renal incidental findings on computed tomography

    PubMed Central

    Meyer, Hans Jonas; Pfeil, Alina; Schramm, Dominik; Bach, Andreas Gunter; Surov, Alexey

    2017-01-01

    Renal incidental findings (IFs) are common; however, previous reports investigating renal IFs were limited by patient selection. The purpose of this study was to estimate the prevalence and distribution of all renal IFs on computed tomography (CT) in a large patient collective. All patients who underwent CT investigations of the abdominal region at our institution between January 2006 and February 2014 were included in this study. Inclusion criteria were no previous history of renal disease and good image quality; patients with known kidney disorders were excluded. Overall, 7365 patients meeting the inclusion criteria were identified. There were 2924 (39.7%) women and 4441 (60.3%) men with a mean age of 59.8 ± 16.7 years. All CTs were retrospectively analyzed in consensus by 2 radiologists. Collected data were evaluated by means of descriptive statistics. Overall, 2756 patients (37.42% of all included patients) showed 3425 different renal IFs (1.24 findings per patient). Of all renal IFs, 123 (3.6%) findings were clinically relevant, 259 (7.6%) were categorized as possibly clinically relevant, and 3043 (88.8%) were not clinically relevant. A variety of renal IFs can be detected on CT, and the present study provides their real-world prevalence and proportions in daily clinical routine. Kidneys should be thoroughly evaluated because incidental renal findings occur frequently. PMID:28658098

  5. Visualising associations between paired ‘omics’ data sets

    PubMed Central

    2012-01-01

    Background Each omics platform is now able to generate a large amount of data. Genomics, proteomics, metabolomics and interactomics data are compiled at an ever-increasing pace and now form a core part of the fundamental systems biology framework. Recently, several integrative approaches have been proposed to extract meaningful information. However, these approaches lack visualisation outputs to fully unravel the complex associations between different biological entities. Results The multivariate statistical approaches ‘regularized Canonical Correlation Analysis’ and ‘sparse Partial Least Squares regression’ were recently developed to integrate two types of high-dimensional ‘omics’ data and to select relevant information. Using the results of these methods, we propose to revisit a few graphical outputs to better understand the relationships between two ‘omics’ data sets and to better visualise the correlation structure between the different biological entities. These graphical outputs include Correlation Circle plots, Relevance Networks and Clustered Image Maps. We demonstrate the usefulness of such graphical outputs on several biological data sets and further assess their biological relevance using gene ontology analysis. Conclusions Such graphical outputs are undoubtedly useful to aid the interpretation of these promising integrative analysis tools and will certainly help in addressing fundamental biological questions and understanding systems as a whole. Availability The graphical tools described in this paper are implemented in the freely available R package mixOmics and in its associated web application. PMID:23148523
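
    mixOmics itself is an R package; as a rough Python analogue (an assumption of this sketch, not the authors' implementation), scikit-learn's CCA can illustrate the correlation-circle idea: each variable is placed by its correlation with the first two latent variates, and cross-block products of those correlations give a crude relevance-network-style similarity.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(9)
n, p, q = 100, 20, 15
latent = rng.normal(size=(n, 2))
# Two simulated 'omics' blocks sharing two latent components (invented data).
X = latent @ rng.normal(size=(2, p)) + 0.5 * rng.normal(size=(n, p))
Y = latent @ rng.normal(size=(2, q)) + 0.5 * rng.normal(size=(n, q))

cca = CCA(n_components=2)
x_scores, y_scores = cca.fit_transform(X, Y)

# Correlation-circle coordinates: correlation of each variable with the latent variates.
def circle_coords(block, scores):
    return np.array([[np.corrcoef(block[:, j], scores[:, k])[0, 1]
                      for k in range(scores.shape[1])] for j in range(block.shape[1])])

coords_X = circle_coords(X, x_scores)
coords_Y = circle_coords(Y, y_scores)
print("first X variable vs. components 1 and 2:", coords_X[0].round(2))
print("strongly associated cross-block pairs:",
      int((np.abs(coords_X @ coords_Y.T) > 0.8).sum()))   # crude relevance-network cut-off
```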

  6. Weighted analysis of composite endpoints with simultaneous inference for flexible weight constraints.

    PubMed

    Duc, Anh Nguyen; Wolbers, Marcel

    2017-02-10

    Composite endpoints are widely used as primary endpoints of randomized controlled trials across clinical disciplines. A common critique of the conventional analysis of composite endpoints is that all disease events are weighted equally, whereas their clinical relevance may differ substantially. We address this by introducing a framework for the weighted analysis of composite endpoints and interpretable test statistics, which are applicable to both binary and time-to-event data. To cope with the difficulty of selecting an exact set of weights, we propose a method for constructing simultaneous confidence intervals and tests that asymptotically preserve the family-wise type I error in the strong sense across families of weights satisfying flexible inequality or order constraints based on the theory of chi-bar-squared (χ̄²) distributions. We show that the method achieves the nominal simultaneous coverage rate with substantial efficiency gains over Scheffé's procedure in a simulation study and apply it to trials in cardiovascular disease and enteric fever. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
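
    The basic idea of weighting a composite endpoint, sketched below for a binary composite with fixed, clinically motivated weights, is to compare per-patient weighted event scores between arms. This simple fixed-weight comparison is illustrative only; it is not the authors' simultaneous-inference procedure over families of weight constraints, and all event rates and weights are invented.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(10)
n = 300
# Invented component event probabilities (death, stroke, rehospitalisation) per arm.
p_control   = np.array([0.10, 0.15, 0.30])
p_treatment = np.array([0.07, 0.12, 0.28])
events_c = (rng.random((n, 3)) < p_control).astype(float)
events_t = (rng.random((n, 3)) < p_treatment).astype(float)

# Clinically motivated weights (assumed): death counts more than rehospitalisation.
weights = np.array([1.0, 0.5, 0.2])
score_c = events_c @ weights
score_t = events_t @ weights

t, p = ttest_ind(score_t, score_c)
print(f"weighted composite: treatment mean {score_t.mean():.3f} vs control {score_c.mean():.3f}, p = {p:.3f}")

# Unweighted ('any event') comparison for contrast.
t0, p0 = ttest_ind(events_t.any(axis=1).astype(float), events_c.any(axis=1).astype(float))
print(f"conventional any-event comparison: p = {p0:.3f}")
```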

  7. A statistical method to estimate low-energy hadronic cross sections

    NASA Astrophysics Data System (ADS)

    Balassa, Gábor; Kovács, Péter; Wolf, György

    2018-02-01

    In this article we propose a model based on the Statistical Bootstrap approach to estimate the cross sections of different hadronic reactions up to a few GeV in c.m.s. energy. The method is based on the idea that when two particles collide, a so-called fireball is formed, which after a short time decays statistically into a specific final state. To calculate the probabilities, we use a phase-space description extended with quark combinatorial factors and the possibility of more than one fireball forming. In a few simple cases the probability of a specific final state can be calculated analytically, and we show that the model is able to reproduce the ratios of the considered cross sections. We also show that the model is able to describe proton-antiproton annihilation at rest. In the latter case we used a numerical method to calculate the more complicated final-state probabilities. Additionally, we examined the formation of strange and charmed mesons, using existing data to fit the relevant model parameters.

  8. Brain fingerprinting classification concealed information test detects US Navy military medical information with P300

    PubMed Central

    Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.

    2014-01-01

    A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941

  9. Teaching statistics to nursing students: an expert panel consensus.

    PubMed

    Hayat, Matthew J; Eckardt, Patricia; Higgins, Melinda; Kim, MyoungJin; Schmiege, Sarah J

    2013-06-01

    Statistics education is a necessary element of nursing education, and its inclusion is recommended in the American Association of Colleges of Nursing guidelines for nurse training at all levels. This article presents a cohesive summary of an expert panel discussion, "Teaching Statistics to Nursing Students," held at the 2012 Joint Statistical Meetings. All panelists were statistics experts, had extensive teaching and consulting experience, and held faculty appointments in a U.S.-based nursing college or school. The panel discussed degree-specific curriculum requirements, course content, how to ensure nursing students understand the relevance of statistics, approaches to integrating statistics consulting knowledge, experience with classroom instruction, use of knowledge from the statistics education research field to make improvements in statistics education for nursing students, and classroom pedagogy and instruction on the use of statistical software. Panelists also discussed the need for evidence to make data-informed decisions about statistics education and training for nurses. Copyright 2013, SLACK Incorporated.

  10. Surgeon Reported Outcome Measure for Spine Trauma: An International Expert Survey Identifying Parameters Relevant for the Outcome of Subaxial Cervical Spine Injuries.

    PubMed

    Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur

    2016-12-15

    International web-based survey. To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon reported outcome measure to reflect the clinicians' perspective adequately. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to indicate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant both for short term and long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter for the long term, whereas five parameters were identified for the short term. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses according to each world region, or spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored on the most relevant parameters to evaluate and predict outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon reported outcome measure, which will be a helpful tool in research and clinical practice. 4.

  11. Cervical Musculoskeletal Impairments and Temporomandibular Disorders

    PubMed Central

    Magee, David

    2012-01-01

    Objectives The significance of cervical muscles in the development and perpetuation of Temporomandibular Disorders has not been elucidated. Thus, this project was designed to investigate the association between cervical musculoskeletal impairments and Temporomandibular Disorders. Material and Methods A sample of 154 subjects participated in this study. All subjects underwent a series of physical tests and electromyographic assessment (i.e. head and neck posture, maximal cervical muscle strength, cervical flexor and extensor muscle endurance, and cervical flexor muscle performance) to determine cervical musculoskeletal impairments. Results A strong relationship between neck disability and jaw disability was found (r = 0.82). Craniocervical posture was statistically different between patients with myogenous Temporomandibular Disorders (TMD) and healthy subjects. However, the difference was too small (3.3º) to be considered clinically relevant. Maximal cervical flexor muscle strength was not statistically or clinically different between patients with TMD and healthy subjects. No statistically significant differences were found in electromyographic activity of the sternocleidomastoid or the anterior scalene muscles in patients with TMD when compared to healthy subjects while executing the craniocervical flexion test (P = 0.07). However, clinically important effect sizes (0.42 - 0.82) were found. Subjects with TMD presented with reduced cervical flexor as well as extensor muscle endurance while performing the flexor and extensor muscle endurance tests when compared to healthy individuals. Conclusions Subjects with Temporomandibular Disorders presented with impairments of the cervical flexor and extensor muscles. These results could help guide clinicians in the assessment and prescription of more effective interventions for individuals with Temporomandibular Disorders. PMID:24422022

  12. Taking Ockham's razor to enzyme dynamics and catalysis.

    PubMed

    Glowacki, David R; Harvey, Jeremy N; Mulholland, Adrian J

    2012-01-29

    The role of protein dynamics in enzyme catalysis is a matter of intense current debate. Enzyme-catalysed reactions that involve significant quantum tunnelling can give rise to experimental kinetic isotope effects with complex temperature dependences, and it has been suggested that standard statistical rate theories, such as transition-state theory, are inadequate for their explanation. Here we introduce aspects of transition-state theory relevant to the study of enzyme reactivity, taking cues from chemical kinetics and dynamics studies of small molecules in the gas phase and in solution--where breakdowns of statistical theories have received significant attention and their origins are relatively better understood. We discuss recent theoretical approaches to understanding enzyme activity and then show how experimental observations for a number of enzymes may be reproduced using a transition-state-theory framework with physically reasonable parameters. Essential to this simple model is the inclusion of multiple conformations with different reactivity.
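
    For readers less familiar with the machinery being discussed, conventional transition-state theory reduces to the Eyring expression k = (k_B T / h) exp(-ΔG‡ / RT); the short sketch below evaluates it for an assumed activation free energy (a placeholder value, not one taken from the paper).

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
h   = 6.62607015e-34    # Planck constant, J*s
R   = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate(delta_g_kjmol, T=298.15):
    """Transition-state-theory rate constant from an activation free energy (kJ/mol)."""
    return (k_B * T / h) * np.exp(-delta_g_kjmol * 1e3 / (R * T))

# Assumed barrier of 60 kJ/mol, evaluated at two temperatures to show the T dependence.
for T in (298.15, 310.15):
    print(f"T = {T:.2f} K: k = {eyring_rate(60.0, T):.3e} s^-1")
```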

  13. Critical Bursts in Filtration

    NASA Astrophysics Data System (ADS)

    Bianchi, Filippo; Thielmann, Marcel; de Arcangelis, Lucilla; Herrmann, Hans Jürgen

    2018-01-01

    Particle detachment bursts during the flow of suspensions through porous media are a phenomenon that can severely affect the efficiency of deep bed filters. Despite their relevance in several industrial fields, little is known about the statistical properties and the temporal organization of these events. We present experiments in which suspensions of quartz particles in deionized water are pushed with a peristaltic pump through a filter of glass beads, while the pressure drop, flux, and suspension solid fraction are measured simultaneously. We find that the burst size distribution scales consistently with a power law, suggesting that we are in the presence of a novel experimental realization of a self-organized critical system. Temporal correlations are present in the time series, as in other phenomena such as earthquakes or neuronal activity bursts, and an analog of Omori's law can also be demonstrated. The understanding of burst statistics could provide novel insights in different fields, e.g., in the filter and petroleum industries.
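
    To make the power-law claim concrete, a minimal sketch (on synthetic burst sizes, not the experimental data) of the standard maximum-likelihood estimator for a continuous power-law exponent, alpha = 1 + n / sum(ln(s_i / s_min)), might look like this:

        # Illustrative sketch: MLE of a power-law exponent for burst sizes s >= s_min.
        import numpy as np

        rng = np.random.default_rng(0)
        s_min = 1.0
        alpha_true = 2.2
        # Synthetic burst sizes drawn from P(s) ~ s^(-alpha) via inverse-transform sampling
        u = rng.random(5000)
        burst_sizes = s_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

        alpha_hat = 1.0 + len(burst_sizes) / np.sum(np.log(burst_sizes / s_min))
        print(f"Estimated power-law exponent: {alpha_hat:.2f}")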

  14. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
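
    As a minimal sketch of the basic idea (synthetic data, with an ordinary least-squares slope as the test statistic; the cited work pairs permutation inference with alternative estimators such as quantile regression), a permutation test for a linear-model slope can be built by repeatedly permuting the response:

        # Simple permutation test for a regression slope: permute y to build the
        # null distribution of the statistic, then compare the observed slope to it.
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(size=40)
        y = 0.5 * x + rng.normal(size=40)          # synthetic data with a real effect

        def slope(x, y):
            return np.polyfit(x, y, 1)[0]          # highest-degree coefficient = slope

        observed = slope(x, y)
        perm_slopes = np.array([slope(x, rng.permutation(y)) for _ in range(5000)])
        p_value = np.mean(np.abs(perm_slopes) >= abs(observed))
        print(f"observed slope = {observed:.3f}, permutation p = {p_value:.4f}")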

  15. Descriptive epidemiology of breast cancer in China: incidence, mortality, survival and prevalence.

    PubMed

    Li, Tong; Mello-Thoms, Claudia; Brennan, Patrick C

    2016-10-01

    Breast cancer is the most common neoplasm diagnosed amongst women worldwide and is the leading cause of female cancer death. However, breast cancer in China is less comprehensively understood than in Westernised countries, although the 5-year prevalence statistics indicate that approximately 11% of worldwide breast cancer occurs in China and that the incidence has increased rapidly in recent decades. This paper reviews the descriptive epidemiology of Chinese breast cancer in terms of incidence, mortality, survival and prevalence, and explores relevant factors such as age of manifestation and geographic locations. The statistics are compared with data from the Westernised world with particular emphasis on the United States and Australia. Potential causal agents responsible for differences in breast cancer epidemiology between Chinese and other populations are also explored. The need to minimise variability and discrepancies in methods of data acquisition, analysis and presentation is highlighted.

  16. Cryotherapy and ankle motion in chronic venous disorders

    PubMed Central

    Kelechi, Teresa J.; Mueller, Martina; Zapka, Jane G.; King, Dana E.

    2013-01-01

    This study compared ankle range of motion (AROM), including dorsiflexion, plantar flexion, inversion and eversion, and venous refill time (VRT) in leg skin inflamed by venous disorders, before and after a new cryotherapy ulcer prevention treatment. Fifty-seven individuals participated in the randomized clinical trial: 28 in the experimental group and 29 in the usual care group. Results revealed no statistically significant differences between the experimental and usual care groups, although AROM measures in the experimental group showed a consistent, non-clinically relevant decrease compared to the usual care group except for dorsiflexion. Within-treatment-group comparisons of VRT results showed a statistically significant increase in both dorsiflexion and plantar flexion for patients with severe VRT in the experimental group (6.9 ± 6.8; p = 0.002 and 5.8 ± 12.6; p = 0.02, respectively). Cryotherapy did not further restrict already compromised AROM, and in some cases, there were minor improvements. PMID:23516043

  17. Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U. S. universities?

    PubMed

    MacKenzie, Donald W

    2013-01-01

    Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
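
    The logic of the test can be sketched as follows: under a homogeneous Poisson process the intervals between events are exponentially distributed, so an Anderson-Darling goodness-of-fit test of the inter-event intervals against an exponential distribution probes for clustering. The event dates below are simulated, not the study data:

        # Test whether inter-event intervals are consistent with an exponential
        # distribution (i.e., a homogeneous Poisson process) via Anderson-Darling.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        # Hypothetical event dates expressed as days since the start of observation
        event_days = np.sort(rng.choice(8000, size=25, replace=False))
        intervals = np.diff(event_days).astype(float)

        result = stats.anderson(intervals, dist='expon')
        print("A-D statistic:", round(result.statistic, 3))
        for crit, sig in zip(result.critical_values, result.significance_level):
            print(f"  reject exponential fit at the {sig}% level if statistic > {crit}")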

  18. Galaxy Evolution in the Radio Band: The Role of Star-forming Galaxies and Active Galactic Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mancuso, C.; Prandoni, I.; Lapi, A.

    We investigate the astrophysics of radio-emitting star-forming galaxies and active galactic nuclei (AGNs) and elucidate their statistical properties in the radio band, including luminosity functions, redshift distributions, and number counts at sub-mJy flux levels, which will be crucially probed by next-generation radio continuum surveys. Specifically, we exploit the model-independent approach by Mancuso et al. to compute the star formation rate functions, the AGN duty cycles, and the conditional probability of a star-forming galaxy to host an AGN with given bolometric luminosity. Coupling these ingredients with the radio emission properties associated with star formation and nuclear activity, we compute relevant statistics at different radio frequencies and disentangle the relative contribution of star-forming galaxies and AGNs in different radio luminosity, radio flux, and redshift ranges. Finally, we highlight that radio-emitting star-forming galaxies and AGNs are expected to host supermassive black holes accreting with different Eddington ratio distributions and to occupy different loci in the galaxy main-sequence diagrams. These specific predictions are consistent with current data sets but need to be tested with larger statistics via future radio data with multiband coverage on wide areas, as will become routinely achievable with the advent of the Square Kilometre Array and its precursors.

  19. Comparison of the Skin Penetration of 3 Metabolically Stable Chemicals Using Fresh and Frozen Human Skin.

    PubMed

    Jacques-Jamin, Carine; Duplan, Hélène; Rothe, Helga; Vaillant, Ophelie; Eilstein, Joan; Grégoire, Sebastien; Cubberley, Richard; Lange, Daniela; Ellison, Corie; Klaric, Martina; Hewitt, Nicola; Schepky, Andreas

    2017-01-01

    The Cosmetics Europe ADME Task Force is developing in vitro and in silico tools for predicting skin and systemic concentrations after topical application of cosmetic ingredients. There are conflicting reports as to whether the freezing process affects the penetration of chemicals; therefore, we evaluated whether the storage of human skin used in our studies (8-12 weeks at -20°C) affected the penetration of model chemicals. Finite doses of trans-cinnamic acid (TCA), benzoic acid (BA), and 6-methylcoumarin (6MC) (non-volatile, non-protein reactive and metabolically stable in skin) were applied to fresh and thawed frozen skin from the same donors. The amounts of chemicals in different skin compartments were analysed after 24 h. Although there were some statistical differences in some parameters for 1 or 2 donors, the penetration of TCA, BA, and 6MC was essentially the same in fresh and frozen skin, i.e., there were no biologically relevant differences in penetration values. Where statistical differences were evident, they indicated that penetration was marginally lower in frozen than in fresh skin, showing that the barrier function of the skin was not lost. The penetration of the 3 chemicals was essentially unaffected by freezing the skin at -20°C for up to 12 weeks. © 2017 S. Karger AG, Basel.

  20. Assessing information content and interactive relationships of subgenomic DNA sequences of the MHC using complexity theory approaches based on the non-extensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Karakatsanis, L. P.; Pavlos, G. P.; Iliopoulos, A. C.; Pavlos, E. G.; Clark, P. M.; Duke, J. L.; Monos, D. S.

    2018-09-01

    This study combines two independent domains of science, the high throughput DNA sequencing capabilities of Genomics and complexity theory from Physics, to assess the information encoded by the different genomic segments of exonic, intronic and intergenic regions of the Major Histocompatibility Complex (MHC) and identify possible interactive relationships. The dynamic and non-extensive statistical characteristics of two well characterized MHC sequences from the homozygous cell lines, PGF and COX, in addition to two other genomic regions of comparable size, used as controls, have been studied using the reconstructed phase space theorem and the non-extensive statistical theory of Tsallis. The results reveal similar non-linear dynamical behavior in terms of complexity and self-organization features. In particular, the low-dimensional deterministic nonlinear chaotic and non-extensive statistical character of the DNA sequences was verified, with strong multifractal characteristics and long-range correlations. The nonlinear indices repeatedly verified that MHC sequences, whether exonic, intronic or intergenic, include varying levels of information and reveal an interaction of the genes with intergenic regions, whereby the lower the number of genes in a region, the less the complexity and information content of the intergenic region. Finally, we show the significance of the intergenic region in the production of the DNA dynamics. The findings reveal interesting content information in all three genomic elements and interactive relationships of the genes with the intergenic regions. The results most likely are relevant to the whole genome and not only to the MHC. These findings are consistent with the ENCODE project, which has now established that the non-coding regions of the genome are of relevance, as they are functionally important and play a significant role in the regulation of expression of genes and coordination of the many biological processes of the cell.

  1. Does high-flow nasal cannula oxygen improve outcome in acute hypoxemic respiratory failure? A systematic review and meta-analysis.

    PubMed

    Lin, Si-Ming; Liu, Kai-Xiong; Lin, Zhi-Hong; Lin, Pei-Hong

    2017-10-01

    To evaluate the efficacy of high-flow nasal cannula (HFNC) in the rate of intubation and mortality for patients with acute hypoxemic respiratory failure. We searched PubMed, EMBASE, and the Cochrane Library for relevant studies. Two reviewers extracted data and reviewed the quality of the studies independently. The primary outcome was the rate of intubation; the secondary outcome was mortality in the hospital. Study-level data were pooled using a random-effects model when I² was >50% or a fixed-effects model when I² was <50%. Eight randomized controlled studies with a total of 1,818 patients were considered. Pooled analysis showed that no statistically significant difference was found between groups regarding the rate of intubation (odds ratio [OR] = 0.79; 95% confidence interval [CI]: 0.60-1.04; P = 0.09; I² = 36%) and no statistically significant difference was found between groups regarding hospital mortality (OR = 0.89; 95% CI: 0.62-1.27; P = 0.51; I² = 47%). The use of HFNC showed a trend toward reduction in the intubation rate, which did not meet statistical significance, in patients with acute respiratory failure compared with conventional oxygen therapy (COT) and noninvasive ventilation (NIV), and no difference in mortality was observed. Large, well-designed, randomized, multi-center trials are therefore needed to confirm the effects of HFNC in acute hypoxemic respiratory failure patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
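
    A rough sketch of the pooling rule described above (inverse-variance pooling of log odds ratios, switching to a DerSimonian-Laird random-effects model when I² exceeds 50%); the study-level values below are made up for illustration:

        # Pool log odds ratios with fixed- or random-effects weights depending on I^2.
        import numpy as np

        log_or = np.array([np.log(0.7), np.log(0.9), np.log(1.1), np.log(0.6)])
        se = np.array([0.25, 0.30, 0.35, 0.40])          # standard errors of log OR

        w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights
        pooled_fe = np.sum(w * log_or) / np.sum(w)

        q = np.sum(w * (log_or - pooled_fe) ** 2)         # Cochran's Q
        df = len(log_or) - 1
        i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

        if i2 > 50:                                       # random-effects (DerSimonian-Laird) model
            tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_re = 1.0 / (se**2 + tau2)
            pooled = np.sum(w_re * log_or) / np.sum(w_re)
        else:                                             # fixed-effects model
            pooled = pooled_fe

        print(f"I^2 = {i2:.1f}%, pooled OR = {np.exp(pooled):.2f}")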

  2. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624

  3. Health-Related Quality of Life up to Six Years After {sup 125}I Brachytherapy for Early-Stage Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roeloffzen, Ellen M.A., E-mail: E.M.A.Roeloffzen@UMCUtrecht.n; Lips, Irene M.; Gellekom, Marion P.R. van

    2010-03-15

    Purpose: Health-related quality of life (HRQOL) after prostate brachytherapy has been extensively described in published reports but hardly any long-term data are available. The aim of the present study was to prospectively assess long-term HRQOL 6 years after {sup 125}I prostate brachytherapy. Methods and Materials: A total of 127 patients treated with {sup 125}I brachytherapy for early-stage prostate cancer between December 2000 and June 2003 completed a HRQOL questionnaire at five time-points: before treatment and 1 month, 6 months, 1 year, and 6 years after treatment. The questionnaire included the RAND-36 generic health survey, the cancer-specific European Organization for Research and Treatment of Cancer core questionnaire (EORTC QLQ-C30), and the tumor-specific EORTC prostate cancer module (EORTC-PR25). A change in a score of ≥10 points was considered clinically relevant. Results: Overall, the HRQOL at 6 years after {sup 125}I prostate brachytherapy did not significantly differ from baseline. Although a statistically significant deterioration in HRQOL at 6 years was seen for urinary symptoms, bowel symptoms, pain, physical functioning, and sexual activity (p <.01), most changes were not clinically relevant. A statistically significant improvement at 6 years was seen for mental health, emotional functioning, and insomnia (p <.01). The only clinically relevant changes were seen for emotional functioning and sexual activity. Conclusion: This is the first study presenting prospective HRQOL data up to 6 years after {sup 125}I prostate brachytherapy. HRQOL scores returned to approximately baseline values at 1 year and remained stable up to 6 years after treatment. {sup 125}I prostate brachytherapy did not adversely affect patients' long-term HRQOL.

  4. Problematic smartphone use: A conceptual overview and systematic review of relations with anxiety and depression psychopathology.

    PubMed

    Elhai, Jon D; Dvorak, Robert D; Levine, Jason C; Hall, Brian J

    2017-01-01

    Research literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. We discuss the concept of problematic smartphone use, including possible causal pathways to such use. We conducted a systematic review of the relationship between problematic use and psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewed papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology. Most papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects. We included only correlational studies in our systematic review, but also address the few relevant experimental studies. We discuss causal explanations for relationships between problem smartphone use and psychopathology. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Statistics teaching in medical school: opinions of practising doctors.

    PubMed

    Miles, Susan; Price, Gill M; Swift, Louise; Shepstone, Lee; Leinster, Sam J

    2010-11-04

    The General Medical Council expects UK medical graduates to gain some statistical knowledge during their undergraduate education, but provides no specific guidance as to amount, content or teaching method. Published work on statistics teaching for medical undergraduates has been dominated by medical statisticians, with little input from the doctors who will actually be using this knowledge and these skills after graduation. Furthermore, doctors' statistical training needs may have changed due to advances in information technology and the increasing importance of evidence-based medicine. Thus there exists a need to investigate the views of practising medical doctors as to the statistical training required for undergraduate medical students, based on their own use of these skills in daily practice. A questionnaire was designed to investigate doctors' views about undergraduate training in statistics and the need for these skills in daily practice, with a view to informing future teaching. The questionnaire was emailed to all clinicians with a link to the University of East Anglia Medical School. Open-ended questions were included to elicit doctors' opinions about both their own undergraduate training in statistics and recommendations for the training of current medical students. Content analysis was performed by two of the authors to systematically categorize and describe all the responses provided by participants. 130 doctors responded, including both hospital consultants and general practitioners. The findings indicated that most had not recognised the value of their undergraduate teaching in statistics and probability at the time, but had subsequently found the skills relevant to their career. Suggestions for improving undergraduate teaching in these areas included referring to actual research and ensuring relevance to, and integration with, clinical practice. Grounding the teaching of statistics in the context of real research studies and including examples of typical clinical work may better prepare medical students for their subsequent career.

  6. Examples of sex/gender sensitivity in epidemiological research: results of an evaluation of original articles published in JECH 2006-2014.

    PubMed

    Jahn, Ingeborg; Börnhorst, Claudia; Günther, Frauke; Brand, Tilman

    2017-02-15

    During the last decades, sex and gender biases have been identified in various areas of biomedical and public health research, leading to compromised validity of research findings. As a response, methodological requirements were developed, but these are rarely translated into research practice. The aim of this study is to provide good practice examples of sex/gender sensitive health research. We conducted a systematic search of research articles published in JECH between 2006 and 2014. An instrument was constructed to evaluate sex/gender sensitivity in four stages of the research process (background, study design, statistical analysis, discussion). In total, 37 articles covering diverse topics were included. Of these, 22 were evaluated as a good practice example in at least one stage; two articles achieved highest ratings across all stages. Good examples of the background referred to available knowledge on sex/gender differences and sex/gender informed theoretical frameworks. Related to the study design, good examples calculated sample sizes to be able to detect sex/gender differences, selected sex/gender sensitive outcome/exposure indicators, or chose different cut-off values for male and female participants. Good examples of statistical analyses used interaction terms with sex/gender or different shapes of the estimated relationship for men and women. Examples of good discussions interpreted their findings related to social and biological explanatory models or questioned the statistical methods used to detect sex/gender differences. The identified good practice examples may inspire researchers to critically reflect on the relevance of sex/gender issues of their studies and help them to translate methodological recommendations of sex/gender sensitivity into research practice.

  7. Modelling dendritic ecological networks in space: an integrated network perspective

    USGS Publications Warehouse

    Peterson, Erin E.; Ver Hoef, Jay M.; Isaak, Dan J.; Falke, Jeffrey A.; Fortin, Marie-Josée; Jordon, Chris E.; McNyset, Kristina; Monestiez, Pascal; Ruesch, Aaron S.; Sengupta, Aritra; Som, Nicholas; Steel, E. Ashley; Theobald, David M.; Torgersen, Christian E.; Wenger, Seth J.

    2013-01-01

    the context of stream ecology. Within this context, we summarise the key innovations of a new family of spatial statistical models that describe spatial relationships in DENs. Finally, we discuss how different network analyses may be combined to address more complex and novel research questions. While our main focus is streams, the taxonomy of network analyses is also relevant anywhere spatial patterns in both network and 2-D space can be used to explore the influence of multi-scale processes on biota and their habitat (e.g. plant morphology and pest infestation, or preferential migration along stream or road corridors).

  8. Single-molecule photon emission statistics for systems with explicit time dependence: Generating function approach

    NASA Astrophysics Data System (ADS)

    Peng, Yonggang; Xie, Shijie; Zheng, Yujun; Brown, Frank L. H.

    2009-12-01

    Generating function calculations are extended to allow for laser pulse envelopes of arbitrary shape in numerical applications. We investigate photon emission statistics for two-level and V- and Λ-type three-level systems under time-dependent excitation. Applications relevant to electromagnetically induced transparency and photon emission from single quantum dots are presented.

  9. Making Online Instruction Count: Statistical Reporting of Web-Based Library Instruction Activities

    ERIC Educational Resources Information Center

    Bottorff, Tim; Todd, Andrew

    2012-01-01

    Statistical reporting of library instruction (LI) activities has historically focused on measures relevant to face-to-face (F2F) settings. However, newer forms of LI conducted in the online realm may be difficult to count in traditional ways, leading to inaccurate reporting to both internal and external stakeholders. A thorough literature review…

  10. Learning predictive statistics from temporal sequences: Dynamics and strategies

    PubMed Central

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E.; Kourtzi, Zoe

    2017-01-01

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics—that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments. PMID:28973111
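
    A hedged illustration of the two families of sequence statistics described above (zero-order frequency statistics versus context-based, first-order Markov statistics); the symbols and probabilities are invented for the example:

        # Generate sequences from simple frequency statistics and from a
        # first-order Markov chain whose transitions depend on the preceding symbol.
        import numpy as np

        rng = np.random.default_rng(3)
        symbols = ["A", "B", "C", "D"]

        # Zero-order: some symbols are simply more probable than others
        freq = [0.4, 0.3, 0.2, 0.1]
        zero_order_seq = rng.choice(symbols, size=20, p=freq)

        # First-order: symbol probability is contingent on the preceding symbol
        transition = {
            "A": [0.1, 0.7, 0.1, 0.1],
            "B": [0.1, 0.1, 0.7, 0.1],
            "C": [0.1, 0.1, 0.1, 0.7],
            "D": [0.7, 0.1, 0.1, 0.1],
        }
        seq = ["A"]
        for _ in range(19):
            seq.append(rng.choice(symbols, p=transition[seq[-1]]))

        print("zero-order:   ", "".join(zero_order_seq))
        print("context-based:", "".join(seq))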

  11. Is parenting style a predictor of suicide attempts in a representative sample of adolescents?

    PubMed

    Donath, Carolin; Graessel, Elmar; Baier, Dirk; Bleich, Stefan; Hillemacher, Thomas

    2014-04-26

    Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, there are several research and practical suicide-prevention initiatives that discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As the basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive values of 13 additional potential risk/protective factors were analyzed with single binary logistic regression analyses for each predictor alone. Non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Three parental variables showed a relevant association with suicide attempts in adolescents (all protective): mother's warmth and father's warmth in childhood, and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective: OR: .79) and Rejecting-Neglecting parenting (risk: OR: 1.63) were identified as significant predictors (p < .001) for suicide attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, Binge Drinking, absenteeism/truancy, migration background, and parental separation events. Parenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk, as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD.
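
    For readers unfamiliar with the modelling steps, a hypothetical sketch of a Step I-style model (binary logistic regression of lifetime suicide attempt on one parenting variable, controlled for age and sex, with coefficients reported as odds ratios); the variable names and simulated data are illustrative only, not the survey data:

        # Fit a logistic regression and report odds ratios; OR < 1 marks a protective factor.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 1000
        df = pd.DataFrame({
            "age": rng.integers(14, 17, size=n),
            "female": rng.integers(0, 2, size=n),
            "parental_warmth": rng.normal(0, 1, size=n),
        })
        # Simulate attempts with warmth acting protectively (negative coefficient)
        logit_p = -3.0 - 0.5 * df["parental_warmth"] + 0.3 * df["female"]
        df["attempt"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        model = smf.logit("attempt ~ age + female + parental_warmth", data=df).fit(disp=0)
        print(np.exp(model.params))      # odds ratios for each predictor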

  12. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. © The Author(s) 2016.
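
    For orientation, one widely used formulation of geostatistical modeling under preferential sampling (a generic textbook form, not necessarily the exact specification of this paper) links the measurements and the sampling locations through a shared Gaussian random field S(x):

        Y_i \mid S = \mu + S(x_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \tau^2)
        X \mid S \sim \mathrm{PoissonProcess}\big(\lambda(x) = \exp\{\alpha + \beta\, S(x)\}\big)

    A nonzero \beta encodes the preferential sampling: locations tend to be visited where the latent surface S(x) is high (or low), so ignoring the shared component can bias the predicted mean surface and its uncertainty.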

  13. Epigenetic DNA Methylation Profiling with MSRE: A Quantitative NGS Approach Using a Parkinson's Disease Test Case

    PubMed Central

    Marsh, Adam G.; Cottrell, Matthew T.; Goldman, Morton F.

    2016-01-01

    Epigenetics is a rapidly developing field focused on deciphering chemical fingerprints that accumulate on human genomes over time. As the nascent idea of precision medicine expands to encompass epigenetic signatures of diagnostic and prognostic relevance, there is a need for methodologies that provide high-throughput DNA methylation profiling measurements. Here we report a novel quantification methodology for computationally reconstructing site-specific CpG methylation status from next generation sequencing (NGS) data using methyl-sensitive restriction endonucleases (MSRE). An integrated pipeline efficiently incorporates raw NGS metrics into a statistical discrimination platform to identify functional linkages between shifts in epigenetic DNA methylation and disease phenotypes in samples being analyzed. In this pilot proof-of-concept study we quantify and compare DNA methylation in blood serum of individuals with Parkinson's Disease relative to matched healthy blood profiles. Even with a small study of only six samples, a high degree of statistical discrimination was achieved based on CpG methylation profiles between groups, with 1008 statistically different CpG sites (p < 0.0025, after false discovery rate correction). A methylation load calculation was used to assess higher order impacts of methylation shifts on genes and pathways and most notably identified FGF3, FGF8, HTT, KMTA5, MIR8073, and YWHAG as differentially methylated genes with high relevance to Parkinson's Disease and neurodegeneration (based on PubMed literature citations). Of these, KMTA5 is a histone methyl-transferase gene and HTT is Huntington Disease Protein or Huntingtin, for which there are well established neurodegenerative impacts. The future need for precision diagnostics now requires more tools for exploring epigenetic processes that may be linked to cellular dysfunction and subsequent disease progression. PMID:27853465
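
    As a minimal, generic sketch of the kind of multiplicity adjustment referred to above (a Benjamini-Hochberg false discovery rate procedure; the paper's exact pipeline may differ), the rule of finding the largest k with p_(k) <= (k/m)*alpha can be coded directly:

        # Benjamini-Hochberg FDR correction for a list of p-values.
        import numpy as np

        def benjamini_hochberg(p_values, alpha=0.05):
            """Return a boolean mask of p-values declared significant at FDR level alpha."""
            p = np.asarray(p_values)
            order = np.argsort(p)
            ranked = p[order]
            m = len(p)
            thresholds = alpha * (np.arange(1, m + 1) / m)
            below = ranked <= thresholds
            significant = np.zeros(m, dtype=bool)
            if below.any():
                cutoff = np.max(np.where(below)[0])   # largest k with p_(k) <= (k/m)*alpha
                significant[order[: cutoff + 1]] = True
            return significant

        pvals = [0.0001, 0.003, 0.02, 0.04, 0.30, 0.76]
        print(benjamini_hochberg(pvals, alpha=0.05))

    Controlling the false discovery rate rather than the family-wise error rate is the usual compromise when thousands of CpG sites are tested simultaneously, as in the study described above.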

  14. In vivo clinical and radiological effects of platelet-rich plasma on interstitial supraspinatus lesion: Case series.

    PubMed

    Lädermann, A; Zumstein, M A; Kolo, F C; Grosclaude, M; Koglin, L; Schwitzguebel, A J P

    2016-12-01

    Rotator cuff tear (RCT) is a frequent condition of clinical relevance that can be managed with a symptomatic conservative treatment, but surgery is often needed. Biological components like leukocyte- and platelet-rich plasma (L-PRP) could represent an alternative curative method for interstitial RCT. It has been hypothesized that an ultrasound-guided L-PRP injection in supraspinatus interstitial RCT could induce radiological healing. A prospective case series including 25 patients was performed in order to assess the effect of L-PRP infiltration into supraspinatus interstitial RCTs. The primary outcome was tear size change determined by magnetic resonance imaging arthrogram (MRA) before and 6 months after L-PRP infiltration. Secondary outcomes were the Constant score, SANE score, and pain visual analog scale (VAS) after L-PRP infiltration. Tear volume diminution was statistically significant (P=.007), and a >50% tear volume diminution was observed in 15 patients. A statistically significant improvement of the Constant score (P<.001), SANE score (P=.001), and VAS (P<.001) was observed. In 21 patients, Constant score improvement reached the minimal clinically important difference of 10.4 points. We observed a statistically significant and clinically relevant effect on RCT size and clinical parameters after L-PRP infiltration. Such an important improvement of supraspinatus interstitial RCT with conservative management is uncommon; therefore, intratendinous L-PRP infiltrations could have been beneficial. This encouraging result could pave the way for future randomized studies in order to formally determine whether L-PRP infiltrations are a possible alternative to surgical treatment of interstitial RCT. Prospective observational study; Level of evidence II. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Inhibition of TNF-α production in LPS-activated THP-1 monocytic cells by the crude extracts of seven Bhutanese medicinal plants.

    PubMed

    Wangchuk, Phurpa; Keller, Paul A; Pyne, Stephen G; Taweechotipatr, Malai

    2013-07-30

    The seven studied medicinal plants, Aconitum laciniatum, Ajania nubigena, Codonopsis bhutanica, Corydalis crispa, Corydalis dubia, Meconopsis simplicifolia and Pleurospermum amabile, are currently used in the Bhutanese Traditional Medicine (BTM) for the management of different types of disorders, including diseases relevant to various inflammatory conditions. This study aimed to evaluate the inhibition of TNF-α production in LPS-activated THP-1 monocytic cells by the crude extracts of seven selected Bhutanese medicinal plants. It is expected to (a) generate a scientific basis for their use in the BTM and (b) form a basis for prioritization of the seven plants for further phytochemical and anti-inflammatory studies. Seven plants were selected using an ethno-directed bio-rational approach and their crude extracts were prepared using four different solvents (methanol, hexane, dichloromethane and chloroform). The TNF-α inhibitory activity of these extracts was determined by cytokine-specific sandwich quantitative enzyme-linked immunosorbent assays (ELISAs). The results were quantified statistically and the statistical significance was evaluated by GraphPad Prism version 5.01 using Student's t-test with one-tailed distribution. A p-value ≤0.05 was considered statistically significant. Of the seven plants studied, the crude extracts of six of them inhibited the production of the pro-inflammatory cytokine TNF-α in LPS-activated THP-1 monocytic cells. Amongst the six plants, Corydalis crispa gave the best inhibitory activity, followed by Pleurospermum amabile, Ajania nubigena, Corydalis dubia, Meconopsis simplicifolia and Codonopsis bhutanica. Of the 13 extracts that exhibited statistically significant TNF-α inhibitory activity (p<0.05; p<0.01), five showed very strong inhibition when compared to the DMSO control and RPMI media. Six medicinal plants studied here showed promising TNF-α inhibitory activity. These findings rationalize the traditional use of these selected medicinal plants in the BTM, as an individual plant or in combination with other ingredients, for the treatment of disorders bearing relevance to the inflammatory conditions. The results form a good preliminary basis for the prioritization of candidate plant species for an in-depth phytochemical study and anti-inflammatory activity screening of the pure compounds contained within those seven plants. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Pathway-GPS and SIGORA: identifying relevant pathways based on the over-representation of their gene-pair signatures

    PubMed Central

    Foroushani, Amir B.K.; Brinkman, Fiona S.L.

    2013-01-01

    Motivation. Predominant pathway analysis approaches treat pathways as collections of individual genes and consider all pathway members as equally informative. As a result, at times spurious and misleading pathways are inappropriately identified as statistically significant, solely due to components that they share with the more relevant pathways. Results. We introduce the concept of Pathway Gene-Pair Signatures (Pathway-GPS) as pairs of genes that, as a combination, are specific to a single pathway. We devised and implemented a novel approach to pathway analysis, Signature Over-representation Analysis (SIGORA), which focuses on the statistically significant enrichment of Pathway-GPS in a user-specified gene list of interest. In a comparative evaluation of several published datasets, SIGORA outperformed traditional methods by delivering biologically more plausible and relevant results. Availability. An efficient implementation of SIGORA, as an R package with precompiled GPS data for several human and mouse pathway repositories is available for download from http://sigora.googlecode.com/svn/. PMID:24432194

  17. Comparing posteroanterior with lateral and anteroposterior chest radiography in the initial detection of parapneumonic effusions.

    PubMed

    Moffett, Bryan K; Panchabhai, Tanmay S; Nakamatsu, Raul; Arnold, Forest W; Peyrani, Paula; Wiemken, Timothy; Guardiola, Juan; Ramirez, Julio A

    2016-12-01

    It is unclear whether anteroposterior (AP) or posteroanterior with lateral (PA/Lat) chest radiographs are superior in the early detection of clinically relevant parapneumonic effusions (CR-PPEs). The objective of this study was to identify which technique is preferred for detection of PPEs using chest computed tomography (CCT) as a reference standard. A secondary analysis of a pneumonia database was conducted to identify patients who received a CCT within 24 hours of presentation and also received AP or PA/Lat chest radiographs within 24 hours of CCT. Sensitivity and specificity were then calculated by comparing the radiographic diagnosis of PPEs on both types of radiographs with CCT, using the existing attending radiologist interpretation. Clinical relevance of effusions was determined by CCT effusion measurement of >2.5 cm or presence of loculation. There was a statistically significant difference between the sensitivity of AP (67.3%) and PA/Lat (83.9%) chest radiography for the initial detection of CR-PPE. Of 16 CR-PPEs initially missed by AP radiography, 7 either required drainage initially or developed empyema within 30 days, whereas no complicated PPE or empyema was found in those missed by PA/Lat radiography. PA/Lat chest radiography should be the initial imaging of choice in pneumonia patients for detection of PPEs because it appears to be statistically superior to AP chest radiography. Published by Elsevier Inc.

  18. Group Influences on Young Adult Warfighters’ Risk Taking

    DTIC Science & Technology

    2016-12-01

    Statistical Analysis: Latent linear growth models were fitted using the maximum likelihood estimation method in Mplus (version 7.0; Muthen & Muthen) ... condition had a higher net score than those in the alone condition (b = 20.53, SE = 6.29, p < .001). Results of the relevant statistical analyses and model fit statistics (BIC and chi-square values) are reported in tables in the original document.

  19. Metal and physico-chemical variations at a hydroelectric reservoir analyzed by Multivariate Analyses and Artificial Neural Networks: environmental management and policy/decision-making tools.

    PubMed

    Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M

    2013-01-01

    This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of the elements Ca, K, Mg and Na, and the second by the Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which indicates a possibility of their input into the reservoir system by domestic sewage and agricultural run-offs. These statistical applications are thus also relevant to environmental management and policy decision-making processes, to identify which factors should be further studied and/or modified to recover degraded or contaminated water bodies. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. An introduction to medical statistics for health care professionals: Hypothesis tests and estimation.

    PubMed

    Thomas, Elaine

    2005-01-01

    This article is the second in a series of three that will give health care professionals (HCPs) a sound introduction to medical statistics (Thomas, 2004). The objective of research is to find out about the population at large. However, it is generally not possible to study the whole of the population and research questions are addressed in an appropriate study sample. The next crucial step is then to use the information from the sample of individuals to make statements about the wider population of like individuals. This procedure of drawing conclusions about the population, based on study data, is known as inferential statistics. The findings from the study give us the best estimate of what is true for the relevant population, given the sample is representative of the population. It is important to consider how accurate this best estimate is, based on a single sample, when compared to the unknown population figure. Any difference between the observed sample result and the population characteristic is termed the sampling error. This article will cover the two main forms of statistical inference (hypothesis tests and estimation) along with issues that need to be addressed when considering the implications of the study results. Copyright (c) 2005 Whurr Publishers Ltd.
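
    As a worked illustration of estimation and sampling error (the measurements below are made up, not from the article), a 95% confidence interval for a population mean based on a single sample can be computed as follows:

        # Point estimate plus a 95% confidence interval for the population mean,
        # illustrating how a single-sample estimate is reported with its sampling error.
        import numpy as np
        from scipy import stats

        sample = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.3, 5.0, 5.8, 5.2])
        mean = sample.mean()
        sem = stats.sem(sample)                        # standard error of the mean
        ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
        print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")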

  1. Mesh Dependence on Shear Driven Boundary Layers in Stable Stratification Generated by Large Eddy-Simulation

    NASA Astrophysics Data System (ADS)

    Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.

    2017-11-01

    The effect of mesh resolution and size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFT in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind power meteorology, we emphasize second-order statistics important for wind turbines, including spectral information. The simulations range from mesh sizes of 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, second-order statistics are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics that are highly dependent on the resolution.

  2. Incremental Implicit Learning of Bundles of Statistical Patterns

    PubMed Central

    Qian, Ting; Jaeger, T. Florian; Aslin, Richard N.

    2016-01-01

    Forming an accurate representation of a task environment often takes place incrementally as the information relevant to learning the representation only unfolds over time. This incremental nature of learning poses an important problem: it is usually unclear whether a sequence of stimuli consists of only a single pattern, or multiple patterns that are spliced together. In the former case, the learner can directly use each observed stimulus to continuously revise its representation of the task environment. In the latter case, however, the learner must first parse the sequence of stimuli into different bundles, so as to not conflate the multiple patterns. We created a video-game statistical learning paradigm and investigated 1) whether learners without prior knowledge of the existence of multiple “stimulus bundles” — subsequences of stimuli that define locally coherent statistical patterns — could detect their presence in the input, and 2) whether learners are capable of constructing a rich representation that encodes the various statistical patterns associated with bundles. By comparing human learning behavior to the predictions of three computational models, we find evidence that learners can handle both tasks successfully. In addition, we discuss the underlying reasons for why the learning of stimulus bundles occurs even when such behavior may seem irrational. PMID:27639552

  3. Balloon-type versus non-balloon-type replacement percutaneous endoscopic gastrostomy: which is better?

    PubMed

    Heiser, M; Malaty, H

    2001-01-01

    Percutaneous endoscopic gastrostomy (PEG) has been an established procedure for nearly 20 years. Caring for patients with a PEG has been incorporated into the practice of nurses in most gastroenterology settings. Several practice-related questions have arisen, particularly in relation to replacement PEGs. In an attempt to obtain relevant information for decisions relating to cost-effectiveness and providing optimum care for PEG replacement, two clinical research questions were studied: (1) is there a difference in patient (stomal) response related to two different replacement PEG tubes, and (2) is there a difference in the duration (life-span) between the two types of replacement tubes? A non-experimental, two-group descriptive study was conducted to answer the two clinical research questions. Two types of replacement PEG tubes were evaluated: a balloon type and a non-balloon type. Stoma response (recording skin and insertion site characteristics) and PEG life span were the measures of interest. Differences in the occurrence of skin and insertion site problems between the two groups were not statistically significant. Differences between the life spans of the two tubes were found to be statistically significant at three time intervals. Findings give information to the practitioner involved in making independent and interdependent practice decisions when planning care for patients with a PEG. Suggestions for additional research and replication are included.

  4. How large must a treatment effect be before it matters to practitioners? An estimation method and demonstration.

    PubMed

    Miller, William R; Manuel, Jennifer Knapp

    2008-09-01

    Treatment research is sometimes criticised as lacking in clinical relevance, and one potential source of this friction is a disconnection between statistical significance and what clinicians regard to be a meaningful difference in outcomes. This report demonstrates a novel methodology for estimating what substance abuse practitioners regard to be clinically important differences. To illustrate the estimation method, we surveyed 50 substance abuse treatment providers participating in the National Institute on Drug Abuse (NIDA) Clinical Trials Network. Practitioners identified thresholds for clinically meaningful differences on nine common outcome variables, indicated the size of effect that would justify their learning a new treatment method and estimated current outcomes from their services. Clinicians judged a difference between two treatments to be meaningful if outcomes were improved by about 10 - 12 points on the percentage of patients totally abstaining, arrested for driving while intoxicated, employed or having abnormal liver enzymes. A 5 percentage-point reduction in patient mortality was regarded as clinically significant. On continuous outcome measures (such as percentage of days abstinent or drinks per drinking day), practitioners judged an outcome to be significant when it doubled or halved the base rate. When a new treatment meets such criteria, practitioners were interested in learning it. Effects that are statistically significant in clinical trials may be unimpressive to practitioners. Clinicians' judgements of meaningful differences can inform the powering of clinical trials.

  5. Violent and nonviolent video games differentially affect physical aggression for individuals high vs. low in dispositional anger.

    PubMed

    Engelhardt, Christopher R; Bartholow, Bruce D; Saults, J Scott

    2011-01-01

    Although numerous experiments have shown that exposure to violent video games (VVG) causes increases in aggression, relatively few studies have investigated the extent to which this effect differs as a function of theoretically relevant individual difference factors. This study investigated whether video game content differentially influences aggression as a function of individual differences in trait anger. Participants were randomly assigned to play a violent or nonviolent video game before completing a task in which they could behave aggressively. Results showed that participants high in trait anger were the most aggressive, but only if they first played a VVG. This relationship held while statistically controlling for dimensions other than violent content on which game conditions differed (e.g. frustration, arousal). Implications of these findings for models explaining the effects of video games on behavior are discussed. © 2011 Wiley Periodicals, Inc.

  6. Research on the development efficiency of regional high-end talent in China: A complex network approach

    PubMed Central

    Zhang, Wenbin

    2017-01-01

    In this paper, based on panel data from 31 provinces and cities in China from 1991 to 2016, the regional development efficiency matrix of high-end talent is obtained by the DEA method, and the matrix is converted into a continuously evolving sequence of complex networks through the construction of a sliding window. Using the resulting series of changes in complex network topology statistics, the characteristics of the regional high-end talent development efficiency system are analyzed. The results show that the average development efficiency of high-end talent in the western region is at a low level. After 2005, the national regional high-end talent development efficiency network has both short-range relevance and long-range relevance in the evolution process. The central region plays an important intermediary role in the national regional high-end talent development system, and the western region has high clustering characteristics. With the implementation of high-end talent policies with regional characteristics by different provinces and cities, the relevance of high-end talent development efficiency in various provinces and cities presents a weakening trend, and the geographical characteristics of high-end talent are more and more obvious. PMID:29272286

  7. The development of an instrument for evaluating clinical teachers: involving stakeholders to determine content validity.

    PubMed

    Stalmeijer, Renée E; Dolmans, Diana H J M; Wolfhagen, Ineke H A P; Muijtjens, Arno M M; Scherpbier, Albert J J A

    2008-01-01

    Research indicates that the quality of supervision strongly influences the learning of medical students in clinical practice. Clinical teachers need feedback to improve their supervisory skills. The available instruments either lack a clear theoretical framework or are not suitable for providing feedback to individual teachers. We developed an evaluation instrument based on the 'cognitive apprenticeship model'. The aim was to estimate the content validity of the developed instrument. Item relevance was rated on a five-point scale (1 = highly irrelevant, 5 = highly relevant) by three groups of stakeholders in undergraduate clinical teaching: educationalists (N = 12), doctors (N = 16) and students (N = 12). Additionally, stakeholders commented on content, wording and omission of items. The items were generally rated as very relevant (Mean = 4.3, SD = 0.38, response = 95%) and any differences between the stakeholder groups were small. The results led to elimination of 4 items, rewording of 13 items and addition of 1 item. The cognitive apprenticeship model appears to offer a useful framework for the development of an evaluation instrument aimed at providing feedback to individual clinical teachers on the quality of student supervision. Further studies in larger populations will have to establish the instrument's statistical validity and generalizability.

  8. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, which are able to account for universality and provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different levels of prediction. In this review we also briefly discuss how the statistical mechanics approach can be applied to non-tectonic earthquakes and to other natural stochastic processes, such as volcanic eruptions and solar flares.
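
    For reference, the phenomenological laws and the branching-process intensity alluded to above are usually written in the following textbook form (standard notation, not reproduced from the review itself):

        \log_{10} N(\geq m) = a - b\,m                                      (Gutenberg-Richter magnitude distribution)
        n(t) = \frac{K}{(t + c)^{p}}                                        (Omori-Utsu aftershock decay)
        \lambda(t \mid \mathcal{H}_t) = \mu + \sum_{t_i < t} \frac{K\, e^{\alpha (m_i - m_0)}}{(t - t_i + c)^{p}}   (ETAS-type branching intensity)

    In branching (ETAS-type) models, every past event of magnitude m_i above the threshold m_0 contributes aftershock productivity that decays in time according to the Omori-Utsu law, on top of a background rate \mu.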

  9. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP-regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. To facilitate this communication, procedures, flow charts and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described so that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail with examples. Significance tests should be avoided: the success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
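
    As an illustration of the equivalence-testing strategy recommended above, here is a minimal two one-sided tests (TOST) sketch in Python; the acceptance limit `delta` and the recovery data are hypothetical examples, not regulatory values.

    ```python
    import numpy as np
    from scipy import stats

    def tost_equivalence(x, y, delta):
        """Two one-sided t-tests: is the mean difference within +/- delta?

        Returns the larger of the two one-sided p-values; equivalence is
        concluded when it is below the chosen alpha (e.g. 0.05).
        """
        nx, ny = len(x), len(y)
        diff = np.mean(x) - np.mean(y)
        # pooled standard error (equal-variance assumption for simplicity)
        sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
        df = nx + ny - 2
        t_lower = (diff + delta) / se   # H0: diff <= -delta
        t_upper = (diff - delta) / se   # H0: diff >= +delta
        p_lower = 1.0 - stats.t.cdf(t_lower, df)
        p_upper = stats.t.cdf(t_upper, df)
        return max(p_lower, p_upper)

    # Hypothetical assay results (% recovery) from sending and receiving sites.
    sending = np.array([99.1, 100.4, 99.8, 100.9, 99.5, 100.2])
    receiving = np.array([99.7, 100.8, 100.3, 101.1, 99.9, 100.6])
    print("TOST p-value:", round(tost_equivalence(sending, receiving, delta=2.0), 4))
    ```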

  10. A three-dimensional comparison of a morphometric and conventional cephalometric midsagittal planes for craniofacial asymmetry.

    PubMed

    Damstra, Janalt; Fourie, Zacharias; De Wit, Marnix; Ren, Yijin

    2012-02-01

    Morphometric methods are used in biology to study object symmetry in living organisms and to determine the true plane of symmetry. The aim of this study was to determine whether there are clinical differences between three-dimensional (3D) cephalometric midsagittal planes used to describe craniofacial asymmetry and a true symmetry plane derived from a morphometric method based on visible facial features. The sample consisted of 14 dry skulls (9 symmetric and 5 asymmetric) with metallic markers, which were imaged with cone-beam computed tomography. An error study and statistical analysis were performed to validate the morphometric method. The morphometric and conventional cephalometric planes were constructed and compared. The 3D cephalometric planes constructed as perpendiculars to the Frankfort horizontal plane resembled the morphometric plane most closely in both the symmetric and asymmetric groups, with mean differences of less than 1.00 mm for most variables. However, the standard deviations were often large and clinically significant for these variables. There were clinically relevant differences (>1.00 mm) between the different 3D cephalometric midsagittal planes and the true plane of symmetry determined by the visible facial features. The differences between 3D cephalometric midsagittal planes and the true plane of symmetry determined by the visible facial features were clinically relevant. Care has to be taken when using cephalometric midsagittal planes for diagnosis and treatment planning of craniofacial asymmetry, as they might differ from the true plane of symmetry as determined by morphometrics.

  11. Food groups for allergen risk assessment: Combining food consumption data from different countries in Europe.

    PubMed

    Birot, Sophie; Madsen, Charlotte B; Kruizinga, Astrid G; Crépet, Amélie; Christensen, Tue; Brockhoff, Per B

    2018-05-18

    To prevent allergic reactions, food producers have to be able to make a knowledge-based decision on whether to label their products with precautionary labelling. As many manufactured food products are sold in different countries across Europe, allergen risk assessment should be performed at the European level. Currently, there are no pan-European food consumption data suitable for food allergy risk assessment. The aim of this paper is to investigate whether consumption data, at the meal level, from National Food Consumption Surveys can be combined to form a common food consumption database. In this first attempt we developed a procedure to investigate whether national food consumption data can be combined and grouped, using data from the Netherlands, France and Denmark. The homogeneity of consumption patterns and the relevance of differences in the risk of allergic reaction were compared using a fixed framework of allergen concentration levels and a threshold distribution. Thus, the relevance of using common consumption data across countries was verified. The food groups formed were subsequently evaluated and adjusted based on practical considerations. This resulted in 61 food groups that can be used for allergen risk assessment. Summary statistics and descriptive names for each food group are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Breath biomarkers for lung cancer detection and assessment of smoking related effects--confounding variables, influence of normalization and statistical algorithms.

    PubMed

    Kischkel, Sabine; Miekisch, Wolfram; Sawacki, Annika; Straker, Eva M; Trefz, Phillip; Amann, Anton; Schubert, Jochen K

    2010-11-11

    Up to now, none of the breath biomarkers or marker sets proposed for cancer recognition has reached clinical relevance. Possible reasons are the lack of standardized methods of sampling, analysis and data processing and effects of environmental contaminants. Concentration profiles of endogenous and exogenous breath markers were determined in exhaled breath of 31 lung cancer patients, 31 smokers and 31 healthy controls by means of SPME-GC-MS. Different correcting and normalization algorithms and a principal component analysis were applied to the data. Differences of exhalation profiles in cancer and non-cancer patients did not persist if physiology and confounding variables were taken into account. Smoking history, inspired substance concentrations, age and gender were recognized as the most important confounding variables. Normalization onto PCO2 or BSA or correction for inspired concentrations only partially solved the problem. In contrast, previous smoking behaviour could be recognized unequivocally. Exhaled substance concentrations may depend on a variety of parameters other than the disease under investigation. Normalization and correcting parameters have to be chosen with care as compensating effects may be different from one substance to the other. Only well-founded biomarker identification, normalization and data processing will provide clinically relevant information from breath analysis. 2010 Elsevier B.V. All rights reserved.

  13. diffHic: a Bioconductor package to detect differential genomic interactions in Hi-C data.

    PubMed

    Lun, Aaron T L; Smyth, Gordon K

    2015-08-19

    Chromatin conformation capture with high-throughput sequencing (Hi-C) is a technique that measures the in vivo intensity of interactions between all pairs of loci in the genome. Most conventional analyses of Hi-C data focus on the detection of statistically significant interactions. However, an alternative strategy involves identifying significant changes in the interaction intensity (i.e., differential interactions) between two or more biological conditions. This is more statistically rigorous and may provide more biologically relevant results. Here, we present the diffHic software package for the detection of differential interactions from Hi-C data. diffHic provides methods for read pair alignment and processing, counting into bin pairs, filtering out low-abundance events and normalization of trended or CNV-driven biases. It uses the statistical framework of the edgeR package to model biological variability and to test for significant differences between conditions. Several options for the visualization of results are also included. The use of diffHic is demonstrated with real Hi-C data sets. Performance against existing methods is also evaluated with simulated data. On real data, diffHic is able to successfully detect interactions with significant differences in intensity between biological conditions. It also compares favourably to existing software tools on simulated data sets. These results suggest that diffHic is a viable approach for differential analyses of Hi-C data.
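
    The sketch below illustrates the general idea of testing bin-pair counts for differences between conditions with multiple-testing correction; it is a simplified Python stand-in (t-tests on log counts plus Benjamini-Hochberg), not the negative-binomial edgeR framework that diffHic actually uses.

    ```python
    import numpy as np
    from scipy import stats

    def differential_bin_pairs(counts_a, counts_b, alpha=0.05):
        """Flag bin pairs whose interaction counts differ between conditions.

        counts_a, counts_b: arrays of shape (n_bin_pairs, n_replicates).
        Uses a per-bin-pair two-sample t-test on log counts plus
        Benjamini-Hochberg FDR control -- a toy stand-in for the
        negative-binomial modelling done by edgeR/diffHic.
        """
        log_a, log_b = np.log1p(counts_a), np.log1p(counts_b)
        pvals = np.array([stats.ttest_ind(a, b).pvalue for a, b in zip(log_a, log_b)])
        # Benjamini-Hochberg step-up procedure
        order = np.argsort(pvals)
        ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
        passed = ranked <= alpha
        significant = np.zeros(len(pvals), dtype=bool)
        if passed.any():
            cutoff = np.max(np.where(passed)[0])
            significant[order[: cutoff + 1]] = True
        return pvals, significant

    rng = np.random.default_rng(0)
    a = rng.poisson(20, size=(100, 3))
    b = rng.poisson(20, size=(100, 3))
    b[:5] = rng.poisson(100, size=(5, 3))   # five truly differential bin pairs
    _, hits = differential_bin_pairs(a, b)
    print("flagged bin pairs:", np.where(hits)[0])
    ```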

  14. A robust bayesian estimate of the concordance correlation coefficient.

    PubMed

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
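
    For reference, the classical (non-Bayesian) sample estimate of Lin's concordance correlation coefficient can be written in a few lines; this is the standard formula, not the robust Bayesian estimator proposed in the paper, and the paired assay readings are hypothetical.

    ```python
    import numpy as np

    def concordance_correlation(x, y):
        """Lin's concordance correlation coefficient between two methods/raters.

        CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
        It penalises both poor correlation and systematic shifts in location/scale.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y, bias=True)[0, 1]
        return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

    # Hypothetical paired biomarker readings from two assays.
    assay_a = [1.2, 2.4, 3.1, 4.0, 5.2]
    assay_b = [1.0, 2.6, 3.0, 4.3, 5.0]
    print(round(concordance_correlation(assay_a, assay_b), 3))
    ```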

  15. Response to traumatic brain injury neurorehabilitation through an artificial intelligence and statistics hybrid knowledge discovery from databases methodology.

    PubMed

    Gibert, Karina; García-Rudolph, Alejandro; García-Molina, Alberto; Roig-Rovira, Teresa; Bernabeu, Montse; Tormos, José María

    2008-01-01

    The aim was to develop a classificatory tool to identify different populations of patients with traumatic brain injury based on the characteristics of deficit and response to treatment. A KDD framework was used in which, first, descriptive statistics for every variable, data cleaning and selection of relevant variables were performed. The data were then mined using a generalization of Clustering Based on Rules (CIBR), a hybrid AI and statistics technique which combines inductive learning (AI) and clustering (statistics). A prior Knowledge Base (KB) is considered to properly bias the clustering; semantic constraints implied by the KB hold in the final clusters, guaranteeing interpretability of the results. A generalization (Exogenous Clustering Based on Rules, ECIBR) is presented, which allows the KB to be defined in terms of variables that are not themselves used in the clustering process, to gain flexibility. Several tools, such as the class panel graph, are introduced in the methodology to assist final interpretation. A set of 5 classes was recommended by the system, and interpretation permitted the labeling of profiles. From the medical point of view, the composition of the classes corresponds well with different patterns of increasing level of response to rehabilitation treatment. All the patients who were initially assessable form a single group. Severely impaired patients are subdivided into four profiles with clearly distinct response patterns. Particularly interesting is the partial-response profile, in which patients could not improve executive functions. Meaningful classes were obtained and, from a semantic point of view, the results were substantially improved with respect to classical clustering, supporting our view that hybrid AI and statistics techniques are more powerful for KDD than pure ones.

  16. An Update on Statistical Boosting in Biomedicine.

    PubMed

    Mayr, Andreas; Hofner, Benjamin; Waldmann, Elisabeth; Hepp, Tobias; Meyer, Sebastian; Gefeller, Olaf

    2017-01-01

    Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
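
    A minimal sketch of component-wise L2 boosting with univariate linear base-learners, illustrating the automated variable selection and implicit shrinkage described above; it is a toy illustration under simple assumptions, not the implementation of any particular boosting package.

    ```python
    import numpy as np

    def componentwise_l2_boosting(X, y, n_steps=200, nu=0.1):
        """Minimal component-wise L2 boosting with univariate linear base-learners.

        At each step the single covariate that best fits the current residuals
        is selected and its coefficient is updated by a small step `nu`,
        yielding implicit variable selection and shrinkage.
        """
        X = X - X.mean(axis=0)            # centre covariates
        offset = y.mean()
        residual = y - offset
        coef = np.zeros(X.shape[1])
        for _ in range(n_steps):
            # least-squares slope of each covariate against the current residuals
            betas = X.T @ residual / np.sum(X ** 2, axis=0)
            gain = betas ** 2 * np.sum(X ** 2, axis=0)   # explained sum of squares
            j = int(np.argmax(gain))                     # best-fitting base-learner
            coef[j] += nu * betas[j]                     # shrunken update
            residual -= nu * betas[j] * X[:, j]
        return offset, coef

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
    offset, coef = componentwise_l2_boosting(X, y)
    print(np.round(coef, 2))   # only informative covariates receive sizeable weights
    ```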

  17. Actitudes de Estudiantes Universitarios que Tomaron Cursos Introductorios de Estadistica y su Relacion con el Exito Academico en La Disciplina

    ERIC Educational Resources Information Center

    Colon-Rosa, Hector Wm.

    2012-01-01

    Considering the range of changes in the instruction and learning of statistics, several questions emerge regarding how those changes influence students' attitudes. Equally, other questions emerge to reflect that statistics is a fundamental course in the university academic programs because of its relevance to the professional development of the…

  18. Arab oil and gas directory 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    The directory provides detailed statistics and information on aspects of oil and gas production, exploration and developments in the 24 Arab countries of the Middle East and North Africa and in Iran. It includes the texts of relevant new laws and official documents, official surveys, current projects and developments, up-to-date statistics covering OPEC and OAPEC member countries, and has 26 maps.

  19. Condensation of an ideal gas obeying non-Abelian statistics.

    PubMed

    Mirza, Behrouz; Mohammadzadeh, Hosein

    2011-09-01

    We consider the thermodynamic geometry of an ideal non-Abelian gas. We show that, for a certain value of the fractional parameter and at the relevant maximum value of fugacity, the thermodynamic curvature has a singular point. This indicates a condensation such as Bose-Einstein condensation for non-Abelian statistics and we work out the phase transition temperature in various dimensions.

  20. Using Twitter to Identify and Respond to Food Poisoning: The Food Safety STL Project.

    PubMed

    Harris, Jenine K; Hawkins, Jared B; Nguyen, Leila; Nsoesie, Elaine O; Tuli, Gaurav; Mansour, Raed; Brownstein, John S

    Foodborne illness affects 1 in 4 US residents each year. Few of those sickened seek medical care or report the illness to public health authorities, complicating prevention efforts. Citizens who report illness identify food establishments with more serious and critical violations than found by regular inspections. New media sources, including online restaurant reviews and social media postings, have the potential to improve reporting. We implemented a Web-based Dashboard (HealthMap Foodborne Dashboard) to identify and respond to tweets about food poisoning from St Louis City residents. This report examines the performance of the Dashboard in its first 7 months after implementation in the City of St Louis Department of Health. We examined the number of relevant tweets captured and replied to, the number of foodborne illness reports received as a result of the new process, and the results of restaurant inspections following each report. In its first 7 months (October 2015-May 2016), the Dashboard captured 193 relevant tweets. Our replies to relevant tweets resulted in more filed reports than several previously existing foodborne illness reporting mechanisms in St Louis during the same time frame. The proportion of restaurants with food safety violations was not statistically different (P = .60) in restaurants inspected after reports from the Dashboard compared with those inspected following reports through other mechanisms. The Dashboard differs from other citizen engagement mechanisms in its use of current data, allowing direct interaction with constituents on issues when relevant to the constituent to provide time-sensitive education and mobilizing information. In doing so, the Dashboard technology has potential for improving foodborne illness reporting and can be implemented in other areas to improve response to public health issues such as suicidality, spread of Zika virus infection, and hospital quality.

  1. Decision making in acquiring medical technologies in Israeli medical centers: a preliminary study.

    PubMed

    Greenberg, Dan; Pliskin, Joseph S; Peterburg, Yitzhak

    2003-01-01

    This preliminary study had two objectives: a) charting the considerations relevant to decisions about acquisition of new medical technology at the hospital level; and b) creating a basis for the development of a research tool that will examine the function of the Israeli health system in assessment of new medical technologies. A comprehensive literature review and in-depth interviews with decision makers at different levels allowed formulation of criteria considered by decision makers when they decide to purchase and use (or disallow the use) of new medical technology. The resulting questionnaire was sent to medical center directors, along with a letter explaining the goals of the study. The questionnaire included 31 possible considerations for decision making concerning the acquisition of new medical technology by medical centers. The interviewees were asked to indicate the relevance of each consideration in the decision-making process. The most relevant criteria for the adoption of new technologies related to the need for a large capital investment, clinical efficacy of the technology as well as its influence on side effects and complication rates, and a formal approval by the Ministry of Health. Most interviewees stated that pressures exerted by the industry, by patients, or by senior physicians in the hospital are less relevant to decision making. Very small and usually not statistically significant differences in the ranking of hospital directors were found according to the hospitals' ownership, size, or location. The present study is a basis for a future study that will map and describe the function of hospital decision makers within the area of new technology assessment and the decision-making process in the adoption of new healthcare technologies.

  2. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinions, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, and relying on one type of feature alone limits the performance of these models. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics for single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping is then learned to predict quality scores using support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
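
    A rough sketch of the two feature families described, spatial pixel statistics and neighbouring gradient-magnitude similarity; the window size, the constant `c` and the exact summary statistics are assumptions for illustration, and the SVR training stage is omitted.

    ```python
    import numpy as np
    from scipy import ndimage

    def nr_iqa_features(image, c=0.0026):
        """Toy feature extraction in the spirit of NSS + gradient-similarity NR-IQA.

        Spatial domain: variance and kurtosis of mean-subtracted, contrast-
        normalised pixel values (often modelled by a generalised Gaussian).
        Gradient domain: similarity of gradient magnitudes between horizontally
        neighbouring pixels, GMS = (2*g1*g2 + c) / (g1**2 + g2**2 + c).
        """
        img = np.asarray(image, float)
        mu = ndimage.uniform_filter(img, size=7)
        sigma = np.sqrt(np.maximum(ndimage.uniform_filter(img ** 2, size=7) - mu ** 2, 0))
        mscn = (img - mu) / (sigma + 1.0)               # normalised luminance
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        gmag = np.hypot(gx, gy)
        g1, g2 = gmag[:, :-1], gmag[:, 1:]              # horizontal neighbours
        gms = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
        kurtosis = ((mscn - mscn.mean()) ** 4).mean() / mscn.var() ** 2
        return np.array([mscn.var(), kurtosis, gms.mean(), gms.std()])

    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    noisy = clean + rng.normal(scale=0.2, size=clean.shape)
    print("clean :", np.round(nr_iqa_features(clean), 3))
    print("noisy :", np.round(nr_iqa_features(noisy), 3))
    ```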

  3. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
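
    A standard way to quantify such model error is the relative entropy between two Gaussian approximations of the climate state, split into a mean ("signal") and a covariance ("dispersion") contribution; the sketch below is a generic version of that formula with made-up numbers, not the paper's specific metric.

    ```python
    import numpy as np

    def gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q):
        """Relative entropy D(p || q) between two multivariate Gaussians.

        Splits into a 'signal' term from the mean error and a 'dispersion'
        term from the covariance mismatch.
        """
        mu_p, mu_q = np.asarray(mu_p, float), np.asarray(mu_q, float)
        cov_p, cov_q = np.asarray(cov_p, float), np.asarray(cov_q, float)
        d = len(mu_p)
        cov_q_inv = np.linalg.inv(cov_q)
        diff = mu_q - mu_p
        signal = 0.5 * diff @ cov_q_inv @ diff
        dispersion = 0.5 * (np.trace(cov_q_inv @ cov_p) - d
                            - np.log(np.linalg.det(cov_p) / np.linalg.det(cov_q)))
        return signal + dispersion

    # Hypothetical 'true' climate statistics p versus a model approximation q.
    mu_p, cov_p = np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
    mu_q, cov_q = np.array([0.2, -0.1]), np.array([[1.2, 0.1], [0.1, 0.9]])
    print(round(gaussian_relative_entropy(mu_p, cov_p, mu_q, cov_q), 4))
    ```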

  4. Investigating differences in health-related quality of life of Greeks and Albanian immigrants with the generic EQ-5D questionnaire.

    PubMed

    Lahana, Eleni; Niakas, Dimitris

    2013-01-01

    Low socioeconomic status (SES) has been related by previous studies to low self-perceived HRQoL. Health is a major determinant of society's welfare, and few studies have determined the relevant elements that contribute to health and quality of life in Greece. The aim of the study was to evaluate and test for differences in the HRQoL of Greek and Albanian immigrant populations according to ethnicity and their demographic and SES characteristics. The study was conducted in a sample of 660 age-matched and gender-matched Greeks and Albanian immigrants. Moderate or severe decrease in HRQoL was assessed with the generic tool EQ-5D. Differences were statistically analyzed by t-test and ANOVA. Also, logistic and linear regression analyses were conducted for the dependent variables of the EQ-5D dimensions and VAS scores, respectively. The Albanian immigrants reported better self-perceived health than their Greek counterparts. Health problems increase moderately with age and lower SES and are slightly higher for women than for men. Urban residence and higher education in both Greeks and Albanians are associated with worse HRQoL. There are some structural and compositional differences in the self-perceived quality of life between the two ethnicities, as estimated by EQ-5D. The combined information presents to public health providers the relevant data to assess health policies according to health needs.

  5. Toward resolution of the debate regarding purported crypto-Jews in a Spanish-American population: evidence from the Y chromosome.

    PubMed

    Sutton, Wesley K; Knight, Alec; Underhill, Peter A; Neulander, Judith S; Disotell, Todd R; Mountain, Joanna L

    2006-01-01

    The ethnic heritage of northernmost New Spain, including present-day northern New Mexico and southernmost Colorado, USA, is intensely debated. Local Spanish-American folkways and anecdotal narratives led to claims that the region was colonized primarily by secret- or crypto-Jews. Despite ethnographic criticisms, the notion of substantial crypto-Jewish ancestry among Spanish-Americans persists. We tested the null hypothesis that Spanish-Americans of northern New Mexico carry essentially the same profile of paternally inherited DNA variation as the peoples of Iberia, and the relevant alternative hypothesis that the sampled Spanish-Americans possess inherited DNA variation that reflects Jewish ancestry significantly greater than that in present-day Iberia. We report frequencies of 19 Y-chromosome unique event polymorphism (UEP) biallelic markers for 139 men from across northern New Mexico and southern Colorado, USA, who self-identify as 'Spanish-American'. We used three different statistical tests of differentiation to compare frequencies of major UEP-defined clades or haplogroups with published data for Iberians, Jews, and other Mediterranean populations. We also report frequencies of derived UEP markers within each major haplogroup, compared with published data for relevant populations. All tests of differentiation showed that, for frequencies of the major UEP-defined clades, Spanish-Americans and Iberians are statistically indistinguishable. All other pairwise comparisons, including between Spanish-Americans and Jews, and Iberians and Jews, revealed highly significant differences in UEP frequencies. Our results indicate that paternal genetic inheritance of Spanish-Americans is indistinguishable from that of Iberians and refute the popular and widely publicized scenario of significant crypto-Jewish ancestry of the Spanish-American population.

  6. Supervised learning methods for pathological arterial pulse wave differentiation: A SVM and neural networks approach.

    PubMed

    Paiva, Joana S; Cardoso, João; Pereira, Tânia

    2018-01-01

    The main goal of this study was to develop an automatic method, based on supervised learning, able to distinguish healthy from pathologic arterial pulse waves (APW), and to distinguish both of these from noisy waveforms (non-relevant segments of the signal), using data acquired during a clinical examination with a novel optical system. The APW dataset analysed was composed of signals acquired in a clinical environment from a total of 213 subjects, including healthy volunteers and non-healthy patients. The signals were parameterised by means of 39 pulse features: morphologic features, time-domain statistics, cross-correlation features and wavelet features. The multiclass Support Vector Machine Recursive Feature Elimination (SVM-RFE) method was used to select the most relevant features. A comparative study was performed to evaluate the performance of two classifiers: Support Vector Machine (SVM) and Artificial Neural Network (ANN). SVM achieved a statistically significantly better performance for this problem, with an average accuracy of 0.9917±0.0024 and an F-measure of 0.9925±0.0019, compared with ANN, which reached 0.9847±0.0032 and 0.9852±0.0031 for accuracy and F-measure, respectively. A significant difference was observed between the performances obtained with the SVM classifier using different numbers of features from the original set available. The comparison between SVM and ANN confirmed the higher performance of SVM. The results obtained in this study showed the potential of the proposed method to differentiate these three important signal outcomes (healthy, pathologic and noise) and to reduce the bias associated with clinical diagnosis of cardiovascular disease using APW. Copyright © 2017 Elsevier B.V. All rights reserved.
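
    A hedged scikit-learn sketch of the SVM-RFE feature-selection step followed by an SVM classifier; the synthetic data stand in for the 39 pulse-wave features and the three classes (healthy / pathologic / noise), and the kernel and hyperparameters are illustrative choices, not the study's settings.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for the 39 pulse-wave features and 3 classes.
    X, y = make_classification(n_samples=300, n_features=39, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    # Recursive feature elimination driven by a linear SVM, then classification.
    selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=1)
    model = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", C=10.0))
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
    ```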

  7. 50 CFR 648.260 - Specifications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Scientific and Statistical Committee (SSC), and any other relevant information, the Red Crab PDT shall... appropriate, concerning the environmental, economic, and social impacts of the recommendations. The Regional...

  8. GeneLab

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Thompson, Terri G.

    2015-01-01

    NASA GeneLab is expected to capture and distribute omics data and the experimental and process conditions most relevant to the research community in its statistical and theoretical analysis of NASA's omics data.

  9. Information Selection in Intelligence Processing

    DTIC Science & Technology

    2011-12-01

    given. Edges connecting nodes representing irrelevant persons with either relevant or irrelevant persons are added randomly, as in an Erdős–Rényi graph (Erdős & Rényi, 1959): for each irrelevant node i, and another node j (either relevant or irrelevant), there is a predetermined probability that... statistics for engineering and the sciences (7th ed.). Boston: Duxbury Press. Erdős, P., & Rényi, A. (1959). "On Random Graphs," Publicationes

  10. What Are the Odds? Modern Relevance and Bayes Factor Solutions for MacAlister's Problem from the 1881 "Educational Times"

    ERIC Educational Resources Information Center

    Jamil, Tahira; Marsman, Maarten; Ly, Alexander; Morey, Richard D.; Wagenmakers, Eric-Jan

    2017-01-01

    In 1881, Donald MacAlister posed a problem in the "Educational Times" that remains relevant today. The problem centers on the statistical evidence for the effectiveness of a treatment based on a comparison between two proportions. A brief historical sketch is followed by a discussion of two default Bayesian solutions, one based on a…

  11. Technical characteristics can make the difference in a surgical linear stapler. Or not?

    PubMed

    Giaccaglia, Valentina; Antonelli, Maria Serena; Addario Chieco, Paola; Cocorullo, Gianfranco; Cavallini, Marco; Gulotta, Gaspare

    2015-07-01

    Anastomotic leak (AL) after gastrointestinal surgery is a severe complication associated with relevant short- and long-term sequelae. Most anastomoses are currently performed with a surgical stapler, which is required to have appropriate characteristics to guarantee good performance. The aim of our study was to evaluate, in the laboratory, the pressure resistance and tensile strength of anastomoses performed with different surgical linear staplers available on the market. We studied three linear staplers, with different cartridges and staple heights, from three different companies, used for gastrointestinal anastomosis and gastric or intestinal closure. We performed 50 anastomoses for each device, with the pertinent cartridges, on fresh pig intestine, for a total of 350 anastomoses, then injected saline solution and recorded the pressure that provoked a leak on the staple line. There were no statistically significant differences between the mean pressures necessary to induce an AL in the various instruments (P > 0.05). To study tensile strength, we performed a total of 350 anastomoses with the different linear staplers on a special strong paper (Tyvek), then recorded the maximal tensile force that could open the anastomosis. There were no statistically significant differences between the different staplers in the strength necessary to open the staple line (P > 0.05). We demonstrated that different linear staplers from three companies available on the market give comparable anastomotic pressure resistance and tensile strength. This might suggest that small dissimilarities between different devices are not involved, at least as major parameters, in AL etiology. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Statistical genetics concepts and approaches in schizophrenia and related neuropsychiatric research.

    PubMed

    Schork, Nicholas J; Greenwood, Tiffany A; Braff, David L

    2007-01-01

    Statistical genetics is a research field that focuses on mathematical models and statistical inference methodologies that relate genetic variations (ie, naturally occurring human DNA sequence variations or "polymorphisms") to particular traits or diseases (phenotypes) usually from data collected on large samples of families or individuals. The ultimate goal of such analysis is the identification of genes and genetic variations that influence disease susceptibility. Although of extreme interest and importance, the fact that many genes and environmental factors contribute to neuropsychiatric diseases of public health importance (eg, schizophrenia, bipolar disorder, and depression) complicates relevant studies and suggests that very sophisticated mathematical and statistical modeling may be required. In addition, large-scale contemporary human DNA sequencing and related projects, such as the Human Genome Project and the International HapMap Project, as well as the development of high-throughput DNA sequencing and genotyping technologies have provided statistical geneticists with a great deal of very relevant and appropriate information and resources. Unfortunately, the use of these resources and their interpretation are not straightforward when applied to complex, multifactorial diseases such as schizophrenia. In this brief and largely nonmathematical review of the field of statistical genetics, we describe many of the main concepts, definitions, and issues that motivate contemporary research. We also provide a discussion of the most pressing contemporary problems that demand further research if progress is to be made in the identification of genes and genetic variations that predispose to complex neuropsychiatric diseases.

  13. Methodologic quality and relevance of references in pharmaceutical advertisements in a Canadian medical journal.

    PubMed

    Lexchin, J; Holbrook, A

    1994-07-01

    To evaluate the methodologic quality and relevance of references in pharmaceutical advertisements in the Canadian Medical Association Journal (CMAJ). Analytic study. All 114 references cited in the first 22 distinct pharmaceutical advertisements in volume 146 of CMAJ. Mean methodologic quality score (modified from the 6-point scale used to assess articles in the American College of Physicians' Journal Club) and mean relevance score (based on a new 5-point scale) for all references in each advertisement. Twenty of the 22 companies responded, sending 78 (90%) of the 87 references requested. The mean methodologic quality score was 58% (95% confidence limits [CL] 51% and 65%) and the mean relevance score 76% (95% CL 72% and 80%). The two mean scores were statistically lower than the acceptable score of 80% (p < 0.05), and the methodologic quality score was outside the preset clinically significant difference of 15%. The poor rating for methodologic quality was primarily because of the citation of references to low-quality review articles and "other" sources (i.e., other than reports of clinical trials). Half of the advertisements had a methodologic quality score of less than 65%, but only five had a relevance score of less than 65%. Although the relevance of most of the references was within minimal acceptable limits, the methodologic quality was often unacceptable. Because advertisements are an important part of pharmaceutical marketing and education, we suggest that companies develop written standards for their advertisements and monitor their advertisements for adherence to these standards. We also suggest that the Pharmaceutical Advertising Advisory Board develop more stringent guidelines for advertising and that it enforce these guidelines in a consistent, rigorous fashion.

  14. Methodologic quality and relevance of references in pharmaceutical advertisements in a Canadian medical journal.

    PubMed Central

    Lexchin, J; Holbrook, A

    1994-01-01

    OBJECTIVE: To evaluate the methodologic quality and relevance of references in pharmaceutical advertisements in the Canadian Medical Association Journal (CMAJ). DESIGN: Analytic study. DATA SOURCE: All 114 references cited in the first 22 distinct pharmaceutical advertisements in volume 146 of CMAJ. MAIN OUTCOME MEASURES: Mean methodologic quality score (modified from the 6-point scale used to assess articles in the American College of Physicians' Journal Club) and mean relevance score (based on a new 5-point scale) for all references in each advertisement. MAIN RESULTS: Twenty of the 22 companies responded, sending 78 (90%) of the 87 references requested. The mean methodologic quality score was 58% (95% confidence limits [CL] 51% and 65%) and the mean relevance score 76% (95% CL 72% and 80%). The two mean scores were statistically lower than the acceptable score of 80% (p < 0.05), and the methodologic quality score was outside the preset clinically significant difference of 15%. The poor rating for methodologic quality was primarily because of the citation of references to low-quality review articles and "other" sources (i.e., other than reports of clinical trials). Half of the advertisements had a methodologic quality score of less than 65%, but only five had a relevance score of less than 65%. CONCLUSIONS: Although the relevance of most of the references was within minimal acceptable limits, the methodologic quality was often unacceptable. Because advertisements are an important part of pharmaceutical marketing and education, we suggest that companies develop written standards for their advertisements and monitor their advertisements for adherence to these standards. We also suggest that the Pharmaceutical Advertising Advisory Board develop more stringent guidelines for advertising and that it enforce these guidelines in a consistent, rigorous fashion. PMID:8004560

  15. Mastication Evaluation With Unsupervised Learning: Using an Inertial Sensor-Based System.

    PubMed

    Lucena, Caroline Vieira; Lacerda, Marcelo; Caldas, Rafael; De Lima Neto, Fernando Buarque; Rativa, Diego

    2018-01-01

    There is a direct relationship between the prevalence of musculoskeletal disorders of the temporomandibular joint and orofacial disorders. A well-elaborated analysis of jaw movements provides relevant information for healthcare professionals to reach their diagnosis. Different approaches have been explored to track jaw movements so that mastication analysis becomes less subjective; however, all methods are still highly subjective, and the quality of the assessments depends largely on the experience of the health professional. In this paper, an accurate and non-invasive method based on a commercial low-cost inertial sensor (MPU6050) to measure jaw movements is proposed. The jaw-movement feature values are compared to those obtained with clinical analysis, showing no statistically significant difference between the two methods. Moreover, we propose the use of unsupervised paradigm approaches to cluster mastication patterns of healthy subjects and simulated patients with facial trauma. Two techniques were used in this paper to instantiate the method: Kohonen's Self-Organizing Maps and K-Means clustering. Both algorithms perform very well in processing jaw-movement data, showing encouraging results and the potential to provide a full assessment of masticatory function. The proposed method can be applied in real time, providing relevant dynamic information for healthcare professionals.
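
    A brief scikit-learn sketch of the K-Means half of this approach on hypothetical jaw-movement feature vectors; the feature values and the cluster count are assumptions for illustration, and the Self-Organizing Map variant is not shown.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)
    # Hypothetical feature vectors (e.g. cycle duration, amplitude, velocity)
    # for healthy subjects and for subjects simulating facial trauma.
    healthy = rng.normal(loc=[0.8, 25.0, 60.0], scale=[0.05, 2.0, 5.0], size=(40, 3))
    trauma = rng.normal(loc=[1.1, 15.0, 40.0], scale=[0.08, 2.5, 6.0], size=(40, 3))
    X = np.vstack([healthy, trauma])

    # Standardise features, then cluster into two mastication patterns.
    features = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print("cluster sizes:", np.bincount(labels))
    ```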

  16. Miki (Mitotic Kinetics Regulator) Immunoexpression in Normal Liver, Cirrhotic Areas and Hepatocellular Carcinomas: a Preliminary Study with Clinical Relevance.

    PubMed

    Fernández-Vega, Iván; Santos-Juanes, Jorge; Camacho-Urkaray, Emma; Lorente-Gea, Laura; García, Beatriz; Gutiérrez-Corres, Francisco Borja; Quirós, Luis M; Guerra-Merino, Isabel; Aguirre, José Javier

    2018-02-12

    Hepatocellular carcinoma (HCC) is the most common type of primary malignant tumor in the liver. One of the main features of cancer survival is the generalized loss of growth control exhibited by cancer cells, and Miki is a protein related to the immunoglobulin superfamily that plays an important role in mitosis. We aim to study protein expression levels of Miki in non-tumoral liver and 20 HCCs recruited from a Pathology Department. Clinical information was also obtained. A tissue microarray was performed, and immunohistochemical techniques applied to study protein expression levels of Miki. In normal liver, Miki was weakly expressed, showing nuclear staining in the hepatocytes. Cirrhotic areas and HCCs showed a variety of staining patterns. Most HCC samples showed positive expression, with three different staining patterns being discernible: nuclear, cytoplasmic and mixed. Statistical analysis showed a significant association between grade of differentiation, Ki-67 proliferative index, survival rates and staining patterns. This study has revealed the positive expression of Miki in normal liver, cirrhotic areas and HCCs. Three different staining patterns of Miki expression with clinical relevance were noted in HCCs.

  17. Benefit and harms of new anti-cancer drugs.

    PubMed

    Vera-Badillo, Francisco E; Al-Mubarak, Mustafa; Templeton, Arnoud J; Amir, Eitan

    2013-06-01

    Phase III randomized controlled trials (RCTs) assess clinically important differences in endpoints that reflect benefit to and harm of patients. Defining the benefit of cancer drugs can be difficult. Overall survival and quality of life are the most relevant primary endpoints, but the difficulty in measuring these means that other endpoints are often used, although their surrogacy or clinical relevance has not always been established. In general, advances in drug development have led to numerous new drugs entering the market. Pivotal RCTs of several new drugs have shown that benefit appears greater for targeted anticancer agents than for chemotherapeutic agents. This effect seems particularly evident with targeted agents evaluated in biomarker-driven studies. Unfortunately, new therapies have also shown an increase in toxicity. Such toxicity is not always evident in the initial reports of RCTs. This may be a result of a statistical inability to detect differences between arms of RCTs, or occasionally due to biased reporting. There are several examples where reports of new toxicities could only be found in drug labels. In some cases, the small improvement in survival has come at a cost of substantial excess toxicity, leading some to consider such therapy as having equipoise.

  18. Measuring hospital efficiency--comparing four European countries.

    PubMed

    Mateus, Céu; Joaquim, Inês; Nunes, Carla

    2015-02-01

    Performing international comparisons of efficiency usually has two main drawbacks: the lack of comparability of data from different countries and the appropriateness and adequacy of the data selected for efficiency measurement. With inpatient discharges for four countries, some of the problems of data comparability usually found in international comparisons were mitigated. The objectives are to assess and compare hospital efficiency levels within and between countries, using stochastic frontier analysis with both cross-sectional and panel data. Data from English (2005-2008), Portuguese (2002-2009), Spanish (2003-2009) and Slovenian (2005-2009) hospital discharges and characteristics are used. Weighted hospital discharges were considered as outputs, while the numbers of employees, physicians, nurses and beds were selected as inputs of the production function. Stochastic frontier analysis using both cross-sectional and panel data was performed, as well as ordinary least squares (OLS) analysis. The adequacy of the data was assessed with Kolmogorov-Smirnov and Breusch-Pagan/Cook-Weisberg tests. With the available data, efficiency measurement using stochastic frontier analysis on cross-sectional data proved largely redundant. The likelihood ratio test reveals that, with cross-sectional data, stochastic frontier analysis (SFA) is not statistically different from OLS for the Portuguese data, while SFA and OLS estimates are statistically different for the Spanish, Slovenian and English data. In the panel data, the inefficiency term is statistically different from 0 in all four countries analysed, though for Portugal it is still close to 0. Panel data are preferred over cross-sectional analysis because the results are more robust. For all countries except Slovenia, beds and employees are relevant inputs for the production process. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  19. Subjective memory complaints, depressive symptoms and cognition in patients attending a memory outpatient clinic.

    PubMed

    Lehrner, J; Moser, D; Klug, S; Gleiß, A; Auff, E; Dal-Bianco, P; Pusswald, G

    2014-03-01

    The goals of this study were to establish prevalence of subjective memory complaints (SMC) and depressive symptoms (DS) and their relation to cognitive functioning and cognitive status in an outpatient memory clinic cohort. Two hundred forty-eight cognitively healthy controls and 581 consecutive patients with cognitive complaints who fulfilled the inclusion criteria were included in the study. A statistically significant difference (p < 0.001) between control group and patient group regarding mean SMC was detected. 7.7% of controls reported a considerable degree of SMC, whereas 35.8% of patients reported considerable SMC. Additionally, a statistically significant difference (p < 0.001) between controls and patient group regarding Beck depression score was detected. 16.6% of controls showed a clinical relevant degree of DS, whereas 48.5% of patients showed DS. An analysis of variance revealed a statistically significant difference across all four groups (control group, SCI group, naMCI group, aMCI group) (p < 0.001). Whereas 8% of controls reported a considerable degree of SMC, 34% of the SCI group, 31% of the naMCI group, and 54% of the aMCI group reported considerable SMC. A two-factor analysis of variance with the factors cognitive status (controls, SCI group, naMCI group, aMCI group) and depressive status (depressed vs. not depressed) and SMC as dependent variable revealed that both factors were significant (p < 0.001), whereas the interaction was not (p = 0.820). A large proportion of patients seeking help in a memory outpatient clinic report considerable SMC, with an increasing degree from cognitively healthy elderly to aMCI. Depressive status increases SMC consistently across groups with different cognitive status.

  20. Effects of a standardised extract of Trifolium pratense (Promensil) at a dosage of 80mg in the treatment of menopausal hot flushes: A systematic review and meta-analysis.

    PubMed

    Myers, S P; Vigar, V

    2017-01-15

    To critically assess the evidence for a specific standardised extract of Trifolium pratense isoflavones (Promensil) at a dosage of 80 mg/day in the treatment of menopausal hot flushes. Systematic literature searches were performed in Medline, Scopus, CINAHL Plus, Cochrane, AMED and InforRMIT, with citations obtained from 1996 to March 2016. Reference lists were checked, corresponding authors contacted and the grey literature searched for additional publications. Studies were selected according to predefined inclusion and exclusion criteria. All randomised clinical trials of a specific standardised extract of Trifolium pratense isoflavones (Promensil) used as a mono-component at 80 mg/day and measuring vasomotor symptoms were included. Data extraction and quality assessment were performed independently by one reviewer and validated by a second, with any disagreements settled by discussion. Weighted mean differences and 95% confidence intervals were calculated for continuous data using the fixed-effects model. Twenty potentially relevant papers were identified, with only five studies meeting the inclusion criteria. The meta-analysis demonstrated a statistically significant and clinically relevant reduction in hot flush frequency in the active treatment group compared to placebo (weighted mean difference 3.63 hot flushes per day [95% CI 2.70-4.56]; p < 0.00001). Due to a lack of homogeneity, a priori defined sub-group analyses were performed, demonstrating a substantive difference between cross-over and parallel-arm clinical trial designs. There is evidence for a statistically and clinically significant benefit of using a specific standardised extract of red clover isoflavones (Promensil) at 80 mg/day for treating hot flushes in menopausal women across the 3 studies included in the meta-analysis. The preparation was safe over the short-term duration of the studies (3 months). Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.
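
    A minimal sketch of the fixed-effects inverse-variance pooling of weighted mean differences used in such meta-analyses; the per-study values below are placeholders, not data from the included trials.

    ```python
    import numpy as np

    def fixed_effect_pooled(mean_diffs, std_errors):
        """Inverse-variance fixed-effect pooling of per-study mean differences."""
        md = np.asarray(mean_diffs, float)
        se = np.asarray(std_errors, float)
        w = 1.0 / se ** 2                       # inverse-variance weights
        pooled = np.sum(w * md) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
        return pooled, ci

    # Placeholder per-study reductions in daily hot flush frequency (not real data).
    pooled, ci = fixed_effect_pooled([3.2, 4.1, 3.8], [0.9, 1.1, 0.7])
    print("pooled WMD %.2f, 95%% CI (%.2f, %.2f)" % (pooled, *ci))
    ```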

  1. Efficacy of exposure versus cognitive therapy in anxiety disorders: systematic review and meta-analysis

    PubMed Central

    2011-01-01

    Background: There is growing evidence of the effectiveness of Cognitive Behavioural Therapy (CBT) for a wide range of psychological disorders. There is continued controversy about whether challenging maladaptive thoughts, rather than the use of behavioural interventions alone, is associated with the greatest efficacy. However, little is known about the relative efficacy of the various components of CBT. This review aims to compare the relative efficacy of Cognitive Therapy (CT) versus Exposure (E) for a range of anxiety disorders using the most clinically relevant outcome measures and estimating the summary relative efficacy by combining the studies in a meta-analysis. Methods: PsycINFO, MEDLINE and EMBASE were searched from the first available year to May 2010. All randomised controlled studies comparing the efficacy of exposure with cognitive therapy were included. Odds ratios (OR) or standardised mean differences (Hedges' g) for the most clinically relevant primary outcomes were calculated. Outcomes of the studies were grouped according to specific disorders and were combined in meta-analyses exploring short-term and long-term outcomes. Results: Twenty randomised controlled trials (n = 1,308) directly comparing the efficacy of CT and E in anxiety disorders were included in the meta-analysis. No statistically significant difference in the relative efficacy of CT and E was revealed in Post-Traumatic Stress Disorder (PTSD), Obsessive Compulsive Disorder (OCD) or Panic Disorder (PD). There was a statistically significant difference favouring CT over E in Social Phobia for both the short-term (Z = 3.72, p = 0.0002) and the long-term (Z = 3.28, p = 0.001) outcomes. Conclusions: On the basis of the extant literature, there appears to be no evidence of differential efficacy between cognitive therapy and exposure in PD, PTSD and OCD, and strong evidence of superior efficacy of cognitive therapy in social phobia. PMID:22185596
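
    For reference, a short sketch of Hedges' g, the standardised mean difference with small-sample correction reported in such comparisons; the group means, standard deviations and sample sizes below are hypothetical.

    ```python
    import numpy as np

    def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
        """Standardised mean difference with Hedges' small-sample correction."""
        # pooled standard deviation
        sp = np.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
        d = (mean1 - mean2) / sp                    # Cohen's d
        j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)     # small-sample correction factor
        return j * d

    # Hypothetical post-treatment anxiety scores: cognitive therapy vs exposure.
    print(round(hedges_g(mean1=18.0, sd1=6.0, n1=30, mean2=21.0, sd2=6.5, n2=32), 3))
    ```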

  2. Percolation under noise: Detecting explosive percolation using the second-largest component

    NASA Astrophysics Data System (ADS)

    Viles, Wes; Ginestet, Cedric E.; Tang, Ariana; Kramer, Mark A.; Kolaczyk, Eric D.

    2016-05-01

    We consider the problem of distinguishing between different rates of percolation under noise. A statistical model of percolation is constructed allowing for the birth and death of edges as well as the presence of noise in the observations. This graph-valued stochastic process is composed of a latent and an observed nonstationary process, where the observed graph process is corrupted by type-I and type-II errors. This produces a hidden Markov graph model. We show that for certain choices of parameters controlling the noise, the classical (Erdős-Rényi) percolation is visually indistinguishable from a more rapid form of percolation. In this setting, we compare two different criteria for discriminating between these two percolation models, based on the interquartile range (IQR) of the first component's size, and on the maximal size of the second-largest component. We show through data simulations that this second criterion outperforms the IQR of the first component's size, in terms of discriminatory power. The maximal size of the second component therefore provides a useful statistic for distinguishing between different rates of percolation, under physically motivated conditions for the birth and death of edges, and under noise. The potential application of the proposed criteria for the detection of clinically relevant percolation in the context of applied neuroscience is also discussed.
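
    A small networkx sketch that grows a random graph edge by edge and tracks the largest and second-largest connected components, the latter being the discriminating statistic proposed above; edge deaths and observational noise are omitted for brevity.

    ```python
    import random
    import networkx as nx

    def component_trajectory(n=500, n_edges=600, seed=0):
        """Track largest and second-largest component sizes as edges are added."""
        rng = random.Random(seed)
        g = nx.empty_graph(n)
        sizes = []
        while g.number_of_edges() < n_edges:
            u, v = rng.sample(range(n), 2)       # Erdos-Renyi-style random edge
            g.add_edge(u, v)
            comps = sorted((len(c) for c in nx.connected_components(g)), reverse=True)
            second = comps[1] if len(comps) > 1 else 0
            sizes.append((comps[0], second))
        return sizes

    traj = component_trajectory()
    largest, second = zip(*traj)
    print("max size of second-largest component:", max(second))
    ```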

  3. Unique and Overlapping Symptoms in Schizophrenia Spectrum and Dissociative Disorders in Relation to Models of Psychopathology: A Systematic Review

    PubMed Central

    Renard, Selwyn B.; Huntjens, Rafaele J. C.; Lysaker, Paul H.; Moskowitz, Andrew; Aleman, André; Pijnenborg, Gerdina H. M.

    2017-01-01

    Schizophrenia spectrum disorders (SSDs) and dissociative disorders (DDs) are described in the fifth edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-5) and tenth edition of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) as 2 categorically distinct diagnostic categories. However, several studies indicate high levels of co-occurrence between these diagnostic groups, which might be explained by overlapping symptoms. The aim of this systematic review is to provide a comprehensive overview of the research concerning overlap and differences in symptoms between schizophrenia spectrum and DDs. For this purpose the PubMed, PsycINFO, and Web of Science databases were searched for relevant literature. The literature contained a large body of evidence showing the presence of symptoms of dissociation in SSDs. Although there are quantitative differences between diagnoses, overlapping symptoms are not limited to certain domains of dissociation, nor to nonpathological forms of dissociation. In addition, dissociation seems to be related to a history of trauma in SSDs, as is also seen in DDs. There is also evidence showing that positive and negative symptoms typically associated with schizophrenia may be present in DD. Implications of these results are discussed with regard to different models of psychopathology and clinical practice. PMID:27209638

  4. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

    Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the method using morphometric features extracted from the segmented images, the distributions of these features were tested for statistically significant differences. The method achieved a high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented a good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and thus leading to more reliable results.

  5. The ALICE Electronic Logbook

    NASA Astrophysics Data System (ADS)

    Altini, V.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Divià, R.; Fuchs, U.; Makhlyueva, I.; Roukoutakis, F.; Schossmaier, K.; Soòs, C.; Vande Vyvre, P.; Von Haller, B.; ALICE Collaboration

    2010-04-01

    All major experiments need tools that provide a way to keep a record of events and activities, both during commissioning and operations. In ALICE (A Large Ion Collider Experiment) at CERN, this task is performed by the ALICE Electronic Logbook (eLogbook), a custom-made application developed and maintained by the Data-Acquisition (DAQ) group. Started as a statistics repository, the eLogbook has evolved to become not only a fully functional electronic logbook, but also a massive information repository used to store the conditions and statistics of several online systems. It is currently used by more than 600 users in 30 different countries and plays an important role in the daily activities of the ALICE collaboration. This paper will describe the LAMP (Linux, Apache, MySQL and PHP) based architecture of the eLogbook, the database schema and the relevance of the information stored in the eLogbook to the different ALICE actors, not only for near real-time procedures but also for long-term data mining and analysis. It will also present the web interface, including the different technologies used, the implemented security measures and the current main features. Finally, it will present the roadmap for the future, including a migration to the web 2.0 paradigm, the handling of the database's ever-increasing data volume and the deployment of data-mining tools.

  6. An in Vivo Experimental Comparison of Stainless Steel and Titanium Schanz Screws for External Fixation.

    PubMed

    Ganser, Antonia; Thompson, Rosemary E; Tami, Ivan; Neuhoff, Dirk; Steiner, Adrian; Ito, Keita

    2007-02-01

    To compare the clinical benefits of stainless steel (SS) and titanium (Ti) in reducing pin track irritation/infection and pin loosening during external fracture fixation. A tibial gap osteotomy was created in 17 sheep and stabilized with four Schanz screws of either SS or Ti and an external fixation frame. Over the 12-week observation period, pin loosening was assessed by grading the radiolucency around the pins and measuring the extraction torque on pin removal at sacrifice. Irritation/infection was assessed with weekly clinical pin track grading. A histological analysis of the tissue adjacent to the pin site was made to assess biocompatibility. A statistically non-significant trend for less bone resorption around Ti pins was found during the early observation period. However, at sacrifice, there was no difference between the two materials. Also, there was no difference in the extraction torque, and there was similar remodeling and apposition of the bone around the pins. A statistically non-significant trend for more infection around SS pins at sacrifice was found. Histology showed a slightly higher prevalence of reactionary cells in SS samples, but was otherwise not much different from that around Ti pins. There is no clinically relevant advantage to either SS or Ti pins in reducing pin loosening or pin track irritation/infection.

  7. Evidence Integration in Natural Acoustic Textures during Active and Passive Listening

    PubMed Central

    Rupp, Andre; Celikel, Tansu

    2018-01-01

    Many natural sounds can be well described on a statistical level, for example, wind, rain, or applause. Even though the spectro-temporal profile of these acoustic textures is highly dynamic, changes in their statistics are indicative of relevant changes in the environment. Here, we investigated the neural representation of change detection in natural textures in humans, and specifically addressed whether active task engagement is required for the neural representation of this change in statistics. Subjects listened to natural textures whose spectro-temporal statistics were modified at variable times by a variable amount. Subjects were instructed to either report the detection of changes (active) or to passively listen to the stimuli. A subset of passive subjects had performed the active task before (passive-aware vs passive-naive). Psychophysically, longer exposure to pre-change statistics was correlated with faster reaction times and better discrimination performance. EEG recordings revealed that the build-up rate and size of parieto-occipital (PO) potentials reflected change size and change time. Reduced effects were observed in the passive conditions. While P2 responses were comparable across conditions, slope and height of PO potentials scaled with task involvement. Neural source localization identified a parietal source as the main contributor of change-specific potentials, in addition to more limited contributions from auditory and frontal sources. In summary, the detection of statistical changes in natural acoustic textures is predominantly reflected in parietal locations both on the skull and source level. The scaling in magnitude across different levels of task involvement suggests a context-dependent degree of evidence integration. PMID:29662943

  8. Evidence Integration in Natural Acoustic Textures during Active and Passive Listening.

    PubMed

    Górska, Urszula; Rupp, Andre; Boubenec, Yves; Celikel, Tansu; Englitz, Bernhard

    2018-01-01

    Many natural sounds can be well described on a statistical level, for example, wind, rain, or applause. Even though the spectro-temporal profile of these acoustic textures is highly dynamic, changes in their statistics are indicative of relevant changes in the environment. Here, we investigated the neural representation of change detection in natural textures in humans, and specifically addressed whether active task engagement is required for the neural representation of this change in statistics. Subjects listened to natural textures whose spectro-temporal statistics were modified at variable times by a variable amount. Subjects were instructed to either report the detection of changes (active) or to passively listen to the stimuli. A subset of passive subjects had performed the active task before (passive-aware vs passive-naive). Psychophysically, longer exposure to pre-change statistics was correlated with faster reaction times and better discrimination performance. EEG recordings revealed that the build-up rate and size of parieto-occipital (PO) potentials reflected change size and change time. Reduced effects were observed in the passive conditions. While P2 responses were comparable across conditions, slope and height of PO potentials scaled with task involvement. Neural source localization identified a parietal source as the main contributor of change-specific potentials, in addition to more limited contributions from auditory and frontal sources. In summary, the detection of statistical changes in natural acoustic textures is predominantly reflected in parietal locations both on the skull and source level. The scaling in magnitude across different levels of task involvement suggests a context-dependent degree of evidence integration.

  9. Three-dimensional virtual planning in orthognathic surgery enhances the accuracy of soft tissue prediction.

    PubMed

    Van Hemelen, Geert; Van Genechten, Maarten; Renier, Lieven; Desmedt, Maria; Verbruggen, Elric; Nadjmi, Nasser

    2015-07-01

    Throughout the history of computing, there have been continual efforts to narrow the gap between the physical world and the digital world behind the screen. Recent advances in three-dimensional (3D) virtual surgery programs have reduced this gap significantly. Although 3D-assisted surgery is now widely available for orthognathic surgery, one might still argue whether a 3D virtual planning approach is a better alternative to a conventional two-dimensional (2D) planning technique. The purpose of this study was to compare the accuracy of a traditional 2D technique and a 3D computer-aided prediction method. A double-blind, randomised, prospective study was performed to compare the prediction accuracy of a traditional 2D planning technique versus a 3D computer-aided planning approach. The accuracy of the hard and soft tissue profile predictions using both planning methods was investigated. There was a statistically significant difference between 2D and 3D soft tissue planning (p < 0.05). The statistically significant difference found between 2D and 3D planning and the actual soft tissue outcome was not confirmed by a statistically significant difference between methods. The 3D planning approach provides more accurate soft tissue planning. However, 2D orthognathic planning is comparable to 3D planning when it comes to hard tissue planning. This study provides relevant results for choosing between 3D and 2D planning in clinical practice. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. [Characteristics of Schizophrenia Patients' Homicide Behaviors and Their Correlations with Criminal Capacity].

    PubMed

    Sun, Z W; Shi, T T; Fu, P X

    2017-02-01

    To explore the characteristics of schizophrenia patients' homicide behaviors and the factors influencing assessments of criminal capacity. Indicators such as demographic and clinical data, characteristics of criminal behaviors, and criminal capacity were collected with a self-designed investigation form and compared for suspects with homicide behavior who were diagnosed by forensic psychiatry as having schizophrenia (n=110) or as mentally normal (n=70). The factors influencing the assessment of criminal capacity for the suspects diagnosed with schizophrenia were also analyzed using logistic regression analysis. There were no statistically significant differences between the schizophrenia group and the mentally normal group concerning age, gender, education and marital status (P>0.05). There were statistically significant differences between the two groups concerning thought disorder, emotional state and social function before the crime (P<0.05), and there were statistically significant differences in some characteristics of the case, such as aggressive history, cue, trigger, plan, criminal incentives, object of crime, circumstance cognition and self-protection (P<0.05). Multivariate logistic regression analysis suggested that thought disorder, emotional state, social function, criminal incentives, plan and self-protection before the crime in the schizophrenia group were positively correlated with criminal capacity (P<0.05). The relevant influences of psychopathology and crime characteristics should be considered comprehensively to improve the accuracy of the criminal capacity evaluation for suspects diagnosed with schizophrenia who show homicide behavior. Copyright© by the Editorial Department of Journal of Forensic Medicine

  11. Validity of midday total testosterone levels in older men with erectile dysfunction.

    PubMed

    Welliver, R Charles; Wiser, Herbert J; Brannigan, Robert E; Feia, Kendall; Monga, Manoj; Köhler, Tobias S

    2014-07-01

    Based on studies showing the circadian rhythmicity of testosterone, the optimal time of day to draw total testosterone in men has classically been reported as between 8 and 11 a.m. However, further studies demonstrated that this circadian rhythmicity becomes blunted with age. We retrospectively reviewed the charts of 2,569 men who presented with erectile dysfunction for total testosterone values and draw times. We compared the men by age group, including less than 40 years and 5-year groupings after age 40 years. Total testosterone was analyzed for variability during the most common draw time hours (7 a.m. to 2 p.m.). Mean total testosterone at 7 to 9 a.m. and 9 a.m. to 2 p.m. clinically and statistically differed only in men younger than 40 vs 40 to 44 years old (mean difference 207 ng/dl, 95% CI 98-315, p = 0.0004 vs 149 ng/dl, 95% CI 36-262, p = 0.01). No other group showed a clinically and statistically significant difference between those periods. Total testosterone in men with erectile dysfunction who are younger than 45 years should be drawn as close to 7 a.m. as possible because a statistically and clinically relevant decrease in testosterone will occur during the course of the day. Men older than 45 years with erectile dysfunction can have total testosterone drawn at any time before 2 p.m. without misleading results. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
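
    As a rough illustration of the comparison reported above (a mean difference between draw-time windows with a 95% CI), the sketch below applies Welch's t test to simulated testosterone values; the numbers are placeholders, not study data, and the original analysis may have used a different test.

      # Sketch: comparing mean total testosterone between early (7-9 a.m.) and later
      # (9 a.m.-2 p.m.) draw times within one age group. All values are simulated.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      early = rng.normal(480, 180, size=120)   # ng/dl, hypothetical 7-9 a.m. draws
      late = rng.normal(330, 170, size=150)    # ng/dl, hypothetical 9 a.m.-2 p.m. draws

      t, p = stats.ttest_ind(early, late, equal_var=False)  # Welch's t test
      diff = early.mean() - late.mean()
      se = np.sqrt(early.var(ddof=1) / len(early) + late.var(ddof=1) / len(late))
      ci = (diff - 1.96 * se, diff + 1.96 * se)             # approximate 95% CI
      print(f"mean difference = {diff:.0f} ng/dl, 95% CI {ci[0]:.0f} to {ci[1]:.0f}, p = {p:.4f}")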

  12. Social functioning of elderly persons with malignant diseases.

    PubMed

    Berat, Svetlana; Nešković-Konstantinović, Zora; Nedović, Goran; Rapaić, Dragan; Marinković, Dragan

    2015-01-01

    Malignant disease, its treatment and consequences of treatment can often lead to social marginalization and reduced quality of life. The aim of this research was to determine how elderly patients with malignant diseases function in their social environment. A sociodemographic questionnaire and interview were used to investigate a group of 49 elderly persons undergoing adjuvant chemotherapy treatment against early carcinomas (P1), and a group of 51 elderly persons with advanced stages of cancer undergoing systemic chemotherapy (P2). There were two cycles of assessment: one just before the beginning of the first cycle of adjuvant or systemic chemotherapy, and the other three months later. The research paradigm was based on the relation between individual treatment and the impact of the malignant disease on functional and social incompetence. The obtained findings were compared with a group of 50 healthy elderly people (K) who share the same relevant features but do not suffer from malignant diseases. It was found that most healthy older people live in shared households, whereas those who suffer from malignant diseases mostly live in separate households. In both patient groups and in the healthy group, older people are mostly cared for by their children. Individuals in both patient groups were frequently visited by their relatives during the initial stages of treatment, unlike the elderly people in the control group. However, the difference did not reach statistical significance. Three months after the beginning of chemotherapy, there was a statistically relevant difference in favor of the group undergoing adjuvant treatment. Home visits eventually became less frequent, whereas communication by telephone became more frequent. It was also found that visits by friends and neighbors were statistically more frequent among subjects who underwent adjuvant treatment, both before the treatment began and three months later, when compared to the other groups. Our research shows that elderly people are subject to social exclusion, especially those with malignant diseases. Special care should be dedicated to the monitoring of social functioning during treatment of patients with malignant disease, considering the detected trend of deterioration and its significance for further recovery and cure.

  13. Understanding adolescent student perceptions of science education

    NASA Astrophysics Data System (ADS)

    Ebert, Ellen Kress

    This study used the Relevance of Science Education (ROSE) survey (Sjoberg & Schreiner, 2004) to examine topics of interest and perspectives of secondary science students in a large school district in the southwestern U.S. A situated learning perspective was used to frame the project. The research questions of this study focused on (a) perceptions students have about themselves and their science classroom and how these beliefs may influence their participation in the community of practice of science; (b) consideration of how a future science classroom where the curriculum is framed by the Next Generation Science Standards might foster students' beliefs and perceptions about science education and their legitimate peripheral participation in the community of practice of science; and (c) reflecting on their school science interests and perspectives, what can be inferred about students' identities as future scientists or STEM field professionals? Data were collected from 515 second year science students during a 4-week period in May of 2012 using a Web-based survey. Data were disaggregated by gender and ethnicity and analyzed descriptively and by statistical comparison between groups. Findings for Research Question 1 indicated that boys and girls showed statistically significant differences in scientific topics of interest. There were no statistically significant differences between ethnic groups. For Research Question 2, it was determined that participants reported an increase in their interest when they deemed the context of the content to be personally relevant. Results for Research Question 3 showed that participants do not see themselves as youthful scientists or as becoming scientists. While participants value the importance of science in their lives and think all students should take science, they do not aspire to careers in science. Based on this study, a need for potential future work has been identified in three areas: (a) exploration of the perspectives and interests of non-mainstream students and urban students whose representation in this study was limited; (b) investigation of topics in which students expressed low interest; and (c) development and design of authentic communities of practice in the science classroom.

  14. Determining relevant parameters for a statistical tropical cyclone genesis tool based upon global model output

    NASA Astrophysics Data System (ADS)

    Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.

    2013-12-01

    Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions which are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, probability of detection and false alarm rate vary spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. In order to explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether or not there are systematic differences between successful forecasts (hits), false alarms, and missed events. This study will examine composites of a number of physically relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) among each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t test, Mann-Whitney U test) to determine whether or not any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies. The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.
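
    The abstract names Student's t test and the Mann-Whitney U test as the planned comparisons; the sketch below applies both to a single hypothetical parameter (deep-layer vertical wind shear) for hit versus false-alarm composites, using synthetic values rather than model output.

      # Sketch: the two statistical tests named above, applied to one environmental
      # parameter for hit vs. false-alarm forecasts. All values are placeholders.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      shear_hits = rng.normal(8.0, 3.0, size=60)            # m/s, hypothetical
      shear_false_alarms = rng.normal(11.0, 4.0, size=45)   # m/s, hypothetical

      t_stat, t_p = stats.ttest_ind(shear_hits, shear_false_alarms, equal_var=False)
      u_stat, u_p = stats.mannwhitneyu(shear_hits, shear_false_alarms, alternative="two-sided")
      print(f"t test: p = {t_p:.4f}; Mann-Whitney U: p = {u_p:.4f}")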

  15. An empirical comparison of key statistical attributes among potential ICU quality indicators.

    PubMed

    Brown, Sydney E S; Ratcliffe, Sarah J; Halpern, Scott D

    2014-08-01

    Good quality indicators should have face validity, relevance to patients, and be able to be measured reliably. Beyond these general requirements, good quality indicators should also have certain statistical properties, including sufficient variability to identify poor performers, relative insensitivity to severity adjustment, and the ability to capture what providers do rather than patients' characteristics. We assessed the performance of candidate indicators of ICU quality on these criteria. Indicators included ICU readmission, mortality, several length of stay outcomes, and the processes of venous-thromboembolism and stress ulcer prophylaxis provision. Retrospective cohort study. One hundred thirty-eight U.S. ICUs from 2001-2008 in the Project IMPACT database. Two hundred sixty-eight thousand eight hundred twenty-four patients discharged from U.S. ICUs. None. We assessed indicators' (1) variability across ICU-years; (2) degree of influence by patient vs. ICU and hospital characteristics using the Omega statistic; (3) sensitivity to severity adjustment by comparing the area under the receiver operating characteristic curve (AUC) between models including vs. excluding patient variables, and (4) correlation between risk adjusted quality indicators using a Spearman correlation. Large ranges of among-ICU variability were noted for all quality indicators, particularly for prolonged length of stay (4.7-71.3%) and the proportion of patients discharged home (30.6-82.0%), and ICU and hospital characteristics outweighed patient characteristics for stress ulcer prophylaxis (ω, 0.43; 95% CI, 0.34-0.54), venous thromboembolism prophylaxis (ω, 0.57; 95% CI, 0.53-0.61), and ICU readmissions (ω, 0.69; 95% CI, 0.52-0.90). Mortality measures were the most sensitive to severity adjustment (area under the receiver operating characteristic curve % difference, 29.6%); process measures were the least sensitive (area under the receiver operating characteristic curve % differences: venous thromboembolism prophylaxis, 3.4%; stress ulcer prophylaxis, 2.1%). None of the 10 indicators was clearly and consistently correlated with a majority of the other nine indicators. No indicator performed optimally across assessments. Future research should seek to define and operationalize quality in a way that is relevant to both patients and providers.
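
    A minimal sketch of one of the assessments described above, the sensitivity of an indicator to severity adjustment, is given below: an outcome model is fitted with and without patient-level variables and the areas under the ROC curve are compared. The data and variable names are simulated stand-ins, not Project IMPACT fields or the study's actual model.

      # Sketch: AUC with vs. without patient-level (severity) variables for a
      # mortality indicator. Everything below is simulated for illustration.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 5000
      severity = rng.normal(size=n)            # patient severity score (hypothetical)
      icu_effect = rng.normal(size=n) * 0.3    # ICU/hospital-level signal (hypothetical)
      p = 1 / (1 + np.exp(-(-2.0 + 1.2 * severity + icu_effect)))
      died = rng.binomial(1, p)

      X_full = np.column_stack([severity, icu_effect])
      X_no_patient = icu_effect.reshape(-1, 1)

      auc_full = roc_auc_score(died, LogisticRegression().fit(X_full, died).predict_proba(X_full)[:, 1])
      auc_reduced = roc_auc_score(died, LogisticRegression().fit(X_no_patient, died).predict_proba(X_no_patient)[:, 1])
      print(f"AUC with patient variables: {auc_full:.3f}; without: {auc_reduced:.3f}; "
            f"relative difference: {100 * (auc_full - auc_reduced) / auc_full:.1f}%")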

  16. Alexithymia is related to differences in gray matter volume: a voxel-based morphometry study.

    PubMed

    Ihme, Klas; Dannlowski, Udo; Lichev, Vladimir; Stuhrmann, Anja; Grotegerd, Dominik; Rosenberg, Nicole; Kugel, Harald; Heindel, Walter; Arolt, Volker; Kersting, Anette; Suslow, Thomas

    2013-01-23

    Alexithymia has been characterized as the inability to identify and describe feelings. Functional imaging studies have revealed that alexithymia is linked to reactivity changes in emotion- and face-processing-relevant brain areas. In this respect, the anterior cingulate cortex (ACC), amygdala, anterior insula and fusiform gyrus (FFG) have been consistently reported. However, it remains to be clarified whether alexithymia is also associated with structural differences. Voxel-based morphometry on T1-weighted magnetic resonance images was used to investigate gray matter volume in 17 high alexithymics (HA) and 17 gender-matched low alexithymics (LA), who were selected from a sample of 161 healthy volunteers on the basis of the 20-item Toronto Alexithymia Scale. Data were analyzed as statistical parametric maps for the comparisons LA>HA and HA>LA in a priori determined regions of interest (ROIs), i.e., ACC, amygdala, anterior insula and FFG. Moreover, an exploratory whole brain analysis was accomplished. For the contrast LA>HA, significant clusters were detected in the ACC, left amygdala and left anterior insula. Additionally, the whole brain analysis revealed volume differences in the left middle temporal gyrus. No significant differences were found for the comparison HA>LA. Our findings suggest that high compared to low alexithymics show less gray matter volume in several emotion-relevant brain areas. These structural differences might contribute to the functional alterations found in previous imaging studies in alexithymia. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Multi-sensory integration in a small brain

    NASA Astrophysics Data System (ADS)

    Gepner, Ruben; Wolk, Jason; Gershow, Marc

    Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensori-motor transformations happening in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through their transparent body, we observe the stochastic decisions made by freely-behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features in the inputs, and adaptive filtering processes to track changes in these relevant parameters used by the larva's brain to make decisions. We show how these techniques allow us to probe how statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  18. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.

  19. Weak Lensing from Space I: Instrumentation and Survey Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, Jason; Refregier, Alexandre; Massey, Richard

    A wide field space-based imaging telescope is necessary to fully exploit the technique of observing dark matter via weak gravitational lensing. This first paper in a three part series outlines the survey strategies and relevant instrumental parameters for such a mission. As a concrete example of hardware design, we consider the proposed Supernova/Acceleration Probe (SNAP). Using SNAP engineering models, we quantify the major contributions to this telescope's Point Spread Function (PSF). These PSF contributions are relevant to any similar wide field space telescope. We further show that the PSF of SNAP or a similar telescope will be smaller than current ground-based PSFs, and more isotropic and stable over time than the PSF of the Hubble Space Telescope. We outline survey strategies for two different regimes - a "wide" 300 square degree survey and a "deep" 15 square degree survey that will accomplish various weak lensing goals including statistical studies and dark matter mapping.

  20. Bayes in biological anthropology.

    PubMed

    Konigsberg, Lyle W; Frankenberg, Susan R

    2013-12-01

    In this article, we both contend and illustrate that biological anthropologists, particularly in the Americas, often think like Bayesians but act like frequentists when it comes to analyzing a wide variety of data. In other words, while our research goals and perspectives are rooted in probabilistic thinking and rest on prior knowledge, we often proceed to use statistical hypothesis tests and confidence interval methods unrelated (or tenuously related) to the research questions of interest. We advocate for applying Bayesian analyses to a number of different bioanthropological questions, especially since many of the programming and computational challenges to doing so have been overcome in the past two decades. To facilitate such applications, this article explains Bayesian principles and concepts, and provides concrete examples of Bayesian computer simulations and statistics that address questions relevant to biological anthropology, focusing particularly on bioarchaeology and forensic anthropology. It also simultaneously reviews the use of Bayesian methods and inference within the discipline to date. This article is intended to act as a primer on Bayesian methods and inference in biological anthropology, explaining the relationships of various methods to likelihoods or probabilities and to classical statistical models. Our contention is not that traditional frequentist statistics should be rejected outright, but that there are many situations where biological anthropology is better served by taking a Bayesian approach. To this end, it is hoped that the examples provided in this article will assist researchers in choosing from among the broad array of statistical methods currently available. Copyright © 2013 Wiley Periodicals, Inc.
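
    As a minimal concrete illustration of the Bayesian reasoning advocated above, the sketch below uses a Beta-Binomial model to estimate a trait frequency from a hypothetical skeletal sample with an informative prior; the counts and the prior are invented for illustration and are not taken from the article.

      # Sketch: posterior for a trait frequency under a Beta prior and Binomial data.
      from scipy import stats

      prior_a, prior_b = 2, 8          # prior belief: trait is fairly uncommon (assumed)
      observed, sample_size = 7, 40    # hypothetical bioarchaeological sample

      post_a = prior_a + observed
      post_b = prior_b + sample_size - observed
      posterior = stats.beta(post_a, post_b)

      print(f"posterior mean frequency: {posterior.mean():.3f}")
      lo, hi = posterior.ppf([0.025, 0.975])
      print(f"95% credible interval: {lo:.3f} to {hi:.3f}")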

  1. Do physicians understand cancer screening statistics? A national survey of primary care physicians in the United States.

    PubMed

    Wegwarth, Odette; Schwartz, Lisa M; Woloshin, Steven; Gaissmaier, Wolfgang; Gigerenzer, Gerd

    2012-03-06

    Unlike reduced mortality rates, improved survival rates and increased early detection do not prove that cancer screening tests save lives. Nevertheless, these 2 statistics are often used to promote screening. To learn whether primary care physicians understand which statistics provide evidence about whether screening saves lives. Parallel-group, randomized trial (randomization controlled for order effect only), conducted by Internet survey. (ClinicalTrials.gov registration number: NCT00981019) National sample of U.S. primary care physicians from a research panel maintained by Harris Interactive (79% cooperation rate). 297 physicians who practiced both inpatient and outpatient medicine were surveyed in 2010, and 115 physicians who practiced exclusively outpatient medicine were surveyed in 2011. Physicians received scenarios about the effect of 2 hypothetical screening tests: The effect was described as improved 5-year survival and increased early detection in one scenario and as decreased cancer mortality and increased incidence in the other. Physicians' recommendation of screening and perception of its benefit in the scenarios and general knowledge of screening statistics. Primary care physicians were more enthusiastic about the screening test supported by irrelevant evidence (5-year survival increased from 68% to 99%) than about the test supported by relevant evidence (cancer mortality reduced from 2 to 1.6 in 1000 persons). When presented with irrelevant evidence, 69% of physicians recommended the test, compared with 23% when presented with relevant evidence (P < 0.001). When asked general knowledge questions about screening statistics, many physicians did not distinguish between irrelevant and relevant screening evidence; 76% versus 81%, respectively, stated that each of these statistics proves that screening saves lives (P = 0.39). About one half (47%) of the physicians incorrectly said that finding more cases of cancer in screened as opposed to unscreened populations "proves that screening saves lives." Physicians' recommendations for screening were based on hypothetical scenarios, not actual practice. Most primary care physicians mistakenly interpreted improved survival and increased detection with screening as evidence that screening saves lives. Few correctly recognized that only reduced mortality in a randomized trial constitutes evidence of the benefit of screening. Harding Center for Risk Literacy, Max Planck Institute for Human Development.
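
    A short worked example of the arithmetic separating the two kinds of evidence contrasted above, using the mortality figures from the scenario (2 vs. 1.6 deaths per 1000 persons); the derived quantities (absolute risk reduction, number needed to screen) are standard measures, not values reported in the abstract.

      # Worked arithmetic for the relevant (mortality) statistic in the scenario above.
      deaths_unscreened = 2 / 1000
      deaths_screened = 1.6 / 1000

      absolute_risk_reduction = deaths_unscreened - deaths_screened          # 0.0004
      number_needed_to_screen = 1 / absolute_risk_reduction                  # 2500
      relative_risk_reduction = absolute_risk_reduction / deaths_unscreened  # 20%

      print(f"absolute risk reduction: {absolute_risk_reduction * 1000:.1f} per 1000")
      print(f"number needed to screen: {number_needed_to_screen:.0f}")
      print(f"relative risk reduction: {relative_risk_reduction:.0%}")
      # A jump in 5-year survival (e.g. 68% -> 99%) can occur with no change in these
      # mortality figures, e.g. through lead-time or overdiagnosis bias, which is why
      # survival alone does not prove that screening saves lives.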

  2. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
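
    A minimal sketch of the irreversibility measure mentioned above, the information entropy production rate of a stationary Markov chain, computed here for a small made-up transition matrix rather than a maximum entropy model inferred from spike trains.

      # Sketch: entropy production rate of a stationary Markov chain.
      import numpy as np

      P = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.6, 0.3],
                    [0.3, 0.3, 0.4]])   # row-stochastic transition matrix (hypothetical)

      # Stationary distribution: left eigenvector of P with eigenvalue 1.
      vals, vecs = np.linalg.eig(P.T)
      pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
      pi = pi / pi.sum()

      # Entropy production = sum_ij pi_i P_ij log( pi_i P_ij / (pi_j P_ji) );
      # it is zero if and only if detailed balance holds (reversible chain).
      ep = 0.0
      for i in range(len(pi)):
          for j in range(len(pi)):
              if P[i, j] > 0 and P[j, i] > 0:
                  ep += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
      print(f"entropy production rate: {ep:.4f} nats per step")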

  3. Medicaid Disenrollment and Disparities in Access to Care: Evidence from Tennessee.

    PubMed

    Tarazi, Wafa W; Green, Tiffany L; Sabik, Lindsay M

    2017-06-01

    To assess the effects of Tennessee's 2005 Medicaid disenrollment on access to health care among low-income nonelderly adults. We use data from the 2003-2008 Behavioral Risk Factor Surveillance System. We examined the effects of Medicaid disenrollment on access to care among adults living in Tennessee compared with neighboring states, using difference-in-differences models. Evidence suggests that Medicaid disenrollment resulted in significant decreases in health insurance coverage and increases in cost-related barriers to care for low-income adults living in Tennessee. Statistically significant changes were not observed for having a personal doctor. Medicaid disenrollment is associated with reduced access to care. This finding is relevant for states considering expansions or contractions of Medicaid under the Affordable Care Act. © Health Research and Educational Trust.
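
    A minimal difference-in-differences sketch in the spirit of the analysis described above: regress an access measure on a state indicator, a post-period indicator, and their interaction. The simulated data and the regression specification below are illustrative assumptions, not the study's actual model or the BRFSS variables.

      # Sketch: difference-in-differences via an interaction term (simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n = 8000
      df = pd.DataFrame({
          "tennessee": rng.integers(0, 2, n),   # 1 = Tennessee, 0 = comparison state
          "post2005": rng.integers(0, 2, n),    # 1 = after disenrollment
      })
      # Simulate an outcome (1 = reports a cost-related barrier to care) with a
      # built-in effect for the treated group after 2005.
      logit = -1.0 + 0.1 * df.tennessee + 0.05 * df.post2005 + 0.4 * df.tennessee * df.post2005
      df["cost_barrier"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      model = smf.ols("cost_barrier ~ tennessee * post2005", data=df).fit(cov_type="HC1")
      print(model.params["tennessee:post2005"])  # the difference-in-differences estimate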

  4. Southeast Atlantic Cloud Properties in a Multivariate Statistical Model - How Relevant is Air Mass History for Local Cloud Properties?

    NASA Astrophysics Data System (ADS)

    Fuchs, Julia; Cermak, Jan; Andersen, Hendrik

    2017-04-01

    This study aims at untangling the impacts of external dynamics and local conditions on cloud properties in the Southeast Atlantic (SEA) by combining satellite and reanalysis data using multivariate statistics. The understanding of clouds and their determinants at different scales is important for constraining the Earth's radiative budget, and thus prominent in climate-system research. In this study, SEA stratocumulus cloud properties are observed not only as the result of local environmental conditions but also as affected by external dynamics and spatial origins of air masses entering the study area. In order to assess to what extent cloud properties are impacted by aerosol concentration, air mass history, and meteorology, a multivariate approach is conducted using satellite observations of aerosol and cloud properties (MODIS, SEVIRI), information on aerosol species composition (MACC) and meteorological context (ERA-Interim reanalysis). To account for the often-neglected but important role of air mass origin, information on air mass history based on HYSPLIT modeling is included in the statistical model. This multivariate approach is intended to lead to a better understanding of the physical processes behind observed stratocumulus cloud properties in the SEA.

  5. Temperature in and out of equilibrium: A review of concepts, tools and attempts

    NASA Astrophysics Data System (ADS)

    Puglisi, A.; Sarracino, A.; Vulpiani, A.

    2017-11-01

    We review the general aspects of the concept of temperature in equilibrium and non-equilibrium statistical mechanics. Although temperature is an old and well-established notion, it still presents controversial facets. After a short historical survey of the key role of temperature in thermodynamics and statistical mechanics, we tackle a series of issues which have been recently reconsidered. In particular, we discuss different definitions and their relevance for energy fluctuations. The interest in such a topic has been triggered by the recent observation of negative temperatures in condensed matter experiments. Moreover, the ability to manipulate systems at the micro and nano-scale urges us to understand and clarify some aspects related to the statistical properties of small systems (such as the issue of temperature "fluctuations"). We also discuss the notion of temperature in a dynamical context, within the theory of linear response for Hamiltonian systems at equilibrium and stochastic models with detailed balance, and the generalized fluctuation-response relations, which provide a hint for an extension of the definition of temperature in far-from-equilibrium systems. To conclude, we consider non-Hamiltonian systems, such as granular materials, turbulence and active matter, where a general theoretical framework is still lacking.

  6. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure that characterizes complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points to the relevance of including a priori knowledge of the problem.

  7. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure that characterizes complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information of the quality parameter measurements through Support Vector Regression (SVR) based on Mercer’s kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer’s kernel given by either the Mahalanobis spatial-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points to the relevance of including a priori knowledge of the problem. PMID:29035333
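
    The sketch below illustrates the interpolation machinery described above: Support Vector Regression with a custom spatio-temporal kernel supplied as a precomputed Gram matrix. A simple separable Gaussian space-time kernel stands in for the paper's autocorrelation or Mahalanobis kernel, and the river samples are synthetic.

      # Sketch: SVR with a precomputed spatio-temporal kernel (placeholder kernel, synthetic data).
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(5)
      # Columns: distance along the river (km), time (days); target: dissolved oxygen (mg/l).
      X_train = rng.uniform([0, 0], [20, 90], size=(80, 2))
      y_train = 8 - 0.1 * X_train[:, 0] + 0.01 * X_train[:, 1] + rng.normal(0, 0.2, 80)
      X_query = rng.uniform([0, 0], [20, 90], size=(25, 2))

      def space_time_kernel(A, B, length_km=5.0, length_days=15.0):
          """Separable Gaussian kernel over space and time (placeholder choice)."""
          d_space = (A[:, None, 0] - B[None, :, 0]) ** 2 / length_km ** 2
          d_time = (A[:, None, 1] - B[None, :, 1]) ** 2 / length_days ** 2
          return np.exp(-0.5 * (d_space + d_time))

      svr = SVR(kernel="precomputed", C=10.0, epsilon=0.1)
      svr.fit(space_time_kernel(X_train, X_train), y_train)
      predicted = svr.predict(space_time_kernel(X_query, X_train))
      print(predicted[:5])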

  8. Some new mathematical methods for variational objective analysis

    NASA Technical Reports Server (NTRS)

    Wahba, Grace; Johnson, Donald R.

    1994-01-01

    Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.

  9. Statistics for clinical nursing practice: an introduction.

    PubMed

    Rickard, Claire M

    2008-11-01

    Difficulty in understanding statistics is one of the most frequently reported barriers to nurses applying research results in their practice. Yet the amount of nursing research published each year continues to grow, as does the expectation that nurses will undertake practice based on this evidence. Critical care nurses do not need to be statisticians, but they do need to develop a working knowledge of statistics so they can be informed consumers of research and so practice can evolve and improve. For those undertaking a research project, statistical literacy is required to interact with other researchers and statisticians, so as to best design and undertake the project. This article is the first in a series that guides critical care nurses through statistical terms and concepts relevant to their practice.

  10. Physical Regulation of the Self-Assembly of Tobacco Mosaic Virus Coat Protein

    PubMed Central

    Kegel, Willem K.; van der Schoot, Paul

    2006-01-01

    We present a statistical mechanical model based on the principle of mass action that explains the main features of the in vitro aggregation behavior of the coat protein of tobacco mosaic virus (TMV). By comparing our model to experimentally obtained stability diagrams, titration experiments, and calorimetric data, we pin down three competing factors that regulate the transitions between the different kinds of aggregated state of the coat protein. These are hydrophobic interactions, electrostatic interactions, and the formation of so-called “Caspar” carboxylate pairs. We suggest that these factors could be universal and relevant to a large class of virus coat proteins. PMID:16731551

  11. Antimalarial activity of synthetic 1,2,4-trioxanes and cyclic peroxy ketals, a quantum similarity study

    NASA Astrophysics Data System (ADS)

    Gironés, X.; Gallegos, A.; Carbó-Dorca, R.

    2001-12-01

    In this work, the antimalarial activities of two series of 20 and 7 synthetic 1,2,4-trioxanes and a set of 20 cyclic peroxy ketals are examined for correlations by means of Molecular Quantum Similarity Measures (MQSM). QSAR models, dealing with different biological responses (IC90, IC50 and ED90) of the parasite Plasmodium falciparum, are constructed using MQSM as molecular descriptors and are satisfactorily correlated. The statistical results for the 20 1,2,4-trioxanes are analyzed in depth to elucidate the structural features relevant to the biological activity, revealing the importance of phenyl substitutions.

  12. [Adoptive parents' satisfaction with the adoption experience and with its impact on family life].

    PubMed

    Sánchez-Sandoval, Yolanda

    2011-11-01

    In this study, we discuss the relevance of adoptive families' satisfaction in the assessment of adoption processes. The effects of adoption on a sample group of 272 adoptive families are analyzed. Most families show high levels of satisfaction as to: their decision to adopt, the features of their adopted children and how adoption has affected them as individuals and as a family. Statistical analyses show that these families can have different satisfaction levels depending on certain features of the adoptees, of the adoptive families or of their educational style. Life satisfaction of the adoptees is also related to how their adoptive parents evaluate the adoption.

  13. Through indigenous eyes: Native Americans and the HIV epidemic.

    PubMed

    Weaver, H N

    1999-02-01

    This article examines the phenomenon of HIV within the context of a Native American culture. Native Americans have some risk factors for HIV transmission that differ from those found in other populations. In addition, prevention and intervention activities with this population must consider cultural variables to maximize their effectiveness. Brief anecdotes are used to illustrate various concepts related to HIV and Native Americans and to include a human face along with facts and statistics. The author's unique perspective, coupled with a broad discussion of relevant issues enables non-Native American readers to understand better the phenomenon of HIV as it exists within a Native American context.

  14. Effect of esthetic core shades on the final color of IPS Empress all-ceramic crowns.

    PubMed

    Azer, Shereen S; Ayash, Ghada M; Johnston, William M; Khalil, Moustafa F; Rosenstiel, Stephen F

    2006-12-01

    Clinically relevant assessment of all-ceramic crowns supported by esthetic composite resin foundations has not been evaluated with regard to color reproducibility. This in vitro study quantitatively evaluated the influence of different shades of composite resin foundations and resin cement on the final color of a leucite-reinforced all-ceramic material. A total of 128 disks were fabricated; 64 (20 x 1 mm) were made of all-ceramic material (IPS Empress) and 64 (20 x 4 mm) of 4 different shades of composite resin (Tetric Ceram). The ceramic and composite resin disks were luted using 2 shades (A3 and Transparent) of resin cement (Variolink II). Color was measured using a colorimeter configured with a diffuse illumination/0-degree viewing geometry, and Commission Internationale de l'Eclairage (CIE) L*a*b* values were directly calculated. Descriptive statistical analysis was performed, and color differences (ΔE) for the average L*, a*, and b* color parameters were calculated. Repeated measures analysis of variance (ANOVA) was used to compare mean values and SDs between the different color combinations (alpha=.05). The CIE L*a*b* color coordinate values showed no significant differences for variation in color parameters due to the effect of the different composite resin shades (P=.24) or cement shades (P=.12). The mean color difference (ΔE) value between the groups was 0.8. Within the limitations of this study, the use of different shades for composite resin cores and resin cements presented no statistically significant effect on the final color of IPS Empress all-ceramic material.
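
    The ΔE values reported above are color differences between CIELAB coordinates; the sketch below computes ΔE with the CIE76 formula (the Euclidean distance in L*a*b* space), assuming that formula was used, with two invented specimen colors.

      # Sketch: CIE76 color difference between two (L*, a*, b*) triplets.
      import math

      def delta_e_cie76(lab1, lab2):
          """CIE76 color difference: Euclidean distance in L*a*b* space."""
          return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

      specimen_a = (72.4, 1.8, 18.2)   # hypothetical L*, a*, b* readings
      specimen_b = (72.0, 2.1, 17.6)
      print(f"DeltaE = {delta_e_cie76(specimen_a, specimen_b):.2f}")
      # Differences near or below about 1 are commonly treated as imperceptible,
      # in line with the mean DeltaE of 0.8 reported above.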

  15. Differential imprints of different ENSO flavors in global patterns of seasonal precipitation extremes

    NASA Astrophysics Data System (ADS)

    Wiedermann, Marc; Siegmund, Jonatan F.; Donges, Jonathan F.; Donner, Reik V.

    2017-04-01

    The El Nino Southern Oscillation (ENSO) with its positive (El Nino) and negative (La Nina) phases is known to trigger climatic responses in various parts of the Earth, an effect commonly attributed to teleconnectivity. A series of studies has demonstrated that El Nino periods exhibit a relatively broad variety of spatial patterns, which can be classified into two main flavors termed East Pacific (EP, canonical) and Central Pacific (CP, Modoki) El Nino, and that both subtypes can trigger distinct climatic responses like droughts vs. precipitation increases at the regional level. More recently, a similar discrimination of La Nina periods into two different flavors has been reported, and it is reasonable to assume that these different expressions are equally accompanied by differential responses of regional climate variability in particularly affected regions. In this work, we study in great detail the imprints of both types of El Nino and La Nina periods in extremal seasonal precipitation sums during fall (SON), winter (DJF) and spring (MAM) around the peak time of the corresponding ENSO phase. For this purpose, we employ a recently developed objective classification of El Nino and La Nina periods into their two respective flavors based on global teleconnectivity patterns in daily surface air temperature anomalies as captured by the associated climate network representations (Wiedermann et al., 2016). In order to study the statistical relevance of the timing of different El Nino and La Nina types for that of seasonal precipitation extremes around the globe (using the GPCC data set as a reference), we utilize event coincidence analysis (Donges et al., 2016), a powerful yet conceptually simple and intuitive statistical tool that allows quantifying the degree of simultaneity of distinct events in pairs of time series. Our results provide a comprehensive overview of ENSO-related imprints in regional seasonal precipitation extremes. We demonstrate that key interlinkages between ENSO phases and droughts as well as extremely wet seasons depend crucially on the specific type of El Nino and La Nina event, highlighting the importance of correctly attributing the corresponding flavors when aiming to anticipate the likelihood of precipitation extremes. Straightforward upcoming extensions of the present work will address the imprints of ENSO types and flavors on extremes at different time scales that can be found in other relevant climate variables such as air temperature or more complex drought indices, as well as an assessment of the representation of the empirically found statistical relationships in contemporary climate models operated in hindcast as well as RCP scenario modes. M. Wiedermann, A. Radebach, J.F. Donges, J. Kurths, R.V. Donner: A climate network-based index to discriminate different types of El Nino and La Nina. Geophysical Research Letters, 43, 069119 (2016). J.F. Donges, C.-F. Schleussner, J.F. Siegmund, R.V. Donner: Event coincidence analysis for quantifying statistical interrelationships between event time series - On the role of extreme flood events as possible drivers of epidemics. European Physical Journal - Special Topics, 225(3), 471-487 (2016).
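
    A minimal sketch of the event coincidence analysis referred to above: the fraction of events in one series that fall within a tolerance window of events in another series. The event years below are invented, and significance testing against a null model is only indicated in a comment.

      # Sketch: precursor-style coincidence rate between two event time series.
      import numpy as np

      def coincidence_rate(events_a, events_b, delta_t=0):
          """Fraction of events in A with at least one event in B within +/- delta_t."""
          events_a = np.asarray(events_a)
          events_b = np.asarray(events_b)
          hits = sum(np.any(np.abs(events_b - t) <= delta_t) for t in events_a)
          return hits / len(events_a)

      cp_el_nino_years = [1991, 1994, 2002, 2004, 2009]        # hypothetical
      dry_season_years = [1991, 1997, 2002, 2005, 2009, 2015]  # hypothetical
      print(f"coincidence rate: {coincidence_rate(cp_el_nino_years, dry_season_years):.2f}")
      # Significance is then assessed by comparing this rate with its distribution
      # under a null model of randomly placed events (e.g. by Monte Carlo shuffling).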

  16. Identification and characterization of earthquake clusters: a comparative analysis for selected sequences in Italy

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Gentili, Stefania

    2017-04-01

    Identification and statistical characterization of seismic clusters may provide useful insights about the features of seismic energy release and their relation to physical properties of the crust within a given region. Moreover, a number of studies based on spatio-temporal analysis of main-shocks occurrence require preliminary declustering of the earthquake catalogs. Since various methods, relying on different physical/statistical assumptions, may lead to diverse classifications of earthquakes into main events and related events, we aim to investigate the classification differences among different declustering techniques. Accordingly, a formal selection and comparative analysis of earthquake clusters is carried out for the most relevant earthquakes in North-Eastern Italy, as reported in the local OGS-CRS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. The comparison is then extended to selected earthquake sequences associated with a different seismotectonic setting, namely to events that occurred in the region struck by the recent Central Italy destructive earthquakes, making use of INGV data. Various techniques, ranging from classical space-time windows methods to ad hoc manual identification of aftershocks, are applied for detection of earthquake clusters. In particular, a statistical method based on nearest-neighbor distances of events in space-time-energy domain, is considered. Results from clusters identification by the nearest-neighbor method turn out quite robust with respect to the time span of the input catalogue, as well as to minimum magnitude cutoff. The identified clusters for the largest events reported in North-Eastern Italy since 1977 are well consistent with those reported in earlier studies, which were aimed at detailed manual aftershocks identification. The study shows that the data-driven approach, based on the nearest-neighbor distances, can be satisfactorily applied to decompose the seismic catalog into background seismicity and individual sequences of earthquake clusters, also in areas characterized by moderate seismic activity, where the standard declustering techniques may turn out rather gross approximations. With these results acquired, the main statistical features of seismic clusters are explored, including complex interdependence of related events, with the aim to characterize the space-time patterns of earthquakes occurrence in North-Eastern Italy and capture their basic differences with Central Italy sequences.

  17. Unstimulated cortisol secretory activity in everyday life and its relationship with fatigue and chronic fatigue syndrome: a systematic review and subset meta-analysis.

    PubMed

    Powell, Daniel J H; Liossi, Christina; Moss-Morris, Rona; Schlotz, Wolff

    2013-11-01

    The hypothalamic-pituitary-adrenal (HPA) axis is a psychoneuroendocrine regulator of the stress response and immune system, and dysfunctions have been associated with outcomes in several physical health conditions. Its end product, cortisol, is relevant to fatigue due to its role in energy metabolism. The systematic review examined the relationship between different markers of unstimulated salivary cortisol activity in everyday life in chronic fatigue syndrome (CFS) and fatigue assessed in other clinical and general populations. Search terms for the review related to salivary cortisol assessments, everyday life contexts, and fatigue. All eligible studies (n=19) were reviewed narratively in terms of associations between fatigue and assessed cortisol markers, including the cortisol awakening response (CAR), circadian profile (CP) output, and diurnal cortisol slope (DCS). Subset meta-analyses were conducted of case-control CFS studies examining group differences in three cortisol outcomes: CAR output; CAR increase; and CP output. Meta-analyses revealed an attenuation of the CAR increase within CFS compared to controls (d=-.34) but no statistically significant differences between groups for other markers. In the narrative review, total cortisol output (CAR or CP) was rarely associated with fatigue in any population; CAR increase and DCS were most relevant. Outcomes reflecting within-day change in cortisol levels (CAR increase; DCS) may be the most relevant to fatigue experience, and future research in this area should report at least one such marker. Results should be considered with caution due to heterogeneity in one meta-analysis and the small number of studies. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Publications in anesthesia journals: quality and clinical relevance.

    PubMed

    Lauritsen, Jakob; Moller, Ann M

    2004-11-01

    Clinicians performing evidence-based anesthesia rely on anesthesia journals for clinically relevant information. The objective of this study was to analyze the proportion of clinically relevant articles in five high impact anesthesia journals. We evaluated all articles published in Anesthesiology, Anesthesia & Analgesia, British Journal of Anesthesia, Anesthesia, and Acta Anaesthesiologica Scandinavica from January to June, 2000. Articles were assessed and classified according to type, outcome, and design; 1379 articles consisting of 5468 pages were evaluated and categorized. The most common types of article were animal and laboratory research (31.2%) and randomized clinical trial (20.4%). A clinically relevant article was defined as an article that used a statistically valid method and had a clinically relevant end-point. Altogether 18.6% of the pages had as their subject matter clinically relevant trials. We compared the Journal Impact Factor (a measure of the number of citations per article in a journal) and the proportion of clinically relevant pages and found that they were inversely proportional to each other.

  19. Pharmacokinetics of ketamine and norketamine enantiomers after racemic or S-ketamine IV bolus administration in dogs during sevoflurane anaesthesia.

    PubMed

    Romagnoli, Noemi; Bektas, Rima N; Kutter, Annette P; Barbarossa, Andrea; Roncada, Paola; Hartnack, Sonja; Bettschart-Wolfensberger, Regula

    2017-06-01

    The aims of this study were to measure plasma levels of R- and S-ketamine and their major metabolites R- and S-norketamine following single intravenous bolus administration of racemic or S-ketamine in sevoflurane-anaesthetised dogs and to calculate the relevant pharmacokinetic profiles. Six adult healthy beagle dogs were used in the study. An intravenous bolus of 4 mg/kg racemic ketamine (RS-KET) or 2 mg/kg S-ketamine (S-KET) was administered, with a three-week washout period between treatments. Venous blood samples were collected at fixed times until 900 min, and R- and S-ketamine as well as R- and S-norketamine plasma levels were determined by liquid chromatography coupled with tandem mass spectrometry. Cardiovascular parameters were recorded during the anaesthesia until 240 min. All dogs recovered well from anaesthesia. No statistical differences between groups were detected in any cardiovascular parameter. The pharmacokinetics of S-ketamine did not differ when injected intravenously alone or as part of the racemic mixture in dogs anaesthetised with sevoflurane. Following racemic ketamine, the area under the curve of R-norketamine was statistically significantly higher than that of S-norketamine. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Statistical classification of road pavements using near field vehicle rolling noise measurements.

    PubMed

    Paulo, Joel Preto; Coelho, J L Bento; Figueiredo, Mário A T

    2010-10-01

    Low noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement which implements the Close-Proximity method. A set of features, characterizing the properties of the road pavement, was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
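
    As a rough illustration of the pipeline described above (feature extraction, automatic feature selection, classification), the sketch below uses a generic univariate filter and an SVM on synthetic stand-in data; the specific filter and classifier are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 120 rolling-noise segments x 40 spectral features, 3 pavement types
X = rng.normal(size=(120, 40))
y = rng.integers(0, 3, size=120)

# Keep only the most discriminative features, then classify
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # 5-fold cross-validated accuracy
```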

  1. Predicting risk for portal vein thrombosis in acute pancreatitis patients: A comparison of radical basis function artificial neural network and logistic regression models.

    PubMed

    Fei, Yang; Hu, Jian; Gao, Kun; Tu, Jianfeng; Li, Wei-Qin; Wang, Wei

    2017-06-01

    To construct a radical basis function (RBF) artificial neural networks (ANNs) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis. The analysis included 353 patients with AP who had been admitted between January 2011 and December 2015. An RBF ANNs model and a logistic regression model were each constructed based on eleven factors relevant to AP. Statistical indexes were used to evaluate the predictive value of the two models. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy of the RBF ANNs model for PVT were 73.3%, 91.4%, 68.8%, 93.0% and 87.7%, respectively. There were significant differences between the RBF ANNs and logistic regression models in these parameters (P<0.05). In addition, a comparison of the area under receiver operating characteristic curves of the two models showed a statistically significant difference (P<0.05). The RBF ANNs model is more likely to predict the occurrence of PVT induced by AP than the logistic regression model. D-dimer, AMY, Hct and PT were important predictive factors for AP-induced PVT. Copyright © 2017 Elsevier Inc. All rights reserved.
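
    The performance figures reported above follow from the standard 2x2 confusion-matrix definitions. A minimal sketch, with hypothetical counts, is given below.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a PVT classifier evaluated on a held-out set
print(diagnostic_metrics(tp=33, fp=15, tn=160, fn=12))
```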

  2. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    NASA Astrophysics Data System (ADS)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  3. Hierarchy and Psychometric Properties of ADHD Symptoms in Spanish Children: An Application of the Graded Response Model

    PubMed Central

    Arias, Victor B.; Nuñez, Daniel E.; Martínez-Molina, Agustín; Ponce, Fernando P.; Arias, Benito

    2016-01-01

    The Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria assume that the 18 symptoms carry the same weight in an Attention Deficit with Hyperactivity Disorder (ADHD) diagnosis and bear the same discriminatory capacity. However, it is reasonable to think that symptoms may differ in terms of severity and even in the reliability with which they represent the disorder. To test this hypothesis, the aim of this study was to calibrate in a sample of Spanish children (age 4–7; n = 784) a scale for assessing the symptoms of ADHD proposed by the Diagnostic and Statistical Manual of Mental Disorders, IV-TR within the framework of Item Response Theory. Samejima’s Graded Response Model was used as a method for estimating the item difficulty and discrimination parameters. The results showed that the ADHD subscales (Attention Deficit and Hyperactivity / Impulsivity) had good psychometric properties and also a good fit to the model. However, relevant differences between symptoms were observed at the level of severity, informativeness and reliability for the assessment of ADHD. This finding suggests that it would be useful to identify the symptoms that are more important than others with regard to diagnosing ADHD. PMID:27736911

  4. Hierarchy and Psychometric Properties of ADHD Symptoms in Spanish Children: An Application of the Graded Response Model.

    PubMed

    Arias, Victor B; Nuñez, Daniel E; Martínez-Molina, Agustín; Ponce, Fernando P; Arias, Benito

    2016-01-01

    The Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria assume that the 18 symptoms carry the same weight in an Attention Deficit with Hyperactivity Disorder (ADHD) diagnosis and bear the same discriminatory capacity. However, it is reasonable to think that symptoms may differ in terms of severity and even in the reliability with which they represent the disorder. To test this hypothesis, the aim of this study was to calibrate in a sample of Spanish children (age 4-7; n = 784) a scale for assessing the symptoms of ADHD proposed by the Diagnostic and Statistical Manual of Mental Disorders, IV-TR within the framework of Item Response Theory. Samejima's Graded Response Model was used as a method for estimating the item difficulty and discrimination parameters. The results showed that the ADHD subscales (Attention Deficit and Hyperactivity / Impulsivity) had good psychometric properties and also a good fit to the model. However, relevant differences between symptoms were observed at the level of severity, informativeness and reliability for the assessment of ADHD. This finding suggests that it would be useful to identify the symptoms that are more important than others with regard to diagnosing ADHD.
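
    For readers unfamiliar with Samejima's model, the sketch below computes category response probabilities from a discrimination parameter and ordered thresholds; the item parameters are hypothetical, not estimates from this study.

```python
import numpy as np

def grm_category_probs(theta, a, b_thresholds):
    """Samejima's Graded Response Model: probability of each response category.

    theta: latent trait level; a: discrimination; b_thresholds: ordered
    threshold parameters (one per boundary between adjacent categories).
    """
    b = np.asarray(b_thresholds, float)
    # Cumulative probabilities P(X >= k), padded with 1 (lowest) and 0 (above highest)
    p_star = np.concatenate(([1.0], 1.0 / (1.0 + np.exp(-a * (theta - b))), [0.0]))
    return -np.diff(p_star)  # category probabilities, summing to 1

# Hypothetical 4-point symptom rating with thresholds at -0.5, 0.6, 1.8
print(grm_category_probs(theta=1.0, a=1.7, b_thresholds=[-0.5, 0.6, 1.8]).round(3))
```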

  5. Applying Probabilistic Decision Models to Clinical Trial Design

    PubMed Central

    Smith, Wade P; Phillips, Mark H

    2018-01-01

    Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
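
    A minimal sketch of a Markov cohort calculation of quality-adjusted life expectancy of the kind described above is given below; the states, transition probabilities, utilities, and discount rate are all hypothetical.

```python
import numpy as np

# Hypothetical 3-state Markov cohort model: well, progressed, dead
# Annual transition probabilities (rows sum to 1) and per-state quality weights
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
utility = np.array([0.85, 0.55, 0.0])  # quality weight for one year spent in each state

def qale(P, utility, horizon_years=30, discount=0.03):
    """Quality-adjusted life expectancy for a cohort starting in the first state."""
    state = np.array([1.0, 0.0, 0.0])  # everyone starts in the 'well' state
    total = 0.0
    for year in range(horizon_years):
        total += (state @ utility) / (1 + discount) ** year
        state = state @ P              # advance the cohort one annual cycle
    return total

print(f"QALE ~ {qale(P, utility):.2f} QALYs")
```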

  6. Random Positions of Dendritic Spines in Human Cerebral Cortex

    PubMed Central

    Morales, Juan; Benavides-Piccione, Ruth; Dar, Mor; Fernaud, Isabel; Rodríguez, Angel; Anton-Sanchez, Laura; Bielza, Concha; Larrañaga, Pedro; DeFelipe, Javier

    2014-01-01

    Dendritic spines establish most excitatory synapses in the brain and are located in Purkinje cell's dendrites along helical paths, perhaps maximizing the probability to contact different axons. To test whether spine helixes also occur in neocortex, we reconstructed >500 dendritic segments from adult human cortex obtained from autopsies. With Fourier analysis and spatial statistics, we analyzed spine position along apical and basal dendrites of layer 3 pyramidal neurons from frontal, temporal, and cingulate cortex. Although we occasionally detected helical positioning, for the great majority of dendrites we could not reject the null hypothesis of spatial randomness in spine locations, either in apical or basal dendrites, in neurons of different cortical areas or among spines of different volumes and lengths. We conclude that in adult human neocortex spine positions are mostly random. We discuss the relevance of these results for spine formation and plasticity and their functional impact for cortical circuits. PMID:25057209

  7. Fear and loathing: undergraduate nursing students' experiences of a mandatory course in applied statistics.

    PubMed

    Hagen, Brad; Awosoga, Oluwagbohunmi A; Kellett, Peter; Damgaard, Marie

    2013-04-23

    This article describes the results of a qualitative research study evaluating nursing students' experiences of a mandatory course in applied statistics, and the perceived effectiveness of teaching methods implemented during the course. Fifteen nursing students in the third year of a four-year baccalaureate program in nursing participated in focus groups before and after taking the mandatory course in statistics. The interviews were transcribed and analyzed using content analysis to reveal four major themes: (i) "one of those courses you throw out?," (ii) "numbers and terrifying equations," (iii) "first aid for statistics casualties," and (iv) "re-thinking curriculum." Overall, the data revealed that although nursing students initially enter statistics courses with considerable skepticism, fear, and anxiety, there are a number of concrete actions statistics instructors can take to reduce student fear and increase the perceived relevance of courses in statistics.

  8. Statistics in Japanese universities.

    PubMed Central

    Ito, P K

    1979-01-01

    The teaching of statistics in the U.S. and Japanese universities is briefly reviewed. It is found that H. Hotelling's articles and subsequent relevant publications on the teaching of statistics have contributed to a considerable extent to the establishment of excellent departments of statistics in U.S. universities and colleges. Today the U.S. may be proud of many well-staffed and well-organized departments of theoretical and applied statistics with excellent undergraduate and graduate programs. On the contrary, no Japanese universities have an independent department of statistics at present, and the teaching of statistics has been spread among a heterogeneous group of departments of application. This was mainly due to the Japanese government regulation concerning the establishment of a university. However, it has recently been revised so that an independent department of statistics may be started in a Japanese university with undergraduate and graduate programs. It is hoped that discussions will be started among those concerned on the question of organization of the teaching of statistics in Japanese universities as soon as possible. PMID:396154

  9. Organic Laboratory Experiments: Micro vs. Conventional.

    ERIC Educational Resources Information Center

    Chloupek-McGough, Marge

    1989-01-01

    Presents relevant statistics accumulated in a fall organic laboratory course. Discusses laboratory equipment setup to lower the amount of waste. Notes decreased solid wastes were produced compared to the previous semester. (MVL)

  10. Psychometric properties of the Portuguese version of place attachment scale for youth in residential care.

    PubMed

    Magalhães, Eunice; Calheiros, María M

    2015-01-01

    Despite significant scientific advances in the place attachment literature, no instruments have been developed or adapted specifically for residential care. A total of 410 adolescents (11-18 years old) participated in this study. The place attachment scale evaluates five dimensions: Place identity, Place dependence, Institutional bonding, Caregivers bonding and Friend bonding. Data analysis included descriptive statistics, content validity, construct validity (Confirmatory Factor Analysis), concurrent validity with correlations with satisfaction with life and with the institution, and reliability evidence. The relationship with individual characteristics and placement length was also examined. Content validity analysis revealed that more than half of the panellists perceived all the items as relevant to assess the construct in residential care. The structure with five dimensions revealed good fit statistics, and concurrent validity evidence was found, with significant correlations with satisfaction with life and with the institution. Acceptable values of internal consistency and specific gender differences were found. The preliminary psychometric properties of this scale suggest its potential to be used with youth in care.

  11. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2018-07-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  12. The role of control groups in mutagenicity studies: matching biological and statistical relevance.

    PubMed

    Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane

    2003-06-01

    The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.
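
    A sketch of the direct safety formulation described above, where hazard is the null hypothesis and the safety threshold is a fraction of the negative-control mean, might look as follows; the threshold fraction and counts are hypothetical, and the variance approximation is one reasonable choice rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def direct_safety_test(treated, control, f=0.2):
    """One-sided test of H0 (hazard): mu_T >= (1+f)*mu_C vs H1 (safety): mu_T < (1+f)*mu_C."""
    t_arr, c_arr = np.asarray(treated, float), np.asarray(control, float)
    var_t = t_arr.var(ddof=1) / len(t_arr)
    var_c = (1 + f) ** 2 * c_arr.var(ddof=1) / len(c_arr)
    diff = t_arr.mean() - (1 + f) * c_arr.mean()
    se = np.sqrt(var_t + var_c)
    # Satterthwaite approximation for the degrees of freedom
    df = (var_t + var_c) ** 2 / (var_t ** 2 / (len(t_arr) - 1) + var_c ** 2 / (len(c_arr) - 1))
    t_stat = diff / se
    p_value = stats.t.cdf(t_stat, df)  # a small p-value supports the safety alternative
    return t_stat, p_value

# Hypothetical mutant counts per plate, with the threshold set to 50% above the control mean
print(direct_safety_test(treated=[21, 19, 24, 22], control=[20, 18, 23, 21], f=0.5))
```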

  13. [The Revision and 5th Edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5): Consequences for the Diagnostic Work with Children and Adolescents].

    PubMed

    Zulauf Logoz, Marina

    2014-01-01

    The present paper describes and discusses the major revisions in DSM-5 for children and adolescents. A major modification is that the separate chapter for disorders first diagnosed in childhood and adolescence was abandoned in favour of the integration of these clinical pictures into the relevant disorder-specific chapters. Several new diagnoses and diagnostic groups were introduced: "Disruptive mood dysregulation disorder" is a new diagnosis; the different diagnoses for autism were brought together into one, and a new diagnostic group for obsessive-compulsive disorders has been established. The developmental approach of DSM-5 and the integration of dimensional assessment tools are to be welcomed. Practice will show whether the critics who fear possible increases in prevalence or those who approve of the changes will turn out to be right.

  14. Initial Steps toward Validating and Measuring the Quality of Computerized Provider Documentation

    PubMed Central

    Hammond, Kenric W.; Efthimiadis, Efthimis N.; Weir, Charlene R.; Embi, Peter J.; Thielke, Stephen M.; Laundry, Ryan M.; Hedeen, Ashley

    2010-01-01

    Background: Concerns exist about the quality of electronic health care documentation. Prior studies have focused on physicians. This investigation studied document quality perceptions of practitioners (including physicians), nurses and administrative staff. Methods: An instrument developed from staff interviews and literature sources was administered to 110 practitioners, nurses and administrative staff. Short, long and original versions of records were rated. Results: Length transformation did not affect quality ratings. On several scales practitioners rated notes less favorably than administrators or nurses. The original source document was associated with the quality rating, as was tf·idf, a relevance statistic computed from document text. Tf·idf was strongly associated with practitioner quality ratings. Conclusion: Document quality estimates were not sensitive to modifying redundancy in documents. Some perceptions of quality differ by role. Intrinsic document properties are associated with staff judgments of document quality. For practitioners, the tf·idf statistic was strongly associated with the quality dimensions evaluated. PMID:21346983
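
    For reference, tf·idf weights a term by its frequency within a document and by how rare the term is across documents. A minimal sketch (hypothetical note snippets, plain term frequency times log inverse document frequency, no smoothing) is shown below.

```python
import math
from collections import Counter

def tfidf(documents):
    """Term frequency-inverse document frequency weights (one dict per document)."""
    n_docs = len(documents)
    # Number of documents in which each term appears
    doc_freq = Counter(term for doc in documents for term in set(doc.split()))
    weights = []
    for doc in documents:
        counts = Counter(doc.split())
        total = sum(counts.values())
        weights.append({term: (n / total) * math.log(n_docs / doc_freq[term])
                        for term, n in counts.items()})
    return weights

# Hypothetical progress-note snippets: terms unique to one note receive the highest weights
notes = ["patient stable pain controlled", "pain controlled medication reviewed", "patient stable"]
for w in tfidf(notes):
    print({t: round(v, 3) for t, v in w.items()})
```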

  15. Stochastic Spatial Models in Ecology: A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Pigolotti, Simone; Cencini, Massimo; Molina, Daniel; Muñoz, Miguel A.

    2017-11-01

    Ecosystems display a complex spatial organization. Ecologists have long tried to characterize them by looking at how different measures of biodiversity change across spatial scales. Ecological neutral theory has provided simple predictions accounting for general empirical patterns in communities of competing species. However, while neutral theory in well-mixed ecosystems is mathematically well understood, spatial models still present several open problems, limiting the quantitative understanding of spatial biodiversity. In this review, we discuss the state of the art in spatial neutral theory. We emphasize the connection between spatial ecological models and the physics of non-equilibrium phase transitions and how concepts developed in statistical physics translate in population dynamics, and vice versa. We focus on non-trivial scaling laws arising at the critical dimension D = 2 of spatial neutral models, and their relevance for biological populations inhabiting two-dimensional environments. We conclude by discussing models incorporating non-neutral effects in the form of spatial and temporal disorder, and analyze how their predictions deviate from those of purely neutral theories.

  16. Statistical studies of Pc 3-5 pulsations and their relevance for possible source mechanisms of ULF waves

    NASA Technical Reports Server (NTRS)

    Anderson, Brian J.

    1993-01-01

    A number of statistical studies using spacecraft data have been made of ULF waves in the magnetosphere. These studies provide an overview of ULF pulsation activity for r = 5-15 R(E) and allow an assessment of likely source mechanisms. In this review pulsations are categorized into five general types: compressional Pc 5, poloidal Pc 4, toroidal harmonics, toroidal Pc 5 (fundamental mode), and incoherent noise. The occurrence distributions and/or distributions of wave power of the different types suggest that compressional Pc 5 and poloidal Pc 4 derive their energy locally, most likely from energetic protons. The toroidal pulsations, both harmonic and fundamental mode, appear to be driven by an energy source outside the magnetopause - directly upstream in the sheath and solar wind for harmonics and the flanks for fundamentals. Incoherent pulsations are a prominent pulsation type but from their occurrence distribution alone it is unclear what their dominant energy source may be.

  17. Interactive classification and content-based retrieval of tissue images

    NASA Astrophysics Data System (ADS)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.

  18. Statistical modeling of urban air temperature distributions under different synoptic conditions

    NASA Astrophysics Data System (ADS)

    Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Hald, Cornelius; Hartz, Uwe; Jacobeit, Jucundus; Richter, Katja; Schneider, Alexandra; Wolf, Kathrin

    2015-04-01

    Within urban areas air temperature may vary distinctly between different locations. These intra-urban air temperature variations partly reach magnitudes that are relevant with respect to human thermal comfort. Therefore, and also taking into account potential interrelations with other health-related environmental factors (e.g. air quality), it is important to estimate spatial patterns of intra-urban air temperature distributions that may be incorporated into urban planning processes. In this contribution we present an approach to estimate spatial temperature distributions in the urban area of Augsburg (Germany) by means of statistical modeling. At 36 locations in the urban area of Augsburg air temperatures have been measured with high temporal resolution (4 min) since December 2012. These 36 locations represent different typical urban land use characteristics in terms of varying percentage coverages of different land cover categories (e.g. impervious, built-up, vegetated). Percentage coverages of these land cover categories have been extracted from different sources (Open Street Map, European Urban Atlas, Urban Morphological Zones) for regular grids of varying size (50, 100, 200 meter horizontal resolution) for the urban area of Augsburg. It is well known from numerous studies that land use characteristics have a distinct influence on air temperature, as well as on other climatic variables, at a certain location. Therefore air temperatures at the 36 locations are modeled utilizing land use characteristics (percentage coverages of land cover categories) as predictor variables in Stepwise Multiple Regression models and in Random Forest based model approaches. After model evaluation via cross-validation, appropriate statistical models are applied to gridded land use data to derive spatial urban air temperature distributions. Varying models are tested and applied for different seasons and times of the day and also for different synoptic conditions (e.g. clear and calm situations, cloudy and windy situations). Based on hourly air temperature data from our measurements in the urban area of Augsburg, distinct temperature differences between locations with different urban land use characteristics are revealed. Under clear and calm weather conditions differences between mean hourly air temperatures reach values around 8°C, whereas during cloudy and windy weather maximum differences in mean hourly air temperatures do not exceed 5°C. Differences are usually slightly more pronounced in summer than in winter. First results from the application of statistical modeling approaches reveal promising skill of the models in terms of explained variances reaching up to 60% in leave-one-out cross-validation experiments. The contribution depicts the methodology of our approach and presents and discusses first results.
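
    A rough sketch of the modeling step described above, regressing station air temperature on land-cover fractions with a Random Forest and leave-one-out cross-validation, is given below; the station data are synthetic stand-ins, and mean absolute error is used in place of the explained-variance criterion mentioned in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# Hypothetical stand-ins: 36 stations x 3 land-cover fractions (impervious, built-up, vegetated)
X = rng.uniform(0, 1, size=(36, 3))
# Hypothetical mean hourly air temperature, warmer with more impervious and less vegetated cover
y = 15 + 5 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 0.5, size=36)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_absolute_error")
print(f"leave-one-out MAE: {-scores.mean():.2f} °C")
```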

  19. The Statistical Segment Length of DNA: Opportunities for Biomechanical Modeling in Polymer Physics and Next-Generation Genomics.

    PubMed

    Dorfman, Kevin D

    2018-02-01

    The development of bright bisintercalating dyes for deoxyribonucleic acid (DNA) in the 1990s, most notably YOYO-1, revolutionized the field of polymer physics in the ensuing years. These dyes, in conjunction with modern molecular biology techniques, permit the facile observation of polymer dynamics via fluorescence microscopy and thus direct tests of different theories of polymer dynamics. At the same time, they have played a key role in advancing an emerging next-generation method known as genome mapping in nanochannels. The effect of intercalation on the bending energy of DNA as embodied by a change in its statistical segment length (or, alternatively, its persistence length) has been the subject of significant controversy. The precise value of the statistical segment length is critical for the proper interpretation of polymer physics experiments and controls the phenomena underlying the aforementioned genomics technology. In this perspective, we briefly review the model of DNA as a wormlike chain and a trio of methods (light scattering, optical or magnetic tweezers, and atomic force microscopy (AFM)) that have been used to determine the statistical segment length of DNA. We then outline the disagreement in the literature over the role of bisintercalation on the bending energy of DNA, and how a multiscale biomechanical approach could provide an important model for this scientifically and technologically relevant problem.
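
    For orientation, the statistical segment (Kuhn) length of a wormlike chain is related to the persistence length by the standard textbook expressions below; these are general results, not values or derivations from the cited perspective.

```latex
% Standard wormlike-chain relations (textbook results):
\begin{align}
  b &= 2\,l_p, \\
  \langle R^2 \rangle &= 2\,l_p L\left[1 - \frac{l_p}{L}\left(1 - e^{-L/l_p}\right)\right],
\end{align}
% where b is the statistical segment (Kuhn) length, l_p the persistence length, and L the
% contour length; for L >> l_p this reduces to <R^2> = b L, so any intercalation-induced
% change in l_p rescales the coil dimensions probed by scattering or stretching experiments.
```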

  20. The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education

    NASA Astrophysics Data System (ADS)

    Taber, Keith S.

    2017-06-01

    Cronbach's alpha is a statistic commonly quoted by authors to demonstrate that tests and scales that have been constructed or adopted for research projects are fit for purpose. Cronbach's alpha is regularly adopted in studies in science education: it was referred to in 69 different papers published in 4 leading science education journals in a single year (2015)—usually as a measure of reliability. This article explores how this statistic is used in reporting science education research and what it represents. Authors often cite alpha values with little commentary to explain why they feel this statistic is relevant and seldom interpret the result for readers beyond citing an arbitrary threshold for an acceptable value. Those authors who do offer readers qualitative descriptors interpreting alpha values adopt a diverse and seemingly arbitrary terminology. More seriously, illustrative examples from the science education literature demonstrate that alpha may be acceptable even when there are recognised problems with the scales concerned. Alpha is also sometimes inappropriately used to claim an instrument is unidimensional. It is argued that a high value of alpha offers limited evidence of the reliability of a research instrument, and that indeed a very high value may actually be undesirable when developing a test of scientific knowledge or understanding. Guidance is offered to authors reporting, and readers evaluating, studies that present Cronbach's alpha statistic as evidence of instrument quality.
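
    For reference, Cronbach's alpha is computed from the item variances and the variance of the total score. A minimal sketch with hypothetical item scores is shown below; note that a high alpha by itself says nothing about unidimensionality, as the article stresses.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(item_scores, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()      # sum of the individual item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of the total (summed) score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item instrument scored 0-3 for six respondents
scores = [[3, 2, 3, 2, 3],
          [1, 1, 2, 1, 1],
          [2, 2, 2, 3, 2],
          [0, 1, 0, 1, 1],
          [3, 3, 2, 3, 3],
          [2, 1, 2, 2, 1]]
print(round(cronbach_alpha(scores), 2))
```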

  1. Body Mass Index Class Is Independently Associated With Health-Related Quality of Life After Primary Total Hip Arthroplasty: An Institutional Registry-Based Study.

    PubMed

    McLawhorn, Alexander S; Steinhaus, Michael E; Southren, Daniel L; Lee, Yuo-Yu; Dodwell, Emily R; Figgie, Mark P

    2017-01-01

    The purpose of this study was to compare the health-related quality of life (HRQoL) of patients across World Health Organization (WHO) body mass index (BMI) classes before and after total hip arthroplasty (THA). Patients with end-stage hip osteoarthritis who received elective primary unilateral THA were identified through an institutional registry and categorized based on the World Health Organization BMI classification. Age, sex, laterality, year of surgery, and Charlson-Deyo comorbidity index were recorded. The primary outcome was the EQ-5D-3L index and visual analog scale (EQ-VAS) scores at 2 years postoperatively. Inferential statistics and regression analyses were performed to determine associations between BMI classes and HRQoL. EQ-5D-3L scores at baseline and at 2 years were statistically different across BMI classes, with higher EQ-VAS and index scores in patients with lower BMI. There was no difference observed for the 2-year change in EQ-VAS scores, but there was a statistically greater increase in index scores for more obese patients. In the regression analyses, there were statistically significant negative effect estimates for EQ-VAS and index scores associated with increasing BMI class. BMI class is independently associated with lower HRQoL scores 2 years after primary THA. While absolute scores in obese patients were lower than in nonobese patients, obese patients enjoyed more positive changes in EQ-5D index scores after THA. These results may provide the most detailed information on how BMI influences HRQoL before and after THA, and they are relevant to future economic decision analyses on the topic. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    PubMed

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

    The ideal toxicity biomarker combines the properties of prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and mechanistic relationships to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.

  3. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  4. Methods for specifying spatial boundaries of cities in the world: The impacts of delineation methods on city sustainability indices.

    PubMed

    Uchiyama, Yuta; Mori, Koichiro

    2017-08-15

    The purpose of this paper is to analyze how different definitions and methods for delineating the spatial boundaries of cities have an impact on the values of city sustainability indicators. It is necessary to distinguish the inside of cities from the outside when calculating the values of sustainability indicators that assess the impacts of human activities within cities on areas beyond their boundaries. For this purpose, spatial boundaries of cities should be practically detected on the basis of a relevant definition of a city. Although no definition of a city is commonly shared among academic fields, three practical methods for identifying urban areas are available in remote sensing science. Those practical methods are based on population density, landcover, and night-time lights. These methods are correlated, but non-negligible differences exist in their determination of urban extents and urban population. Furthermore, critical and statistically significant differences in some urban environmental sustainability indicators result from the three different urban detection methods. For example, the average values of CO2 emissions per capita and PM10 concentration in cities with more than 1 million residents are significantly different among the definitions. When analyzing city sustainability indicators and disseminating the implication of the results, the values based on the different definitions should be simultaneously investigated. It is necessary to carefully choose a relevant definition to analyze sustainability indicators for policy making. Otherwise, ineffective and inefficient policies will be developed. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Rapid label-free identification of Klebsiella pneumoniae antibiotic resistant strains by the drop-coating deposition surface-enhanced Raman scattering method

    NASA Astrophysics Data System (ADS)

    Cheong, Youjin; Kim, Young Jin; Kang, Heeyoon; Choi, Samjin; Lee, Hee Joo

    2017-08-01

    Although many methodologies have been developed to identify unknown bacteria, bacterial identification in clinical microbiology remains a complex and time-consuming procedure. To address this problem, we developed a label-free method for rapidly identifying clinically relevant multilocus sequence typing-verified quinolone-resistant Klebsiella pneumoniae strains. We also applied the method to identify three strains from colony samples, ATCC70063 (control), ST11 and ST15; these are the prevalent quinolone-resistant K. pneumoniae strains in East Asia. The colonies were identified using a drop-coating deposition surface-enhanced Raman scattering (DCD-SERS) procedure coupled with a multivariate statistical method. Our workflow exhibited an enhancement factor of 11.3 × 10^6 to Raman intensities, high reproducibility (relative standard deviation of 7.4%), and a sensitive limit of detection (100 pM rhodamine 6G), with a correlation coefficient of 0.98. All quinolone-resistant K. pneumoniae strains showed similar spectral Raman shifts (high correlations) regardless of bacterial type, as well as different Raman vibrational modes compared to Escherichia coli strains. Our proposed DCD-SERS procedure coupled with the multivariate statistics-based identification method achieved excellent performance in discriminating similar microbes from one another and also in subtyping of K. pneumoniae strains. Therefore, our label-free DCD-SERS procedure coupled with the computational decision supporting method is a potentially useful method for the rapid identification of clinically relevant K. pneumoniae strains.

  6. Effect of three different rotary instrumentation systems on postinstrumentation pain: A randomized clinical trial

    PubMed Central

    Subbiya, Arunajatesan; Cherkas, Pavel S.; Vivekanandhan, Paramasivam; Geethapriya, Nagarajan; Malarvizhi, Dhakshinamoorthy; Mitthra, Suresh

    2017-01-01

    Background: Endodontic instrumentation is liable to cause some postinstrumentation pain (PIP). Rotary endodontic instruments differ in their design, metallurgy, surface treatment, etc. Aim: This randomized clinical trial aimed to assess the incidence of PIP after root canal instrumentation with three different rotary endodontic systems which differ in their design, namely, ProTaper, Mtwo, and K3. Materials and Methods: A total of 150 patients between the ages of 25 and 50 were chosen for the study. Teeth with asymptomatic irreversible pulpitis due to carious exposure were selected. The patients received local anesthesia by inferior alveolar nerve block. After preparing the access cavity, root canal instrumentation was done with one of the three instruments (n = 50) and closed dressing was given. PIP was assessed every 12 h for 5 days, and tenderness to percussion was analyzed at the end of 1, 3, and 7 days. Statistical Analysis: Mann–Whitney U-test to determine significant differences at P < 0.01. Results: PIP and tenderness were lower in the Mtwo group than in the ProTaper and K3 groups up to 84 h and 72 h, respectively, and the differences were statistically significant (P < 0.05). There was no statistically significant difference between ProTaper and K3 both in PIP and tenderness. Conclusion: Rotary endodontic instrumentation causes some degree of PIP and tenderness to percussion. Among the instruments used, Mtwo causes less PIP and tenderness when compared to ProTaper and K3, and there was no difference between ProTaper and K3. Clinical Relevance: PIP is highly subjective and may vary among different subjects. The apical (3 mm) taper of ProTaper was 0.08 followed by a smaller taper, whereas the other two files had a constant 0.06 taper, which means there could have been greater apical extrusion and therefore more PIP. Although the mean age was similar, there could have been a difference in the size of the canal and therefore a difference in apical extrusion and PIP. PMID:29430103

  7. Analysis of differences in bone removal during femoral box osteotomy for primary total knee arthroplasty.

    PubMed

    Graceffa, Angelo; Indelli, Pier Francesco; Basnett, Kaitlyn; Marcucci, Massimiliano

    2014-01-01

    This study was conducted to compare the quantity of intercondylar bone removed during femoral box osteotomy for implantation of three contemporary posterior stabilized (PS) total knee arthroplasty designs: Sigma PS (DePuy), Vanguard (Biomet) and Persona (Zimmer). We compared the maximum volumetric bone resection required for the housing of the PS mechanism of these three designs. Bone removal by each PS box cutting jig was three-dimensionally measured. The differences between the three designs were analyzed by the Kruskal-Wallis test. The Mann-Whitney U-test was used for pairwise comparisons. The level of significance was set at p<0.05. For small-size implants, the average box osteotomy volume of Persona was significantly smaller than the Vanguard and Sigma PS volumes (p=0.003). The mean difference between Vanguard and Sigma PS (p=0.01) was also significant. For medium size implants, the mean difference between Persona and Sigma PS (p=0.008) and the mean difference between Vanguard and Sigma PS (p=0.01) were statistically significant. For large size implants, the mean difference between Vanguard and Sigma PS (p=0.01) and the mean difference between Sigma PS and Persona (p=0.008) were statistically significant. Irrespective of implant size, the Persona cutting jig always resected significantly less bone than did Vanguard and Sigma PS. Although this study does not establish any clinical relevance of removing more or less bone at primary TKA, its results suggest that if a PS design is indicated, it is preferable to select a model which resects less distal femoral bone.

  8. Nonstationarity RC Workshop Report: Nonstationary Weather Patterns and Extreme Events Informing Design and Planning for Long-Lived Infrastructure

    DTIC Science & Technology

    2017-11-01

    magnitude, intensity, and seasonality of climate. For infrastructure projects, relevant design life often exceeds 30 years—a period of time of...uncertainty about future statistical properties of climate at time and spatial scales required for planning and design purposes. Information...about future statistical properties of climate at time and spatial scales required for planning and design, and for assessing future operational

  9. Implementation of the common phrase index method on the phrase query for information retrieval

    NASA Astrophysics Data System (ADS)

    Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah

    2017-08-01

    With the development of technology, finding information in news text has become easier, because news is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using search engines. When searching for relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query and their positions clearly affect the relevance of the documents retrieved and, as a result, the accuracy of the information obtained. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents retrieved. The system is built with the stages of pre-processing, indexing, term weighting calculation, and cosine similarity calculation, and it displays the search results ranked by cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used for evaluation in two steps: first, the relevant documents were determined using the kappa statistic; second, the system success rate was measured using precision, recall, and F-measure. The kappa statistic was 0.71, so the relevance judgments were considered suitable for the system evaluation. Precision, recall, and F-measure were 0.37, 0.50, and 0.43, respectively, indicating that the success rate of the system in retrieving relevant documents is low.
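
    The evaluation quantities used above have simple set-based definitions. The sketch below computes Cohen's kappa for two relevance annotators and precision, recall, and F-measure for a single query; the document identifiers and judgments are hypothetical.

```python
def cohen_kappa(ann1, ann2):
    """Cohen's kappa for two binary relevance annotators (lists of 0/1 judgments)."""
    n = len(ann1)
    po = sum(a == b for a, b in zip(ann1, ann2)) / n          # observed agreement
    p_yes = (sum(ann1) / n) * (sum(ann2) / n)                 # chance agreement on 'relevant'
    p_no = (1 - sum(ann1) / n) * (1 - sum(ann2) / n)          # chance agreement on 'not relevant'
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

def precision_recall_f1(retrieved, relevant):
    """Set-based retrieval metrics for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical annotator judgments and one query's result list against its relevance judgments
print(round(cohen_kappa([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 0, 0, 1, 1, 0]), 2))
print(precision_recall_f1(retrieved=["d3", "d7", "d9", "d12"], relevant=["d3", "d5", "d9"]))
```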

  10. What is the effect of surgery on the quality of life of the adolescent with adolescent idiopathic scoliosis? A review and statistical analysis of the literature.

    PubMed

    Rushton, Paul R P; Grevitt, Michael P

    2013-04-20

    Review and statistical analysis of studies evaluating the effect of surgery on the health-related quality of life of adolescents with adolescent idiopathic scoliosis, using Scoliosis Research Society (SRS) outcomes. The aim was to apply published minimal clinically important difference (MCID) values for the SRS22r questionnaire to the literature to identify what areas of health-related quality of life are consistently affected by surgery and whether changes are clinically meaningful. The interpretation of published studies using the SRS outcomes has been limited by the lack of MCID values for the questionnaire domains. The recent publication of these data allows the clinical importance of any changes in these studies to be examined for the first time. A literature search was undertaken to locate suitable studies that were then analyzed. Statistically significant differences from baseline to 2 years postoperatively were ascertained by narratively reporting the analyses within included studies. When possible, clinically significant changes were assessed using 95% confidence intervals for the change in mean domain score. If the lower bound of the confidence interval for the change exceeded the MCID for that domain, the change was considered clinically significant. The numbers of cohorts available for the different analyses varied (5-16). Eighty-one percent and 94% of included cohorts experienced statistically significant improvements in the pain and self-image domains, respectively. In terms of clinical significance, it was only self-image that regularly improved by more than the MCID, doing so in 4 of 5 included cohorts (80%) compared with 1 of 12 cohorts (8%) for pain. No clinically relevant changes occurred in mental health or activity domains. Evidence suggests that surgery can lead to clinically important improvement in patient self-image. Surgeons and patients should be aware of the limited evidence for improvements in domains other than self-image after surgery. Surgical decision-making will also be influenced by the natural history of adolescent idiopathic scoliosis.
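
    The clinical-significance rule described above compares the lower bound of the 95% confidence interval for the mean change against the MCID. A minimal sketch is shown below; the change scores and the MCID value are hypothetical, and a normal approximation is used for the interval.

```python
import math

def change_is_clinically_significant(mean_change, sd_change, n, mcid, z=1.96):
    """Return the lower 95% CI bound of the mean change and whether it exceeds the MCID."""
    se = sd_change / math.sqrt(n)
    lower = mean_change - z * se
    return lower, lower > mcid

# Hypothetical SRS-22r self-image domain change at 2 years, with an assumed MCID of 0.98
print(change_is_clinically_significant(mean_change=1.25, sd_change=0.70, n=60, mcid=0.98))
```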

  11. Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan

    2016-09-01

    Craniofacial reconstruction (CFR) is used to recreate a likeness of original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) of anatomical landmarks, related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where differing landmark definitions may make results difficult to compare. In the present study, a total of 90,198 one-to-one correspondence skull vertices are established on 171 head CT-scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that at 76.12% of all vertices the FSTT is greater in males than in females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. Statistically significant sex-related differences are found at 55.12% of all vertices, and statistically significant differences between the three age groups are found at a majority of vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective in understanding the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.

  12. Building a database for statistical characterization of ELMs on DIII-D

    NASA Astrophysics Data System (ADS)

    Fritch, B. J.; Marinoni, A.; Bortolon, A.

    2017-10-01

    Edge localized modes (ELMs) are bursty instabilities which occur in the edge region of H-mode plasmas and have the potential to damage in-vessel components of future fusion machines by exposing the divertor region to large energy and particle fluxes during each ELM event. While most ELM studies focus on average quantities (e.g. energy loss per ELM), this work investigates the statistical distributions of ELM characteristics, as a function of plasma parameters. A semi-automatic algorithm is being used to create a database documenting trigger times of the tens of thousands of ELMs for DIII-D discharges in scenarios relevant to ITER, thus allowing statistically significant analysis. Probability distributions of inter-ELM periods and energy losses will be determined and related to relevant plasma parameters such as density, stored energy, and current in order to constrain models and improve estimates of the expected inter-ELM periods and sizes, both of which must be controlled in future reactors. Work supported in part by US DoE under the Science Undergraduate Laboratory Internships (SULI) program, DE-FC02-04ER54698 and DE-FG02- 94ER54235.

  13. Statistics for Radiology Research.

    PubMed

    Obuchowski, Nancy A; Subhas, Naveen; Polster, Joshua

    2017-02-01

    Biostatistics is an essential component in most original research studies in imaging. In this article we discuss five key statistical concepts for study design and analyses in modern imaging research: statistical hypothesis testing, particularly focusing on noninferiority studies; imaging outcomes especially when there is no reference standard; dealing with the multiplicity problem without spending all your study power; relevance of confidence intervals in reporting and interpreting study results; and finally tools for assessing quantitative imaging biomarkers. These concepts are presented first as examples of conversations between investigator and biostatistician, and then more detailed discussions of the statistical concepts follow. Three skeletal radiology examples are used to illustrate the concepts.

  14. Permutation testing of orthogonal factorial effects in a language-processing experiment using fMRI.

    PubMed

    Suckling, John; Davis, Matthew H; Ooi, Cinly; Wink, Alle Meije; Fadili, Jalal; Salvador, Raymond; Welchew, David; Sendur, Levent; Maxim, Vochita; Bullmore, Edward T

    2006-05-01

    The block-paradigm of the Functional Image Analysis Contest (FIAC) dataset was analysed with the Brain Activation and Morphological Mapping software. Permutation methods in the wavelet domain were used for inference on cluster-based test statistics of orthogonal contrasts relevant to the factorial design of the study, namely: the average response across all active blocks, the main effect of speaker, the main effect of sentence, and the interaction between sentence and speaker. Extensive activation was seen with all these contrasts. In particular, different vs. same-speaker blocks produced elevated activation in bilateral regions of the superior temporal lobe and repetition suppression for linguistic materials (same vs. different-sentence blocks) in left inferior frontal regions. These are regions previously reported in the literature. Additional regions were detected in this study, perhaps due to the enhanced sensitivity of the methodology. Within-block sentence suppression was tested post-hoc by regression of an exponential decay model onto the extracted time series from the left inferior frontal gyrus, but no strong evidence of such an effect was found. The significance levels set for the activation maps are P-values at which we expect <1 false-positive cluster per image. Nominal type I error control was verified by empirical testing of a test statistic corresponding to a randomly ordered design matrix. The small size of the BOLD effect necessitates sensitive methods of detection of brain activation. Permutation methods permit the necessary flexibility to develop novel test statistics to meet this challenge.
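
    As a simplified illustration of permutation-based inference with family-wise error control (here a two-sample max-statistic version rather than the wavelet cluster statistics used in the study), a sketch might look as follows; the data are synthetic.

```python
import numpy as np

def permutation_maxT(data_a, data_b, n_perm=5000, seed=0):
    """Permutation test controlling family-wise error via the maximum statistic.

    data_a, data_b: (subjects x features) arrays, e.g. cluster masses per region.
    Returns one corrected p-value per feature (illustrative two-sample version).
    """
    rng = np.random.default_rng(seed)
    X = np.vstack([data_a, data_b])
    labels = np.array([0] * len(data_a) + [1] * len(data_b))

    def stat(lab):
        return X[lab == 0].mean(axis=0) - X[lab == 1].mean(axis=0)

    observed = stat(labels)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # Maximum absolute statistic across features under a random relabelling
        max_null[i] = np.abs(stat(rng.permutation(labels))).max()
    # Corrected p: fraction of permutation maxima that reach each observed value
    return (1 + (max_null[:, None] >= np.abs(observed)).sum(axis=0)) / (n_perm + 1)

a = np.random.default_rng(1).normal(0.5, 1, size=(12, 4))  # hypothetical "active" group
b = np.random.default_rng(2).normal(0.0, 1, size=(12, 4))  # hypothetical comparison group
print(permutation_maxT(a, b).round(3))
```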

  15. Scandcleft randomised trials of primary surgery for unilateral cleft lip and palate: 8. Assessing naso-labial appearance in 5-year-olds - a preliminary study.

    PubMed

    Mølsted, Kirsten; Humerinta, Kirsti; Küseler, Annelise; Skaare, Pål; Bellardie, Haydn; Shaw, William; Karsten, Agneta; Kåre Sæle, Paul; Rizell, Sara; Marcusson, Agneta; Eyres, Philip; Semb, Gunvor

    2017-02-01

    Facial appearance is one of the most relevant measures of success in cleft lip and palate treatment. The aim was to assess nasolabial appearance at 5 years of age in all children in the project. In this part of the project the local protocol for lip closure continued to be used because the primary lip and nose operations were not part of the randomisation. The great majority of the surgeons used Millard's technique together with McComb's technique for the nose. One center used the Tennison-Randall technique, and in one center the center's own technique, together with nose plugs, was used. Three hundred and fifty-nine children participated in this part of the project. Standardised photos according to a specific protocol developed for the Scandcleft project were taken. Only the nasolabial area was shown; the surrounding facial features were masked. Three components were scored using a 5-point ordinal scale. A newly developed Scandcleft yardstick was used. The reliability of the method was tested using the weighted kappa statistic. Both the interrater and intrarater reliability scores were good to very good. The Millard procedure combined with the McComb technique had been used in the majority of the cases in all three trials. There were statistically significant differences between the three trials concerning upper lip, nasal form, and cleft side profile. ISRCTN29932826.

  16. An Audit of Indian Health Insurance Claims for Mental Illness from Pooled Insurance Information Bureau's Macroindicator Data.

    PubMed

    Mohandoss, Anusa Arunachalam; Thavarajah, Rooban

    2017-01-01

    Information on the social and voluntary insurance coverage of mental illness in India is scarce. We attempted to address this lacuna using a secondary macrodata approach over 3 years. Mental illness per se is not covered by most existing Indian health insurance policies. Publicly available de-identified claim macrodata for all health (non-life) insurance for the Indian financial years 2011-2012 to 2013-2014 were collected. The age group, gender, amount of claims, proportion of claims, and number of days of hospitalization were collected and analyzed. Descriptive statistics, the Chi-square test, and Wilcoxon tests were used as appropriate. P ≤ 0.05 was considered statistically significant. In 2011-2012, there were 2864 claims citing mental illness from the 2,591,781 registered members (0.11% of all claims), which decreased to 0.03% in 2012-2013 and rose marginally to 0.07% in 2013-2014. The total amount of claims paid for mental illness was Rs. 51.7 million in 2011-2012, Rs. 97.2 million in 2012-2013, and Rs. 150 million in 2013-2014. Statistically significant differences emerged in terms of age group, gender, amount and proportion of claims, and number of days of hospitalization. The penetration of health insurance is low, and claims for mental illness remain low. The differences in patterns of age, gender, amount of claims, and number of days of hospitalization for mental illness provide relevant information to formulate future policies.

  17. I-Maculaweb: A Tool to Support Data Reuse in Ophthalmology

    PubMed Central

    Bonetto, Monica; Nicolò, Massimo; Gazzarata, Roberta; Fraccaro, Paolo; Rosa, Raffaella; Musetti, Donatella; Musolino, Maria; Traverso, Carlo E.

    2016-01-01

    This paper presents a Web-based application to collect and manage clinical data and clinical trials in a single tool. I-maculaweb is a user-friendly Web application designed to manage, share, and analyze clinical data from patients affected by degenerative and vascular diseases of the macula. The unique and innovative scientific and technological element of this project is the integration of individual and population data relevant to degenerative and vascular diseases of the macula. Clinical records can also be extracted for statistical purposes and used for clinical decision support systems. I-maculaweb is based on an existing multilevel and multiscale data management model, which includes general principles suitable for several different clinical domains. The database structure has been built specifically to respect laterality, a key aspect in ophthalmology. Users can add and manage patient records, follow-up visits, treatments, diagnoses, and clinical history. There are two modalities for extracting records: one for the patient's own center, in which personal details are shown, and one for statistical purposes, in which all centers' anonymized data are visible. The Web platform allows effective management, sharing, and reuse of information within primary care and clinical research. Clear and precise clinical data will improve understanding of the real-life management of degenerative and vascular diseases of the macula and increase the precision of epidemiologic and statistical data. Furthermore, this Web-based application can easily be employed as an electronic clinical research file in clinical studies. PMID:27170913

  18. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. This statistical adjustment does not make it possible to correct the pixels of SPP data for which there are no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that does not use the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels; a spatial analysis is therefore involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative root mean square error (rRMSE) and relative bias (rBIAS). The results show that the change in analysis scale reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723

  19. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data. This statistical adjustment does not make it possible to correct the pixels of SPP data for which there are no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that does not use the daily gauge data of the pixel to be corrected, but rather the daily gauge data from surrounding pixels; a spatial analysis is therefore involved. The first step defines hydroclimatic areas using a spatial classification that considers precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data contained within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative root mean square error (rRMSE) and relative bias (rBIAS). The results show that the change in analysis scale reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the defined bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
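
    To make the second step concrete, the sketch below applies a basic empirical quantile mapping correction to synthetic daily rainfall within a single hypothetical hydroclimatic area. Variable names and distributions are assumptions for illustration; the published method involves additional details (area delineation, wet-day treatment) that are not reproduced here.

    ```python
    # Basic empirical quantile mapping on synthetic rainfall (illustrative only).
    import numpy as np

    def quantile_map(spp_values, spp_reference, gauge_reference):
        """Correct satellite values by matching their quantiles to the gauge distribution."""
        # quantile of each value within the satellite reference distribution
        q = np.searchsorted(np.sort(spp_reference), spp_values) / len(spp_reference)
        q = np.clip(q, 0.0, 1.0)
        # value at the same quantile of the gauge distribution
        return np.quantile(gauge_reference, q)

    rng = np.random.default_rng(1)
    gauge = rng.gamma(shape=0.8, scale=12.0, size=3000)   # "true" daily rainfall (mm/day)
    spp = rng.gamma(shape=0.6, scale=18.0, size=3000)     # biased satellite estimate
    corrected = quantile_map(spp, spp, gauge)
    print("raw bias      :", round(spp.mean() - gauge.mean(), 2), "mm/day")
    print("corrected bias:", round(corrected.mean() - gauge.mean(), 2), "mm/day")
    ```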

  20. A rigidity transition and glassy dynamics in a model for confluent 3D tissues

    NASA Astrophysics Data System (ADS)

    Merkel, Matthias; Manning, M. Lisa

    The origin of rigidity in disordered materials is an outstanding open problem in statistical physics. Recently, a new type of rigidity transition was discovered in a family of models for 2D biological tissues, but the mechanisms responsible for rigidity remain unclear. This is not just a statistical physics problem, but also relevant for embryonic development, cancer growth, and wound healing. To gain insight into this rigidity transition and make new predictions about biological bulk tissues, we have developed a fully 3D self-propelled Voronoi (SPV) model. The model takes into account shape, elasticity, and self-propelled motion of the individual cells. We find that in the absence of self-propulsion, this model exhibits a rigidity transition that is controlled by a dimensionless model parameter describing the preferred cell shape, with an accompanying structural order parameter. In the presence of self-propulsion, the rigidity transition appears as a glass-like transition featuring caging and aging effects. Given the similarities between this transition and jamming in particulate solids, it is natural to ask if the two transitions are related. By comparing statistics of Voronoi geometries, we show the transitions are surprisingly close but demonstrably distinct. Furthermore, an index theorem used to identify topologically protected mechanical modes in jammed systems can be extended to these vertex-type models. In our model, residual stresses govern the transition and enter the index theorem in a different way compared to jammed particles, suggesting the origin of rigidity may be different between the two.
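
    For orientation, the sketch below evaluates the kind of shape-based energy commonly used in 3D Voronoi models of confluent tissue, in which each cell pays quadratic penalties for deviating from a preferred volume and surface area, and the dimensionless preferred shape s0 = S0/V0^(2/3) acts as the control parameter. The functional form and parameter values are assumptions based on this class of models, not taken from the abstract.

    ```python
    # Sketch of a shape-based tissue energy for a 3D Voronoi-type model (assumed form).
    import numpy as np

    def tissue_energy(volumes, surfaces, V0=1.0, s0=5.4, K_V=1.0, K_S=1.0):
        S0 = s0 * V0 ** (2.0 / 3.0)            # preferred surface area from the shape parameter
        return np.sum(K_V * (volumes - V0) ** 2 + K_S * (surfaces - S0) ** 2)

    # Toy example: 100 cells with slightly noisy volumes and surface areas.
    rng = np.random.default_rng(2)
    V = 1.0 + 0.05 * rng.normal(size=100)
    S = 5.5 + 0.10 * rng.normal(size=100)
    for s0 in (5.2, 5.4, 5.6):
        print(f"s0 = {s0}: E = {tissue_energy(V, S, s0=s0):.3f}")
    ```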

  1. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
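
    The sketch below illustrates the construction of the maximally selected McNemar statistic: small absolute differences are discarded at increasing data-driven thresholds, a McNemar-type statistic is computed from the signs of the remaining differences, and the maximum over thresholds is used. The paper provides the exact null distribution; here a sign-flip permutation null is substituted as an approximation, and the data are invented.

    ```python
    # Maximally selected McNemar-type statistic with an approximate sign-flip null.
    import numpy as np

    def max_mcnemar(d):
        d = np.asarray(d, dtype=float)
        thresholds = np.unique(np.abs(d[d != 0]))
        best = 0.0
        for c in thresholds:
            kept = d[np.abs(d) >= c]               # discard small |differences|
            n_pos, n_neg = np.sum(kept > 0), np.sum(kept < 0)
            if n_pos + n_neg > 0:
                best = max(best, (n_pos - n_neg) ** 2 / (n_pos + n_neg))
        return best

    def sign_flip_pvalue(d, n_perm=5000, seed=0):
        rng = np.random.default_rng(seed)
        observed = max_mcnemar(d)
        null = [max_mcnemar(d * rng.choice([-1, 1], size=len(d))) for _ in range(n_perm)]
        return (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)

    # Pre-post safety differences: mostly symmetric, with a few large positive shifts.
    diffs = np.array([-0.2, 0.1, -0.3, 0.2, 0.4, -0.1, 3.1, 2.8, 0.05, -0.15, 2.5, 0.0])
    print("p =", sign_flip_pvalue(diffs))
    ```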

  2. Efficacy of polishing kits on the surface roughness and color stability of different composite resins.

    PubMed

    Kocaagaoglu, H; Aslan, T; Gürbulak, A; Albayrak, H; Taşdemir, Z; Gumus, H

    2017-05-01

    Different polishing kits may have different effects on composite resin surfaces. The aim of this study was to evaluate the surface roughness and color stability of four different composite resins finished with different polishing techniques. Thirty specimens (15 mm in diameter and 2 mm in height) were made for each composite resin group (nanohybrid, GrandioSo-GS; nanohybrid, Clearfil Majesty Esthetic-CME; hybrid, Valux Plus-VP; micro-hybrid, Ruby Comp-RC), which differ in monomer composition and particle size, for a total of 120 specimens. Each composite group was divided into three subgroups (n = 10). The first subgroup of each composite group served as the control (C) and received no surface treatment. The second subgroup of each composite group was polished with finishing discs (Bisco Finishing Discs; Bisco Inc., Schaumburg, IL, USA). The third subgroup of each composite group was polished with polishing wheels (Enhance and PoGo, Dentsply, Konstanz, Germany). The surface roughness and color differences of the specimens were measured and recorded. The data were compared using the Kruskal-Wallis test, and regression analysis was used to examine the correlation between surface roughness and color differences of the specimens (α = 0.05). The Kruskal-Wallis test indicated a significant difference among the composite resins in terms of ΔE (P < 0.05), whereas there was no statistically significant difference among composite resins in terms of surface roughness (P > 0.05). The regression analysis indicated a statistically significant correlation between Ra and ΔE values (P < 0.05, r2 = 0.74). The findings of the present study have clinical relevance in the choice of polishing kits.
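
    The analysis pattern described above (a Kruskal-Wallis test across groups plus a regression of colour difference on roughness) can be sketched as follows; all values are simulated placeholders, not the study's measurements.

    ```python
    # Kruskal-Wallis across composite groups and a Ra-vs-ΔE regression (simulated data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    delta_e = {name: rng.normal(loc, 0.4, size=10)
               for name, loc in [("GS", 2.0), ("CME", 2.6), ("VP", 3.1), ("RC", 3.4)]}
    h, p = stats.kruskal(*delta_e.values())
    print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

    ra = rng.uniform(0.05, 0.40, size=40)                 # surface roughness Ra (µm)
    de = 1.0 + 6.0 * ra + rng.normal(0, 0.3, size=40)     # colour difference ΔE
    res = stats.linregress(ra, de)
    print(f"Ra vs ΔE: r² = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
    ```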

  3. Evaluating ambivalence: social-cognitive and affective brain regions associated with ambivalent decision-making

    PubMed Central

    van Harreveld, Frenk; Rotteveel, Mark; Lelieveld, Gert-Jan; Crone, Eveline A.

    2014-01-01

    Ambivalence is a state of inconsistency that is often experienced as affectively aversive. In this functional magnetic resonance imaging study, we investigated the role of cognitive and social-affective processes in the experience of ambivalence and coping with its negative consequences. We examined participants’ brain activity during the dichotomous evaluation (pro vs contra) of pretested ambivalent (e.g. alcohol), positive (e.g. happiness) and negative (e.g. genocide) word stimuli. We manipulated evaluation relevance by varying the probability of evaluation consequences, under the hypothesis that ambivalence is experienced as more negative when outcomes are relevant. When making ambivalent evaluations, more activity was found in the anterior cingulate cortex, the insula, the temporal parietal junction (TPJ) and the posterior cingulate cortex (PCC)/precuneus, for both high and low evaluation relevance. After statistically conservative corrections, activity in the TPJ and PCC/precuneus was negatively correlated with experienced ambivalence after scanning, as measured by Priester and Petty’s felt ambivalence scale (1996). The findings show that cognitive and social-affective brain areas are involved in the experience of ambivalence. However, these networks are differently associated with subsequent reduction of ambivalence, thus highlighting the importance of understanding both cognitive and affective processes involved in ambivalent decision-making. PMID:23685774

  4. Process defects and in situ monitoring methods in metal powder bed fusion: a review

    NASA Astrophysics Data System (ADS)

    Grasso, Marco; Colosimo, Bianca Maria

    2017-04-01

    Despite continuous technological enhancements of metal Additive Manufacturing (AM) systems, the lack of process repeatability and stability still represents a barrier to industrial breakthrough. The most relevant metal AM applications currently involve industrial sectors (e.g. aerospace and bio-medical) where defect avoidance is fundamental. Because of this, there is a need to develop novel in situ monitoring tools able to keep the stability of the process under control on a layer-by-layer basis and to detect the onset of defects as soon as possible. On the one hand, AM systems must be equipped with in situ sensing devices able to measure relevant quantities during the process, a.k.a. process signatures. On the other hand, in-process data analytics and statistical monitoring techniques are required to detect and localize defects in an automated way. This paper reviews the literature and the commercial tools for in situ monitoring of powder bed fusion (PBF) processes. It explores the different categories of defects and their main causes, the most relevant process signatures, and the in situ sensing approaches proposed so far. Particular attention is devoted to the development of automated defect detection rules and the study of process control strategies, which represent two critical fields for the development of future smart PBF systems.

  5. Defining functioning levels in patients with schizophrenia: A combination of a novel clustering method and brain SPECT analysis.

    PubMed

    Catherine, Faget-Agius; Aurélie, Vincenti; Eric, Guedj; Pierre, Michel; Raphaëlle, Richieri; Marine, Alessandrini; Pascal, Auquier; Christophe, Lançon; Laurent, Boyer

    2017-12-30

    This study aims to define functioning levels in patients with schizophrenia using an interpretable clustering method based on a specific functioning scale, the Functional Remission Of General Schizophrenia (FROGS) scale, and to test their validity against clinical and neuroimaging characteristics. In this observational study, patients with schizophrenia were classified using a hierarchical top-down method called clustering using unsupervised binary trees (CUBT). Socio-demographic, clinical, and neuroimaging SPECT perfusion data were compared between the clusters to ensure their clinical relevance. A total of 242 patients were analyzed. A four-level functioning structure was identified: 54 patients were classified as "minimal", 81 as "low", 64 as "moderate", and 43 as "high". The clustering shows satisfactory statistical properties, including reproducibility and discriminancy. The four clusters consistently differentiate patients. "High" functioning level patients reported significantly lower scores on the PANSS and the CDSS, and significantly higher scores on the GAF, the MARS, and the S-QoL 18. Functioning levels were significantly associated with cerebral perfusion of two relevant areas: the left inferior parietal cortex and the anterior cingulate. Our study provides relevant functioning levels in schizophrenia and may enhance the use of functioning scales. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli

    PubMed Central

    Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.

    2013-01-01

    Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
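
    The sketch below illustrates the core idea on synthetic data: a spike-triggered covariance difference is computed after projecting the stimuli onto the subspace orthogonal to the stimulus covariance's outstanding mode. The stimulus model, toy neuron, and dimensions are assumptions for illustration only.

    ```python
    # Spike-triggered covariance in the subspace orthogonal to an outstanding mode (toy data).
    import numpy as np

    rng = np.random.default_rng(4)
    T, D = 20000, 40
    # strongly correlated Gaussian stimuli: one outstanding common mode
    base = rng.normal(size=(T, D))
    outstanding = np.ones(D) / np.sqrt(D)
    stim = base + 4.0 * rng.normal(size=(T, 1)) * outstanding

    # toy neuron: spike counts driven by the squared projection on a hidden feature
    feature = np.zeros(D); feature[10:15] = 1.0; feature /= np.linalg.norm(feature)
    rate = (stim @ feature) ** 2
    spikes = rng.poisson(rate / rate.mean() * 0.1)

    # project stimuli onto the subspace orthogonal to the outstanding eigenvector
    C = np.cov(stim.T)
    evals, evecs = np.linalg.eigh(C)
    u = evecs[:, -1]
    stim_perp = stim - np.outer(stim @ u, u)

    # spike-triggered second moment minus the prior covariance, in that subspace
    C_spike = (stim_perp.T @ (spikes[:, None] * stim_perp)) / spikes.sum()
    dC = C_spike - np.cov(stim_perp.T)
    vals, vecs = np.linalg.eigh(dC)
    recovered = vecs[:, np.argmax(np.abs(vals))]
    print("overlap with true feature:", round(float(abs(recovered @ feature)), 2))
    ```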

  7. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of selected stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
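
    A minimal simulation of the kind described above is sketched below: geometric random walk series are generated and the ages of their upper records are collected. The step size, series length, and age definition (the last record's age is truncated at the end of the series) are illustrative choices, not the paper's exact settings.

    ```python
    # Record ages of simulated geometric random walks (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(5)

    def record_ages(series):
        """Durations between successive upper records; the last age is truncated at the series end."""
        record_times = [0]
        current_max = series[0]
        for t, x in enumerate(series[1:], start=1):
            if x > current_max:
                current_max = x
                record_times.append(t)
        record_times.append(len(series))
        return np.diff(record_times)

    n_steps, n_series = 5000, 200
    ages = []
    for _ in range(n_series):
        steps = np.exp(rng.normal(0.0, 0.01, size=n_steps))   # multiplicative Gaussian steps
        prices = 100.0 * np.cumprod(steps)
        ages.extend(record_ages(prices))

    ages = np.array(ages)
    print("number of record ages:", ages.size)
    print("mean / max record age:", round(float(ages.mean()), 1), "/", int(ages.max()))
    ```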

  8. ROTAS: a rotamer-dependent, atomic statistical potential for assessment and prediction of protein structures.

    PubMed

    Park, Jungkap; Saitou, Kazuhiro

    2014-09-18

    Multibody potentials accounting for cooperative effects of molecular interactions have shown better accuracy than typical pairwise potentials. The main challenge in the development of such potentials is to find relevant structural features that characterize the tightly folded proteins. Also, the side-chains of residues adopt several specific, staggered conformations, known as rotamers within protein structures. Different molecular conformations result in different dipole moments and induce charge reorientations. However, until now modeling of the rotameric state of residues had not been incorporated into the development of multibody potentials for modeling non-bonded interactions in protein structures. In this study, we develop a new multibody statistical potential which can account for the influence of rotameric states on the specificity of atomic interactions. In this potential, named "rotamer-dependent atomic statistical potential" (ROTAS), the interaction between two atoms is specified by not only the distance and relative orientation but also by two state parameters concerning the rotameric state of the residues to which the interacting atoms belong. It was clearly found that the rotameric state is correlated to the specificity of atomic interactions. Such rotamer-dependencies are not limited to specific type or certain range of interactions. The performance of ROTAS was tested using 13 sets of decoys and was compared to those of existing atomic-level statistical potentials which incorporate orientation-dependent energy terms. The results show that ROTAS performs better than other competing potentials not only in native structure recognition, but also in best model selection and correlation coefficients between energy and model quality. A new multibody statistical potential, ROTAS accounting for the influence of rotameric states on the specificity of atomic interactions was developed and tested on decoy sets. The results show that ROTAS has improved ability to recognize native structure from decoy models compared to other potentials. The effectiveness of ROTAS may provide insightful information for the development of many applications which require accurate side-chain modeling such as protein design, mutation analysis, and docking simulation.

  9. Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions

    DOE PAGES

    Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...

    2016-02-24

    Chiral effective field theory (χEFT) provides a systematic approach to describing low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT, and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to be substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
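
    As a toy illustration of the comparison mentioned above, the sketch below propagates a parameter covariance to an observable both linearly (via a numerical Jacobian) and by Monte Carlo sampling. The observable, covariance matrix, and parameter values are invented stand-ins, not χEFT quantities.

    ```python
    # Linear (Jacobian-based) error propagation vs. Monte Carlo sampling (toy example).
    import numpy as np

    rng = np.random.default_rng(6)

    def observable(lecs):
        """Toy nonlinear observable depending on a vector of low-energy constants."""
        return lecs[0] ** 2 + 0.5 * lecs[0] * lecs[1] + np.sin(lecs[2])

    lec_opt = np.array([1.2, -0.7, 0.3])
    cov = np.diag([0.02, 0.05, 0.01]) ** 2          # toy LEC covariance matrix

    # linear propagation: sigma^2 = J C J^T with a central-difference Jacobian
    eps = 1e-6
    J = np.array([(observable(lec_opt + eps * np.eye(3)[i]) -
                   observable(lec_opt - eps * np.eye(3)[i])) / (2 * eps) for i in range(3)])
    sigma_linear = np.sqrt(J @ cov @ J)

    # Monte Carlo propagation: sample parameters from their covariance
    samples = rng.multivariate_normal(lec_opt, cov, size=20000)
    sigma_mc = np.std([observable(s) for s in samples])

    print(f"linear propagation : {sigma_linear:.4f}")
    print(f"Monte Carlo        : {sigma_mc:.4f}")
    ```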

  10. Partial Least Squares Regression Can Aid in Detecting Differential Abundance of Multiple Features in Sets of Metagenomic Samples

    PubMed Central

    Libiger, Ondrej; Schork, Nicholas J.

    2015-01-01

    It is now feasible to examine the composition and diversity of microbial communities (i.e., “microbiomes”) that populate different human organs and orifices using DNA sequencing and related technologies. To explore the potential links between changes in microbial communities and various diseases in the human body, it is essential to test associations involving different species within and across microbiomes, environmental settings and disease states. Although a number of statistical techniques exist for carrying out relevant analyses, it is unclear which of these techniques exhibit the greatest statistical power to detect associations given the complexity of most microbiome datasets. We compared the statistical power of principal component regression, partial least squares regression, regularized regression, distance-based regression, Hill's diversity measures, and a modified test implemented in the popular and widely used microbiome analysis methodology “Metastats” across a wide range of simulated scenarios involving changes in feature abundance between two sets of metagenomic samples. For this purpose, simulation studies were used to change the abundance of microbial species in a real dataset from a published study examining human hands. Each technique was applied to the same data, and its ability to detect the simulated change in abundance was assessed. We hypothesized that a small subset of methods would outperform the rest in terms of the statistical power. Indeed, we found that the Metastats technique modified to accommodate multivariate analysis and partial least squares regression yielded high power under the models and data sets we studied. The statistical power of diversity measure-based tests, distance-based regression and regularized regression was significantly lower. Our results provide insight into powerful analysis strategies that utilize information on species counts from large microbiome data sets exhibiting skewed frequency distributions obtained on a small to moderate number of samples. PMID:26734061
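
    A bare-bones version of the partial least squares approach is sketched below: species counts are simulated for two groups, a subset of species is shifted in abundance, and the loadings of the first PLS component are inspected. The simulation setup is an assumption for illustration and does not reproduce the study's hand-microbiome data or its modified Metastats procedure.

    ```python
    # PLS regression to flag features whose abundance differs between two sample sets (simulated).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    n_per_group, n_species = 30, 200
    baseline = rng.lognormal(mean=1.0, sigma=1.0, size=n_species)

    group_a = rng.poisson(baseline, size=(n_per_group, n_species))
    shifted = baseline.copy(); shifted[:10] *= 3.0        # 10 species change abundance
    group_b = rng.poisson(shifted, size=(n_per_group, n_species))

    X = np.vstack([group_a, group_b]).astype(float)
    y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    # species with the largest loadings on the first component drive the group separation
    loadings = np.abs(pls.x_loadings_[:, 0])
    top = np.argsort(loadings)[::-1][:10]
    print("top-loading species indices:", np.sort(top))
    ```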

  11. Adaptation in Coding by Large Populations of Neurons in the Retina

    NASA Astrophysics Data System (ADS)

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent picture which is discussed at the end of Chapter 5. We conclude with a few future directions related to this thesis.

  12. 46 CFR 201.132 - Conduct of the hearing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., such as an official report, decision, opinion, or published scientific or economic statistical data... relevant part thereof. (h) Oral argument at hearings. A request for oral argument at the close of testimony...

  13. 46 CFR 201.132 - Conduct of the hearing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., such as an official report, decision, opinion, or published scientific or economic statistical data... relevant part thereof. (h) Oral argument at hearings. A request for oral argument at the close of testimony...

  14. 46 CFR 201.132 - Conduct of the hearing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., such as an official report, decision, opinion, or published scientific or economic statistical data... relevant part thereof. (h) Oral argument at hearings. A request for oral argument at the close of testimony...

  15. 46 CFR 201.132 - Conduct of the hearing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., such as an official report, decision, opinion, or published scientific or economic statistical data... relevant part thereof. (h) Oral argument at hearings. A request for oral argument at the close of testimony...

  16. 46 CFR 201.132 - Conduct of the hearing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., such as an official report, decision, opinion, or published scientific or economic statistical data... relevant part thereof. (h) Oral argument at hearings. A request for oral argument at the close of testimony...

  17. Cancer Related-Knowledge - Small Area Estimates

    Cancer.gov

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.

  18. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series, and the relevant statistical theory predicts that the Fourier coefficients of the fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.

  19. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
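
    The binomial probability-of-detection calculation described above can be sketched as follows: for a given number of detections out of inspection trials, the point estimate and a one-sided lower confidence bound (here the Clopper-Pearson bound via the beta distribution) are computed at the 95 and 50 percent levels. The hit/trial counts are illustrative, not from the compiled NDE data sets.

    ```python
    # Probability of detection with one-sided lower confidence bounds (illustrative counts).
    from scipy import stats

    def pod_lower_bound(hits, trials, confidence=0.95):
        """One-sided Clopper-Pearson lower confidence bound on probability of detection."""
        if hits == 0:
            return 0.0
        return stats.beta.ppf(1.0 - confidence, hits, trials - hits + 1)

    for hits, trials in [(29, 29), (45, 50), (25, 50)]:
        est = hits / trials
        lb95 = pod_lower_bound(hits, trials, 0.95)
        lb50 = pod_lower_bound(hits, trials, 0.50)
        print(f"{hits}/{trials}: POD = {est:.2f}, 95% LCB = {lb95:.3f}, 50% LCB = {lb50:.3f}")
    ```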

  20. Is parenting style a predictor of suicide attempts in a representative sample of adolescents?

    PubMed Central

    2014-01-01

    Background: Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, several research and practical suicide-prevention initiatives discuss the possibility of preventing serious self-harm. Profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. Methods: In the years 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. In this survey, the lifetime prevalence of suicide attempts was investigated along with potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As the basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive value of 13 additional potential risk/protective factors was analyzed with single binary logistic regression analyses for each predictor alone; non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. Results: Three parental variables, all protective, showed a relevant association with suicide attempts in adolescents: mother's warmth and father's warmth in childhood and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective: OR: .79) and Rejecting-Neglecting parenting (risk: OR: 1.63) were identified as significant predictors (p < .001) of suicide attempts. Seven further variables were interpreted to be statistically significant and clinically relevant: ADHD, female sex, smoking, Binge Drinking, absenteeism/truancy, migration background, and parental separation events. Conclusions: Parenting style does matter. While children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk, as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and potential areas of intervention for special groups such as migrants or children diagnosed with ADHD. PMID:24766881
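
    As an illustration of the Step I analysis pattern (binary logistic regression of suicide attempt on a parenting variable, controlled for age and sex), the sketch below fits such a model to simulated data and reports odds ratios. Variable names, effect sizes, and the data are invented, not the survey data.

    ```python
    # Binary logistic regression of a simulated suicide-attempt outcome on a parenting variable.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n = 5000
    df = pd.DataFrame({
        "age": rng.integers(14, 17, size=n),
        "female": rng.integers(0, 2, size=n),
        "warmth": rng.normal(0, 1, size=n),    # e.g. maternal warmth in childhood (z-score)
    })
    # higher warmth lowers the (simulated) risk of a lifetime suicide attempt
    logit_p = -3.0 - 0.4 * df["warmth"] + 0.5 * df["female"]
    df["attempt"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    model = smf.logit("attempt ~ warmth + age + female", data=df).fit(disp=False)
    odds_ratios = np.exp(model.params)
    print(odds_ratios.round(2))
    ```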
