Science.gov

Sample records for addition statistically significant

  1. Statistically significant deviations from additivity: What do they mean in assessing toxicity of mixtures?

    PubMed

    Liu, Yang; Vijver, Martina G; Qiu, Hao; Baas, Jan; Peijnenburg, Willie J G M

    2015-12-01

    There is increasing attention from scientists and policy makers to the joint effects of multiple metals on organisms when present in a mixture. Using root elongation of lettuce (Lactuca sativa L.) as a toxicity endpoint, the combined effects of binary mixtures of Cu, Cd, and Ni were studied. The statistical MixTox model was used to search for deviations from the reference models, i.e. concentration addition (CA) and independent action (IA). The deviations were subsequently interpreted as 'interactions'. A comprehensive experiment was designed to test the reproducibility of the 'interactions'. The results showed that the toxicity of binary metal mixtures was equally well predicted by both reference models. We found statistically significant 'interactions' in four of the five datasets. However, the patterns of 'interactions' were found to be inconsistent or even contradictory across the different independent experiments. It is recommended that a statistically significant 'interaction' be treated with care, as it is not necessarily biologically relevant. Searching for a statistically significant interaction can be the starting point for further measurements and modeling to advance the understanding of the underlying mechanisms and non-additive interactions occurring inside organisms. PMID:26188643
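
    The two reference models named in the abstract are simple enough to sketch directly. This is a minimal illustration with hypothetical EC50 values; the MixTox deviation (interaction) terms are omitted.

```python
def concentration_addition_tu(c1, ec50_1, c2, ec50_2):
    """Toxic units under concentration addition (CA).
    A toxic-unit sum of 1 predicts the 50% effect level for the mixture."""
    return c1 / ec50_1 + c2 / ec50_2

def independent_action(e1, e2):
    """Combined effect fraction under independent action (IA):
    E_mix = 1 - (1 - E1) * (1 - E2)."""
    return 1.0 - (1.0 - e1) * (1.0 - e2)

# Hypothetical binary mixture at half of each metal's EC50:
# CA predicts exactly the 50% effect concentration (TU = 1).
tu = concentration_addition_tu(0.5, 1.0, 1.0, 2.0)
# IA: two effects of 20% and 30% combine to 44%, not 50%.
e_mix = independent_action(0.2, 0.3)
```

    A statistically significant deviation from either prediction is what the MixTox framework labels an 'interaction'; as the abstract stresses, that label alone does not establish biological relevance.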

  2. Statistical Significance Testing.

    ERIC Educational Resources Information Center

    McLean, James E., Ed.; Kaufman, Alan S., Ed.

    1998-01-01

    The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…

  3. Lack of Statistical Significance

    ERIC Educational Resources Information Center

    Kehle, Thomas J.; Bray, Melissa A.; Chafouleas, Sandra M.; Kawano, Takuji

    2007-01-01

    Criticism has been leveled against the use of statistical significance testing (SST) in many disciplines. However, the field of school psychology has been largely devoid of critiques of SST. Inspection of the primary journals in school psychology indicated numerous examples of SST with nonrandom samples and/or samples of convenience. In this…

  4. Statistical or biological significance?

    PubMed

    Saxon, Emma

    2015-01-01

    Oat plants grown at an agricultural research facility produce higher yields in Field 1 than in Field 2, under well fertilised conditions and with similar weather exposure; all oat plants in both fields are healthy and show no sign of disease. In this study, the authors hypothesised that the soil microbial community might be different in each field, and these differences might explain the difference in oat plant growth. They carried out a metagenomic analysis of the 16S ribosomal 'signature' sequences from bacteria in 50 randomly located soil samples in each field to determine the composition of the bacterial community. The study identified >1000 species, most of which were present in both fields. The authors identified two plant growth-promoting species that were significantly reduced in soil from Field 2 (Student's t-test P < 0.05), and concluded that these species might have contributed to reduced yield. PMID:26541972
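
    The comparison described can be sketched as a two-sample t test. This minimal version uses Welch's t statistic with a normal approximation to the two-sided p-value (adequate for n ≈ 50 per field; a t-distribution would be exact); the abundance data are hypothetical.

```python
from math import sqrt
from statistics import NormalDist, mean, variance

def welch_t_test(a, b):
    """Welch's two-sample t statistic with a two-sided p-value.
    The p-value uses a normal approximation, reasonable for n ~ 50."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    t = (mean(a) - mean(b)) / se
    p = 2.0 * NormalDist().cdf(-abs(t))
    return t, p

# Hypothetical abundances of one species in 50 soil samples per field.
field1 = [10.0 + 0.1 * i for i in range(50)]
field2 = [9.0 + 0.1 * i for i in range(50)]
t, p = welch_t_test(field1, field2)
```

    A p-value below 0.05 here flags a statistical difference in abundance; whether that difference matters for oat yield is the separate, biological question the paper's title raises.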

  5. Statistically significant relational data mining :

    SciTech Connect

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  6. Significant results: statistical or clinical?

    PubMed Central

    2016-01-01

    The null hypothesis significance test method is popular in biological and medical research. Many researchers have used this method without exact knowledge of it, even though it has both merits and shortcomings. Readers will learn its shortcomings, as well as several complementary or alternative methods, such as the estimated effect size and the confidence interval. PMID:27066201
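
    The two complementary quantities the abstract points to, effect size and confidence interval, can be computed in a few lines. This sketch uses Cohen's d and a normal-approximation interval for the mean difference; the group data are hypothetical.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    n1, n2 = len(a), len(b)
    sp = sqrt(((n1 - 1) * stdev(a) ** 2 + (n2 - 1) * stdev(b) ** 2)
              / (n1 + n2 - 2))
    return (mean(a) - mean(b)) / sp

def mean_diff_ci(a, b, level=0.95):
    """Normal-approximation confidence interval for the mean difference."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    diff = mean(a) - mean(b)
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff - z * se, diff + z * se

# Hypothetical outcome scores for two groups of 10 patients.
treatment = [12.0 + i for i in range(10)]
control = [10.0 + i for i in range(10)]
d = cohens_d(treatment, control)          # medium effect, d ≈ 0.66
lo, hi = mean_diff_ci(treatment, control)
```

    With only 10 subjects per group, a medium effect size still produces an interval spanning zero, which is exactly the kind of nuance a bare "significant / not significant" verdict hides.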

  7. Statistical significance of the gallium anomaly

    SciTech Connect

    Giunti, Carlo; Laveder, Marco

    2011-06-15

    We calculate the statistical significance of the anomalous deficit of electron neutrinos measured in the radioactive source experiments of the GALLEX and SAGE solar neutrino detectors, taking into account the uncertainty of the detection cross section. We found that the statistical significance of the anomaly is ≈3.0σ. A fit of the data in terms of neutrino oscillations favors at ≈2.7σ short-baseline electron neutrino disappearance with respect to the null hypothesis of no oscillations.

  8. Statistical Significance vs. Practical Significance: An Exploration through Health Education

    ERIC Educational Resources Information Center

    Rosen, Brittany L.; DeMaria, Andrea L.

    2012-01-01

    The purpose of this paper is to examine the differences between statistical and practical significance, including strengths and criticisms of both methods, as well as provide information surrounding the application of various effect sizes and confidence intervals within health education research. Provided are recommendations, explanations and…

  9. Comments on the Statistical Significance Testing Articles.

    ERIC Educational Resources Information Center

    Knapp, Thomas R.

    1998-01-01

    Expresses a "middle-of-the-road" position on statistical significance testing, suggesting that it has its place but that confidence intervals are generally more useful. Identifies 10 errors of omission or commission in the papers reviewed that weaken the positions taken in their discussions. (SLD)

  10. Statistical significance of normalized global alignment.

    PubMed

    Peris, Guillermo; Marzal, Andrés

    2014-03-01

    The comparison of homologous proteins from different species is a first step toward function assignment and reconstruction of species evolution. Though local alignment is mostly used for this purpose, global alignment is important for constructing multiple alignments or phylogenetic trees. However, the statistical significance of global alignments is not completely clear, either lacking a specific statistical model to describe alignments or depending on computationally expensive methods like the Z-score. Recently we presented a normalized global alignment, defined as the best compromise between global alignment cost and length, and showed that this new technique led to better classification results than the Z-score at a much lower computational cost. However, it is necessary to analyze the statistical significance of the normalized global alignment for it to be considered a fully functional algorithm for protein alignment. Experiments with unrelated proteins extracted from the SCOP ASTRAL database showed that normalized global alignment scores can be fitted to a log-normal distribution. This fact, obtained without any theoretical support, can be used to derive the statistical significance of normalized global alignments. Results are summarized in a table with fitted parameters for different scoring schemes. PMID:24400820
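
    The fitting procedure described, a log-normal null fitted to scores of unrelated proteins, can be sketched as follows. Which tail is relevant depends on the scoring scheme (whether high or low normalized scores indicate similarity), so the upper-tail direction and the null scores below are assumptions for illustration.

```python
from math import exp, log
from statistics import NormalDist, mean, stdev

def fit_lognormal(null_scores):
    """Fit a log-normal null by taking logs and fitting a normal."""
    logs = [log(s) for s in null_scores]
    return NormalDist(mean(logs), stdev(logs))

def upper_tail_pvalue(null_dist, observed):
    """P(score >= observed) under the fitted log-normal null."""
    return 1.0 - null_dist.cdf(log(observed))

# Hypothetical scores from unrelated protein pairs, log-symmetric around 1.
null_scores = [exp(0.1 * i) for i in range(-20, 21)]
dist = fit_lognormal(null_scores)
# A score at the null's median should sit at p = 0.5.
p = upper_tail_pvalue(dist, 1.0)
```

    The table of fitted parameters mentioned in the abstract plays the role of `dist` here: once the log-normal parameters for a scoring scheme are known, significance follows from a single CDF evaluation.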

  11. Assessing the statistical significance of periodogram peaks

    NASA Astrophysics Data System (ADS)

    Baluev, R. V.

    2008-04-01

    The least-squares (or Lomb-Scargle) periodogram is a powerful tool that is routinely used in many branches of astronomy to search for periodicities in observational data. The problem of assessing the statistical significance of candidate periodicities for a number of periodograms is considered. Based on results in extreme value theory, improved analytic estimations of false alarm probabilities are given. These include an upper limit to the false alarm probability (or a lower limit to the significance). The estimations are tested numerically in order to establish regions of their practical applicability.
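
    The baseline quantity these results improve upon is easy to state: for the standard-normalized Lomb-Scargle power z of pure Gaussian noise, the single-frequency tail probability is exp(-z), and extending it over M effectively independent frequencies gives the classical false alarm probability, which the extreme-value analysis refines into analytic bounds. The value of M below is a user-supplied assumption, not something this sketch derives.

```python
from math import exp

def single_frequency_tail(z):
    """P(power > z) at one fixed frequency, for standard-normalized
    Lomb-Scargle power under Gaussian noise."""
    return exp(-z)

def classical_fap(z, m_independent):
    """Classical false alarm probability of the highest peak over
    m_independent effectively independent frequencies:
    FAP = 1 - (1 - e^-z)^M  (bounded above by M * e^-z)."""
    return 1.0 - (1.0 - exp(-z)) ** m_independent

# A peak of power z = 10 found while scanning ~1000 frequencies
# is far less significant than its single-frequency tail suggests.
fap = classical_fap(10.0, 1000)   # ≈ 0.044, vs exp(-10) ≈ 4.5e-5
```

    The Bonferroni-style upper bound M·e^(-z) is the "upper limit to the false alarm probability" in its crudest form; the paper's contribution is sharper analytic estimates with known regions of applicability.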

  12. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p -value theory and network analysis, and then we obtain a significance measure of statistical form . Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  13. Social significance of community structure: statistical view.

    PubMed

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form . Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc. PMID:25679651

  14. Statistical Significance of Trends in Exoplanetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Bowman, M.; Blumenthal, S. D.; Loredo, T. J.; UCF Exoplanets Group

    2013-10-01

    Cowan and Agol (2011) and we (Harrington et al. 2007, 2010, 2011, 2012, 2013) have noted that at higher equilibrium temperatures, observed exoplanet fluxes are substantially higher than even the elevated equilibrium temperature predicts. With a substantial increase in the number of atmospheric flux measurements, we can now test the statistical significance of this trend. We can also cast the data on a variety of axes to search further for the physics behind both the jump in flux above about 2000 K and the wide scatter in fluxes at all temperatures. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  15. On the statistical significance of climate trends

    NASA Astrophysics Data System (ADS)

    Franzke, Christian

    2010-05-01

    One of the major problems in climate science is the prediction of future climate change due to anthropogenic greenhouse gas emissions. The earth's climate is not changing in a uniform way because it is a complex nonlinear system of many interacting components. The overall warming trend can be interrupted by cooling periods due to natural variability. Thus, in order to statistically distinguish between internal climate variability and genuine trends one has to assume a certain null model of the climate variability. Traditionally a short-range, and not a long-range, dependent null model is chosen. Here I show evidence for the first time that temperature data at 8 stations across Antarctica are long-range dependent and that the choice of a long-range, rather than a short-range, dependent null model negates the statistical significance of temperature trends at 2 out of 3 stations. These results show the shortcomings of traditional trend analysis and imply that more attention should be given to the correlation structure of climate data, in particular if they are long-range dependent. In this study I use the Empirical Mode Decomposition (EMD) to decompose the univariate temperature time series into a finite number of Intrinsic Mode Functions (IMF) and an instantaneous mean. While there is no unambiguous definition of a trend, in this study I interpret the instantaneous mean as a trend which is possibly nonlinear. The EMD method has been shown to be a powerful method for extracting trends from noisy and nonlinear time series. I will show that this way of identifying trends is superior to the traditional linear least-square fits.
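
    The core of the argument, that the null model's correlation structure controls trend significance, can be illustrated with effective sample sizes. Under a short-range AR(1) null the information in n points shrinks by a constant factor, while under a long-range dependent null with Hurst exponent H it shrinks like a power of n. These are textbook approximations, not the EMD procedure used in the study.

```python
def n_eff_ar1(n, rho):
    """Effective sample size under an AR(1) (short-range) null
    with lag-1 autocorrelation rho."""
    return n * (1.0 - rho) / (1.0 + rho)

def n_eff_lrd(n, hurst):
    """Effective sample size under long-range dependence: the variance
    of the sample mean scales like n**(2H - 2), so n_eff ~ n**(2 - 2H)."""
    return n ** (2.0 - 2.0 * hurst)

# 100 years of monthly data: a moderately persistent AR(1) null still
# leaves hundreds of effective samples, while strong long-range
# dependence (H = 0.9) leaves only a handful.
n = 1200
ar1_eff = n_eff_ar1(n, 0.3)   # ≈ 646 effective samples
lrd_eff = n_eff_lrd(n, 0.9)   # ≈ 4 effective samples
```

    With so few effective degrees of freedom under the long-range dependent null, a trend that looks highly significant against white or AR(1) noise can fall below any reasonable significance threshold, which is the mechanism behind the 2-out-of-3 result above.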

  16. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance

    PubMed Central

    Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children’s growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children’s monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children’s growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children’s growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children’s growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance. PMID:26938742

  17. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    ERIC Educational Resources Information Center

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  18. Reviewer Bias for Statistically Significant Results: A Reexamination.

    ERIC Educational Resources Information Center

    Fagley, N. S.; McKinney, I. Jean

    1983-01-01

    Reexamines the article by Atkinson, Furlong, and Wampold (1982) and questions their conclusion that reviewers were biased toward statistically significant results. A statistical power analysis shows the power of their bogus study was low. Low power in a study reporting nonsignificant findings is a valid reason for recommending not to publish.…

  19. Advances in Testing the Statistical Significance of Mediation Effects

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.

    2006-01-01

    P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
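
    One of the higher-power alternatives discussed in this literature is a bootstrap test of the indirect effect a·b (the product of the x→mediator slope and the mediator→y slope controlling for x). This is a generic sketch with hand-rolled OLS and percentile intervals, on hypothetical data, not any specific author's procedure.

```python
import random

def _slope(x, y):
    """OLS slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def _partial_slope_m(x, m, y):
    """Coefficient of m in the OLS regression y ~ m + x (normal equations)."""
    mx, mm, my = sum(x) / len(x), sum(m) / len(m), sum(y) / len(y)
    sxx = sum((v - mx) ** 2 for v in x)
    smm = sum((v - mm) ** 2 for v in m)
    smx = sum((a - mm) * (b - mx) for a, b in zip(m, x))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (smy * sxx - sxy * smx) / (smm * sxx - smx ** 2)

def indirect_effect(x, m, y):
    """a*b: (x -> m slope) times (m -> y slope controlling for x)."""
    return _slope(x, m) * _partial_slope_m(x, m, y)

def bootstrap_ci(x, m, y, n_boot=1000, level=0.95, seed=0):
    """Percentile bootstrap interval for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# Hypothetical mediated data: x -> m (a ≈ 2), m -> y (b ≈ 3).
x = [float(i) for i in range(30)]
m = [2.0 * xi + 1.5 * ((-1) ** i) for i, xi in enumerate(x)]
y = [3.0 * mi + ((i % 3) - 1.0) for i, mi in enumerate(m)]
ab = indirect_effect(x, m, y)        # close to 2 * 3 = 6
lo, hi = bootstrap_ci(x, m, y)       # interval excluding zero
```

    An interval excluding zero is the bootstrap's answer to the normal-theory test; its appeal in the simulation literature is that it makes no normality assumption about the sampling distribution of a·b, which is skewed in small samples.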

  20. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
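
    A shuffling version of such a Monte Carlo test is easy to sketch: permuting the regime sequence preserves each regime's overall frequency while destroying temporal order, yielding a null distribution for any transition count. This is a simplified stand-in for the simulation design in the paper, with a toy regime sequence.

```python
import random

def transition_count(seq, i, j):
    """Number of i -> j transitions in a regime sequence."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a == i and b == j)

def mc_transition_pvalue(seq, i, j, n_sim=2000, seed=42):
    """One-sided Monte Carlo p-value: is the observed i -> j count
    unusually high compared with shuffled sequences?"""
    rng = random.Random(seed)
    observed = transition_count(seq, i, j)
    pool = list(seq)
    exceed = 0
    for _ in range(n_sim):
        rng.shuffle(pool)
        if transition_count(pool, i, j) >= observed:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)

# Two persistent flow regimes: 50 days in regime 0, then 50 in regime 1.
seq = [0] * 50 + [1] * 50
p = mc_transition_pvalue(seq, 0, 0)   # self-transitions far above chance
```

    The same machinery handles the small-cluster case the abstract highlights: because the null distribution is simulated rather than approximated, it remains valid when one regime contains only a handful of maps.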

  1. Decadal power in land air temperatures: Is it statistically significant?

    NASA Astrophysics Data System (ADS)

    Thejll, Peter A.

    2001-12-01

    The geographical distribution and properties of the well-known 10-11 year signal in terrestrial temperature records is investigated. By analyzing the Global Historical Climate Network data for surface air temperatures we verify that the signal is strongest in North America and is similar in nature to that reported earlier by R. G. Currie. The decadal signal is statistically significant for individual stations, but it is not possible to show that the signal is statistically significant globally, using strict tests. In North America, during the twentieth century, the decadal variability in the solar activity cycle is associated with the decadal part of the North Atlantic Oscillation index series in such a way that both of these signals correspond to the same spatial pattern of cooling and warming. A method for testing statistical results with Monte Carlo trials on data fields with specified temporal structure and specific spatial correlation retained is presented.

  2. Your Chi-Square Test Is Statistically Significant: Now What?

    ERIC Educational Resources Information Center

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
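
    The first of the four follow-up approaches, calculating residuals, can be sketched with Haberman's adjusted standardized residuals: cells with |r| greater than about 2 are the likely sources of a significant chi-square. The contingency table below is hypothetical.

```python
from math import sqrt

def adjusted_residuals(table):
    """Haberman's adjusted standardized residuals for a contingency table:
    r_ij = (O - E) / sqrt(E * (1 - row/N) * (1 - col/N))."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    out = []
    for i, row in enumerate(table):
        out.append([])
        for j, obs in enumerate(row):
            e = rows[i] * cols[j] / n
            denom = sqrt(e * (1 - rows[i] / n) * (1 - cols[j] / n))
            out[-1].append((obs - e) / denom)
    return out

# A significant 2x2 chi-square: the residuals show the diagonal
# cells are driving the association (|r| ≈ 4.47).
res = adjusted_residuals([[30, 10], [10, 30]])
```

    Unlike raw residuals (O - E)/sqrt(E), the adjusted form is approximately standard normal under independence, so the familiar ±2 cutoff applies cell by cell.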

  3. A Comparison of Statistical Significance Tests for Selecting Equating Functions

    ERIC Educational Resources Information Center

    Moses, Tim

    2009-01-01

    This study compared the accuracies of nine previously proposed statistical significance tests for selecting identity, linear, and equipercentile equating functions in an equivalent groups equating design. The strategies included likelihood ratio tests for the loglinear models of tests' frequency distributions, regression tests, Kolmogorov-Smirnov…

  4. Assigning statistical significance to proteotypic peptides via database searches

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2011-01-01

    Querying MS/MS spectra against a database containing only proteotypic peptides reduces data analysis time due to reduction of database size. Despite the speed advantage, this search strategy is challenged by issues of statistical significance and coverage. The former requires separating systematically significant identifications from less confident identifications, while the latter arises when the underlying peptide is not present, due to single amino acid polymorphisms (SAPs) or post-translational modifications (PTMs), in the proteotypic peptide libraries searched. To address both issues simultaneously, we have extended RAId’s knowledge database to include proteotypic information, utilized RAId’s statistical strategy to assign statistical significance to proteotypic peptides, and modified RAId’s programs to allow for consideration of proteotypic information during database searches. The extended database alleviates the coverage problem since all annotated modifications, even those that occurred within proteotypic peptides, may be considered. Taking into account the likelihoods of observation, the statistical strategy of RAId provides accurate E-value assignments regardless of whether a candidate peptide is proteotypic or not. The advantage of including proteotypic information is evidenced by its superior retrieval performance when compared to regular database searches. PMID:21055489

  5. Assigning statistical significance to proteotypic peptides via database searches.

    PubMed

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2011-02-01

    Querying MS/MS spectra against a database containing only proteotypic peptides reduces data analysis time due to reduction of database size. Despite the speed advantage, this search strategy is challenged by issues of statistical significance and coverage. The former requires separating systematically significant identifications from less confident identifications, while the latter arises when the underlying peptide is not present, due to single amino acid polymorphisms (SAPs) or post-translational modifications (PTMs), in the proteotypic peptide libraries searched. To address both issues simultaneously, we have extended RAId's knowledge database to include proteotypic information, utilized RAId's statistical strategy to assign statistical significance to proteotypic peptides, and modified RAId's programs to allow for consideration of proteotypic information during database searches. The extended database alleviates the coverage problem since all annotated modifications, even those that occurred within proteotypic peptides, may be considered. Taking into account the likelihoods of observation, the statistical strategy of RAId provides accurate E-value assignments regardless of whether a candidate peptide is proteotypic or not. The advantage of including proteotypic information is evidenced by its superior retrieval performance when compared to regular database searches. PMID:21055489

  6. Addition of Cryoprotectant Significantly Alters the Epididymal Sperm Proteome

    PubMed Central

    Yoon, Sung-Jae; Rahman, Md Saidur; Kwon, Woo-Sung; Park, Yoo-Jin; Pang, Myung-Geol

    2016-01-01

    Although cryopreservation has been developed and optimized over the past decades, it causes various stresses, including cold shock, osmotic stress, and ice crystal formation, thereby reducing fertility. During cryopreservation, addition of a cryoprotective agent (CPA) is crucial for protecting spermatozoa from freezing damage. However, the intrinsic toxicity and osmotic stress induced by CPA cause damage to spermatozoa. To identify the effects of CPA addition during cryopreservation, we assessed the motility (%), motion kinematics, capacitation status, and viability of epididymal spermatozoa using computer-assisted sperm analysis and Hoechst 33258/chlortetracycline fluorescence staining. Moreover, the effects of CPA addition were also demonstrated at the proteome level using two-dimensional electrophoresis. Our results demonstrated that CPA addition significantly reduced sperm motility (%), curvilinear velocity, viability (%), and non-capacitated spermatozoa, whereas straightness and acrosome-reacted spermatozoa increased significantly (p < 0.05). Ten proteins were differentially expressed (two decreased and eight increased) (>3 fold, p < 0.05) after CPA, whereas NADH dehydrogenase flavoprotein 2, f-actin-capping protein subunit beta, superoxide dismutase 2, and outer dense fiber protein 2 were associated with several important signaling pathways (p < 0.05). The present study provides a mechanistic basis for specific cryostresses and potential markers of CPA-induced stress. Therefore, these findings might inform the development of safe biomaterials for cryopreservation and provide a foundation for sperm cryopreservation. PMID:27031703

  7. Statistical significance of climate sensitivity predictors obtained by data mining

    NASA Astrophysics Data System (ADS)

    Caldwell, Peter M.; Bretherton, Christopher S.; Zelinka, Mark D.; Klein, Stephen A.; Santer, Benjamin D.; Sanderson, Benjamin M.

    2014-03-01

    Several recent efforts to estimate Earth's equilibrium climate sensitivity (ECS) focus on identifying quantities in the current climate which are skillful predictors of ECS yet can be constrained by observations. This study automates the search for observable predictors using data from phase 5 of the Coupled Model Intercomparison Project. The primary focus of this paper is assessing statistical significance of the resulting predictive relationships. Failure to account for dependence between models, variables, locations, and seasons is shown to yield misleading results. A new technique for testing the field significance of data-mined correlations which avoids these problems is presented. Using this new approach, all 41,741 relationships we tested were found to be explainable by chance. This leads us to conclude that data mining is best used to identify potential relationships which are then validated or discarded using physically based hypothesis testing.
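
    A permutation analogue of the field-significance idea: instead of testing each data-mined correlation on its own, compare the maximum absolute correlation across all candidate predictors against its distribution under permutations of the predictand, which automatically accounts for how many quantities were searched. This is a generic sketch with toy columns, not the specific CMIP5 procedure or its treatment of inter-model dependence.

```python
import random
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def field_significance(predictors, target, n_perm=500, seed=1):
    """p-value for the best |correlation| found by searching
    many candidate predictors against one target."""
    rng = random.Random(seed)
    observed = max(abs(pearson(col, target)) for col in predictors)
    shuffled = list(target)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        null_max = max(abs(pearson(col, shuffled)) for col in predictors)
        if null_max >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Toy data: one genuinely predictive column among scrambled ones.
target = [float(i) for i in range(20)]
predictors = [target[:],
              [float((7 * i) % 20) for i in range(20)],
              [float((11 * i + 3) % 20) for i in range(20)]]
p = field_significance(predictors, target)
```

    The key property is that the null distribution is of the *maximum* over the whole search, so an apparently impressive best correlation found by data mining is penalized for every correlation that was tried, which is how all 41,741 relationships in the study could turn out to be explainable by chance.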

  8. Statistical significance across multiple optimization models for community partition

    NASA Astrophysics Data System (ADS)

    Li, Ju; Li, Hui-Jia; Mao, He-Jin; Chen, Junhua

    2016-05-01

    The study of community structure is an important problem in a wide range of applications, which can help us understand the real network system deeply. However, due to the existence of random factors and error edges in real networks, how to measure the significance of community structure efficiently is a crucial question. In this paper, we present a novel statistical framework computing the significance of community structure across multiple optimization methods. Different from the universal approaches, we calculate the similarity between a given node and its leader and employ the distribution of link tightness to derive the significance score, instead of a direct comparison to a randomized model. Based on the distribution of community tightness, a new “p-value” form significance measure is proposed for community structure analysis. Specifically, the well-known approaches and their corresponding quality functions are unified into a novel general formulation, which facilitates a detailed comparison across them. To determine the position of leaders and their corresponding followers, an efficient algorithm is proposed based on spectral theory. Finally, we apply the significance analysis to several famous benchmark networks, and the good performance verified the effectiveness and efficiency of our framework.

  9. American Vocational Education Research Association Members' Perceptions of Statistical Significance Tests and Other Statistical Controversies.

    ERIC Educational Resources Information Center

    Gordon, Howard R. D.

    A random sample of 113 members of the American Vocational Education Research Association (AVERA) was surveyed to obtain baseline information regarding AVERA members' perceptions of statistical significance tests. The Psychometrics Group Instrument was used to collect data from participants. Of those surveyed, 67% were male, 93% had earned a…

  10. Weak additivity principle for current statistics in d dimensions.

    PubMed

    Pérez-Espigares, C; Garrido, P L; Hurtado, P I

    2016-04-01

    The additivity principle (AP) allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Here we extend this conjecture to general d-dimensional driven diffusive systems, and validate its predictions against both numerical simulations of rare events and microscopic exact calculations of three paradigmatic models of diffusive transport in d=2. Crucially, the existence of a structured current vector field at the fluctuating level, coupled to the local mobility, turns out to be essential to understand current statistics in d>1. We prove that, when compared to the straightforward extension of the AP to high d, the so-called weak AP always yields a better minimizer of the macroscopic fluctuation theory action for current statistics. PMID:27176236

  11. Weak additivity principle for current statistics in d dimensions

    NASA Astrophysics Data System (ADS)

    Pérez-Espigares, C.; Garrido, P. L.; Hurtado, P. I.

    2016-04-01

    The additivity principle (AP) allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Here we extend this conjecture to general d-dimensional driven diffusive systems, and validate its predictions against both numerical simulations of rare events and microscopic exact calculations of three paradigmatic models of diffusive transport in d=2. Crucially, the existence of a structured current vector field at the fluctuating level, coupled to the local mobility, turns out to be essential to understand current statistics in d>1. We prove that, when compared to the straightforward extension of the AP to high d, the so-called weak AP always yields a better minimizer of the macroscopic fluctuation theory action for current statistics.

  12. Assessing statistical significance in multivariable genome wide association analysis

    PubMed Central

    Buzdugan, Laura; Kalisch, Markus; Navarro, Arcadi; Schunk, Daniel; Fehr, Ernst; Bühlmann, Peter

    2016-01-01

    Motivation: Although Genome Wide Association Studies (GWAS) genotype a very large number of single nucleotide polymorphisms (SNPs), the data are often analyzed one SNP at a time. The low predictive power of single SNPs, coupled with the high significance threshold needed to correct for multiple testing, greatly decreases the power of GWAS. Results: We propose a procedure in which all the SNPs are analyzed in a multiple generalized linear model, and we show its use for extremely high-dimensional datasets. Our method yields P-values for assessing significance of single SNPs or groups of SNPs while controlling for all other SNPs and the family wise error rate (FWER). Thus, our method tests whether or not a SNP carries any additional information about the phenotype beyond that available by all the other SNPs. This rules out spurious correlations between phenotypes and SNPs that can arise from marginal methods because the ‘spuriously correlated’ SNP merely happens to be correlated with the ‘truly causal’ SNP. In addition, the method offers a data driven approach to identifying and refining groups of SNPs that jointly contain informative signals about the phenotype. We demonstrate the value of our method by applying it to the seven diseases analyzed by the Wellcome Trust Case Control Consortium (WTCCC). We show, in particular, that our method is also capable of finding significant SNPs that were not identified in the original WTCCC study, but were replicated in other independent studies. Availability and implementation: Reproducibility of our research is supported by the open-source Bioconductor package hierGWAS. Contact: peter.buehlmann@stat.math.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153677

  13. Statistical tests of additional plate boundaries from plate motion inversions

    NASA Technical Reports Server (NTRS)

    Stein, S.; Gordon, R. G.

    1984-01-01

    The application of the F-ratio test, a standard statistical technique, to the results of relative plate motion inversions has been investigated. The method tests whether the improvement in fit of the model to the data resulting from the addition of another plate to the model is greater than that expected purely by chance. This approach appears to be useful in determining whether additional plate boundaries are justified. Previous results have been confirmed favoring separate North American and South American plates with a boundary located between 30 N and the equator. Using Chase's global relative motion data, it is shown that in addition to separate West African and Somalian plates, separate West Indian and Australian plates, with a best-fitting boundary between 70 E and 90 E, can be resolved. These results are generally consistent with the observation that the Indian plate's internal deformation extends somewhat westward of the Ninetyeast Ridge. The relative motion pole is similar to Minster and Jordan's and predicts the NW-SE compression observed in earthquake mechanisms near the Ninetyeast Ridge.
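
    The nested-model comparison described above can be sketched in a few lines. This is a generic F-ratio computation, not the authors' actual inversion code; the chi-square values and degrees of freedom below are hypothetical.

```python
def f_ratio(chi2_reduced, dof_reduced, chi2_full, dof_full):
    """F statistic for comparing a reduced plate-motion model against a full
    model with one additional plate; larger F favors the extra boundary."""
    extra_params = dof_reduced - dof_full          # parameters the new plate adds
    numerator = (chi2_reduced - chi2_full) / extra_params
    denominator = chi2_full / dof_full
    return numerator / denominator

# Hypothetical misfits: adding a plate (3 extra Euler-vector parameters)
# drops chi-square from 150.0 to 120.0, leaving 60 degrees of freedom.
F = f_ratio(chi2_reduced=150.0, dof_reduced=63, chi2_full=120.0, dof_full=60)
```

    The computed F is then compared against the upper tail of an F(extra_params, dof_full) distribution; only when F exceeds the critical value (about 2.76 for F(3, 60) at the 5% level, from tables) is the additional boundary considered justified.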

  14. Statistical controversies in clinical research: statistical significance-too much of a good thing ….

    PubMed

    Buyse, M; Hurvitz, S A; Andre, F; Jiang, Z; Burris, H A; Toi, M; Eiermann, W; Lindsay, M-A; Slamon, D

    2016-05-01

    The use and interpretation of P values is a matter of debate in applied research. We argue that P values are useful as a pragmatic guide to interpret the results of a clinical trial, not as a strict binary boundary that separates real treatment effects from lack thereof. We illustrate our point using the result of BOLERO-1, a randomized, double-blind trial evaluating the efficacy and safety of adding everolimus to trastuzumab and paclitaxel as first-line therapy for HER2+ advanced breast cancer. In this trial, the benefit of everolimus was seen only in the predefined subset of patients with hormone receptor-negative breast cancer at baseline (progression-free survival hazard ratio = 0.66, P = 0.0049). A strict interpretation of this finding, based on complex 'alpha splitting' rules to assess statistical significance, led to the conclusion that the benefit of everolimus was not statistically significant either overall or in the subset. We contend that this interpretation does not do justice to the data, and we argue that the benefit of everolimus in hormone receptor-negative breast cancer is both statistically compelling and clinically relevant. PMID:26861602

  15. Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?

    NASA Astrophysics Data System (ADS)

    Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui

    2015-08-01

    Artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the application of feed-forward back-propagation networks using large-scale predictor variables derived from both the ERA-Interim reanalysis data and present day/future GCM data. The predictors are first selected over different grid boxes surrounding the Bangkok region and then screened by using principal component analysis (PCA) to filter the best correlated predictors for ANN training. The downscaled reanalysis results for the present day climate show good agreement against station precipitation, with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of precipitation for the rainy season over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases of wetness. These findings will be useful for policy makers in considering adaptation measures for flooding, such as whether the current drainage network system is sufficient to meet the changing climate, and in planning for a range of related adaptation/mitigation measures.

  16. Testing for Additivity at Select Mixture Groups of Interest Based on Statistical Equivalence Testing Methods

    SciTech Connect

    Stork, LeAnna M.; Gennings, Chris; Carchman, Richard; Carter, Jr., Walter H.; Pounds, Joel G.; Mumtaz, Moiz

    2006-12-01

    Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice mixture components in the low-dose region, particularly subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence testing literature where the null hypothesis of interaction is rejected for the alternative hypothesis of additivity when data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In this scenario, failure to reject may be due to lack of statistical power making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
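
    The equivalence-testing logic described above can be sketched with a two one-sided tests (TOST) procedure. This is an illustrative simplification, not the authors' exact method: it uses a large-sample normal approximation (Python's stdlib NormalDist) rather than the model-based procedure in the paper, and the effect, standard error, and margin values are hypothetical.

```python
from statistics import NormalDist

def tost_equivalence(effect, se, margin, alpha=0.05):
    """Two one-sided tests (TOST): conclude additivity only if the observed
    deviation from the additivity prediction is significantly above -margin
    AND significantly below +margin. Normal approximation for large samples."""
    z = NormalDist()
    p_lower = 1 - z.cdf((effect + margin) / se)   # H0: effect <= -margin
    p_upper = 1 - z.cdf((margin - effect) / se)   # H0: effect >= +margin
    p = max(p_lower, p_upper)
    return p, p < alpha

# Hypothetical example: observed deviation of 0.5 units from the additivity
# prediction, standard error 0.4, biologically chosen margin of 2.0 units.
p, additive = tost_equivalence(effect=0.5, se=0.4, margin=2.0)
```

    Because additivity sits in the alternative hypothesis, the false-positive rate of claiming additivity is controlled at alpha, which is the key reversal relative to the usual interaction test.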

  17. Effect size, confidence interval and statistical significance: a practical guide for biologists.

    PubMed

    Nakagawa, Shinichi; Cuthill, Innes C

    2007-11-01

    Null hypothesis significance testing (NHST) is the dominant statistical approach in biology, although it has many, frequently unappreciated, problems. Most importantly, NHST does not provide us with two crucial pieces of information: (1) the magnitude of an effect of interest, and (2) the precision of the estimate of the magnitude of that effect. All biologists should be ultimately interested in biological importance, which may be assessed using the magnitude of an effect, but not its statistical significance. Therefore, we advocate presentation of measures of the magnitude of effects (i.e. effect size statistics) and their confidence intervals (CIs) in all biological journals. Combined use of an effect size and its CIs enables one to assess the relationships within data more effectively than the use of p values, regardless of statistical significance. In addition, routine presentation of effect sizes will encourage researchers to view their results in the context of previous research and facilitate the incorporation of results into future meta-analysis, which has been increasingly used as the standard method of quantitative review in biology. In this article, we extensively discuss two dimensionless (and thus standardised) classes of effect size statistics: d statistics (standardised mean difference) and r statistics (correlation coefficient), because these can be calculated from almost all study designs and also because their calculations are essential for meta-analysis. However, our focus on these standardised effect size statistics does not mean unstandardised effect size statistics (e.g. mean difference and regression coefficient) are less important. We provide potential solutions for four main technical problems researchers may encounter when calculating effect size and CIs: (1) when covariates exist, (2) when bias in estimating effect size is possible, (3) when data have non-normal error structure and/or variances, and (4) when data are non
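
    As a concrete illustration of the article's recommendation, a standardised mean difference and its confidence interval can be computed directly from summary statistics. This is a minimal sketch with hypothetical numbers; the standard-error formula is the common large-sample approximation for Cohen's d.

```python
from math import sqrt

def cohens_d_with_ci(mean1, mean2, sd1, sd2, n1, n2, z=1.96):
    """Standardised mean difference (Cohen's d) with an approximate 95% CI,
    using the usual large-sample variance approximation for d."""
    s_pooled = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical two-group comparison:
d, (lo, hi) = cohens_d_with_ci(mean1=10.0, mean2=8.0, sd1=2.0, sd2=2.0,
                               n1=30, n2=30)
# The interval, not just d or a p value, conveys the precision of the
# estimate -- the second piece of information NHST fails to provide.
```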

  18. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    ERIC Educational Resources Information Center

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  19. Outcomes of pharmacological management of nocturia with non-antidiuretic agents: does statistically significant equal clinically significant?

    PubMed

    Smith, Ariana L; Wein, Alan J

    2011-05-01

    To evaluate the statistical and clinical efficacy of the pharmacological treatments of nocturia using non-antidiuretic agents. A literature review of treatments of nocturia specifically addressing the impact of alpha blockers, 5-alpha reductase inhibitors (5ARI) and antimuscarinics on reduction in nocturnal voids. Despite commonly reported statistically significant results, nocturia has shown a poor clinical response to traditional therapies for benign prostatic hyperplasia including alpha blockers and 5ARI. Similarly, nocturia has shown a poor clinical response to traditional therapies for overactive bladder including antimuscarinics. Statistical success has been achieved in some groups with a variety of alpha blockers and antimuscarinic agents, but the clinical significance of these changes is doubtful. It is likely that other types of therapy will need to be employed in order to achieve a clinically significant reduction in nocturia. PMID:21518417

  20. Uses and Abuses of Statistical Significance Tests and Other Statistical Resources: A Comparative Study

    ERIC Educational Resources Information Center

    Monterde-i-Bort, Hector; Frias-Navarro, Dolores; Pascual-Llobell, Juan

    2010-01-01

    The empirical study we present here deals with a pedagogical issue that has not been thoroughly explored up until now in our field. Previous empirical studies in other sectors have identified the opinions of researchers about this topic, showing that completely unacceptable interpretations have been made of significance tests and other statistical…

  1. Identification of Statistically Significant Differences between Standard Scores on the Woodcock Reading Mastery Tests.

    ERIC Educational Resources Information Center

    Simpson, Robert G.

    1981-01-01

    Occasionally, differences in test scores seem to indicate that a student performs much better in one reading area than in another when, in reality, the differences may not be statistically significant. The author presents a table in which statistically significant differences between Woodcock test standard scores are identified. (Author)

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  3. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  4. The Importance of Invariance Procedures as against Tests of Statistical Significance.

    ERIC Educational Resources Information Center

    Fish, Larry

    A growing controversy surrounds the strict interpretation of statistical significance tests in social research. Statistical significance tests fail in particular to provide estimates for the stability of research results. Methods that do provide such estimates are known as invariance or cross-validation procedures. Invariance analysis is largely…

  5. A Review of Post-1994 Literature on Whether Statistical Significance Tests Should Be Banned.

    ERIC Educational Resources Information Center

    Sullivan, Jeremy R.

    This paper summarizes the literature regarding statistical significance testing with an emphasis on: (1) the post-1994 literature in various disciplines; (2) alternatives to statistical significance testing; and (3) literature exploring why researchers have demonstrably failed to be influenced by the 1994 American Psychological Association…

  6. Performance evaluation of hydrological models: Statistical significance for reducing subjectivity in goodness-of-fit assessments

    NASA Astrophysics Data System (ADS)

    Ritter, Axel; Muñoz-Carpena, Rafael

    2013-02-01

    Success in the use of computer models for simulating environmental variables and processes requires objective model calibration and verification procedures. Several methods for quantifying the goodness-of-fit of observations against model-calculated values have been proposed, but none of them is free of limitations and they are often ambiguous. When a single indicator is used it may lead to incorrect verification of the model. Instead, a combination of graphical results, absolute value error statistics (i.e. root mean square error), and normalized goodness-of-fit statistics (i.e. Nash-Sutcliffe Efficiency coefficient, NSE) is currently recommended. Interpretation of NSE values is often subjective, and may be biased by the magnitude and number of data points, data outliers and repeated data. The statistical significance of the performance statistics is an aspect generally ignored that helps in reducing subjectivity in the proper interpretation of the model performance. In this work, approximated probability distributions for two common indicators (NSE and root mean square error) are derived with bootstrapping (block bootstrapping when dealing with time series), followed by bias-corrected and accelerated calculation of confidence intervals. Hypothesis testing of the indicators exceeding threshold values is proposed in a unified framework for statistically accepting or rejecting the model performance. It is illustrated how model performance is not linearly related with NSE, which is critical for its proper interpretation. Additionally, the sensitivity of the indicators to model bias, outliers and repeated data is evaluated. The potential of the difference between root mean square error and mean absolute error for detecting outliers is explored, showing that this may be considered a necessary but not a sufficient condition of outlier presence. The usefulness of the approach for the evaluation of model performance is illustrated with case studies including those with
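
    The resampling idea above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it uses plain pairs resampling with a percentile interval, whereas the paper uses block bootstrapping for time series and bias-corrected and accelerated intervals; the observation and simulation series are hypothetical.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def bootstrap_nse_ci(obs, sim, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for NSE by resampling
    observation-simulation pairs (degenerate resamples are skipped)."""
    rng = random.Random(seed)
    pairs = list(zip(obs, sim))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        o, s = zip(*sample)
        if len(set(o)) > 1:          # need non-zero observed variance
            stats.append(nse(o, s))
    stats.sort()
    k = len(stats)
    return stats[int(alpha / 2 * k)], stats[int((1 - alpha / 2) * k) - 1]

obs = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0]
sim = [1.1, 1.9, 3.2, 3.8, 5.1, 4.2, 2.9, 2.1]
lo, hi = bootstrap_nse_ci(obs, sim)
# Accept the model at this level only if `lo` exceeds the chosen NSE
# threshold (e.g. 0.65), turning a subjective judgement into a test.
```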

  7. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  8. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
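
    A minimal sketch of the bootstrap procedure described, with the Euclidean distance as the test statistic. The pooling-and-resampling scheme shown here is one standard way to generate the null distribution and may differ in detail from the author's implementation; the individual histograms are hypothetical three-bin examples.

```python
import random
from math import sqrt

def summary_hist(hists):
    """Element-wise mean of individual histograms (lists of bin fractions)."""
    n = len(hists)
    return [sum(h[i] for h in hists) / n for i in range(len(hists[0]))]

def euclidean(h1, h2):
    return sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def bootstrap_hist_test(group_a, group_b, n_boot=2000, seed=7):
    """P value for the difference between two summary histograms: pool the
    individual histograms, redraw two groups of the original sizes, and see
    how often a resampled distance reaches the observed one."""
    rng = random.Random(seed)
    observed = euclidean(summary_hist(group_a), summary_hist(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_boot):
        a = [rng.choice(pooled) for _ in group_a]
        b = [rng.choice(pooled) for _ in group_b]
        if euclidean(summary_hist(a), summary_hist(b)) >= observed:
            count += 1
    return observed, count / n_boot

# Hypothetical individual histograms for two cloud-object size categories:
small = [[0.5, 0.3, 0.2], [0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.6, 0.2, 0.2]]
large = [[0.2, 0.3, 0.5], [0.1, 0.4, 0.5], [0.2, 0.4, 0.4], [0.1, 0.3, 0.6]]
dist, p = bootstrap_hist_test(small, large)
```

    Swapping `euclidean` for a Jeffries-Matusita or Kuiper distance changes only the test statistic, not the resampling logic, which is how the paper compares the three.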

  9. The Utility of Statistical Significance Testing in Psychological and Educational Research: A Review of Recent Literature and Proposed Alternatives.

    ERIC Educational Resources Information Center

    Sullivan, Jeremy R.

    2001-01-01

    Summarizes the post-1994 literature in psychology and education regarding statistical significance testing, emphasizing limitations and defenses of statistical testing and alternatives or supplements to statistical significance testing. (SLD)

  10. There's more than one way to conduct a replication study: Beyond statistical significance.

    PubMed

    Anderson, Samantha F; Maxwell, Scott E

    2016-03-01

    As the field of psychology struggles to trust published findings, replication research has begun to become more of a priority to both scientists and journals. With this increasing emphasis placed on reproducibility, it is essential that replication studies be capable of advancing the field. However, we argue that many researchers have been only narrowly interpreting the meaning of replication, with studies being designed with a simple statistically significant or nonsignificant results framework in mind. Although this interpretation may be desirable in some cases, we develop a variety of additional "replication goals" that researchers could consider when planning studies. Even if researchers are aware of these goals, we show that they are rarely used in practice, as results are typically analyzed in a manner only appropriate to a simple significance test. We discuss each goal conceptually, explain appropriate analysis procedures, and provide 1 or more examples to illustrate these analyses in practice. We hope that these various goals will allow researchers to develop a more nuanced understanding of replication that can be flexible enough to answer the various questions that researchers might seek to understand. PMID:26214497

  11. Statistical significance of task related deep brain EEG dynamic changes in the time-frequency domain.

    PubMed

    Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P

    2013-01-01

    We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas. PMID:24109865
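
    The relative power change at the heart of the procedure is simple to state in code. A minimal sketch; the band-power numbers are hypothetical, and in the actual procedure the statistical significance of such changes is then assessed across trials in the time-frequency domain.

```python
def relative_power_change(power, baseline_power):
    """ERD/ERS as percent change of band power relative to a pre-stimulus
    baseline: negative values = desynchronization (ERD), positive = ERS."""
    return 100.0 * (power - baseline_power) / baseline_power

# Hypothetical alpha-band power before and after an oddball stimulus:
erd = relative_power_change(power=8.0, baseline_power=10.0)   # -20.0 (ERD)
ers = relative_power_change(power=13.0, baseline_power=10.0)  # +30.0 (ERS)
```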

  12. Clinical relevance vs. statistical significance: Using neck outcomes in patients with temporomandibular disorders as an example.

    PubMed

    Armijo-Olivo, Susan; Warren, Sharon; Fuentes, Jorge; Magee, David J

    2011-12-01

    Statistical significance has been used extensively to evaluate the results of research studies. Nevertheless, it offers only limited information to clinicians. The assessment of clinical relevance can facilitate the interpretation of the research results into clinical practice. The objective of this study was to explore different methods to evaluate the clinical relevance of the results using a cross-sectional study as an example comparing different neck outcomes between subjects with temporomandibular disorders and healthy controls. Subjects were compared for head and cervical posture, maximal cervical muscle strength, endurance of the cervical flexor and extensor muscles, and electromyographic activity of the cervical flexor muscles during the CranioCervical Flexion Test (CCFT). The evaluation of clinical relevance of the results was performed based on the effect size (ES), minimal important difference (MID), and clinical judgement. The results of this study show that it is possible to have statistical significance without having clinical relevance, to have both statistical significance and clinical relevance, to have clinical relevance without having statistical significance, or to have neither statistical significance nor clinical relevance. The evaluation of clinical relevance in clinical research is crucial to simplify the transfer of knowledge from research into practice. Clinical researchers should present the clinical relevance of their results. PMID:21658987

  13. Impact of criticism of null-hypothesis significance testing on statistical reporting practices in conservation biology.

    PubMed

    Fidler, Fiona; Burgman, Mark A; Cumming, Geoff; Buttrose, Robert; Thomason, Neil

    2006-10-01

    Over the last decade, criticisms of null-hypothesis significance testing have grown dramatically, and several alternative practices, such as confidence intervals, information theoretic, and Bayesian methods, have been advocated. Have these calls for change had an impact on the statistical reporting practices in conservation biology? In 2000 and 2001, 92% of sampled articles in Conservation Biology and Biological Conservation reported results of null-hypothesis tests. In 2005 this figure dropped to 78%. There were corresponding increases in the use of confidence intervals, information theoretic, and Bayesian techniques. Of those articles reporting null-hypothesis testing--which still easily constitute the majority--very few report statistical power (8%) and many misinterpret statistical nonsignificance as evidence for no effect (63%). Overall, results of our survey show some improvements in statistical practice, but further efforts are clearly required to move the discipline toward improved practices. PMID:17002771

  14. Statistical Significance of Long-Range `Optimal Climate Normal' Temperature and Precipitation Forecasts.

    NASA Astrophysics Data System (ADS)

    Wilks, Daniel S.

    1996-04-01

    A simple approach to long-range forecasting of monthly or seasonal quantities is as the average of observations over some number of the most recent years. Finding this 'optimal climate normal' (OCN) involves examining the relationships between the observed variable and averages of its values over the previous one to 30 years and selecting the averaging period yielding the best results. This procedure involves a multiplicity of comparisons, which will lead to misleadingly positive results for developmental data. The statistical significance of these OCNs is assessed here using a resampling procedure, in which time series of U.S. Climate Division data are repeatedly shuffled to produce statistical distributions of forecast performance measures, under the null hypothesis that the OCNs exhibit no predictive skill. Substantial areas in the United States are found for which forecast performance appears to be significantly better than would occur by chance. Another complication in the assessment of the statistical significance of the OCNs derives from the spatial correlation exhibited by the data. Because of this correlation, instances of Type I errors (false rejections of local null hypotheses) will tend to occur with spatial coherency and accordingly have the potential to be confused with regions for which there may be real predictability. The 'field significance' of the collections of local tests is also assessed here by simultaneously and coherently shuffling the time series for the Climate Divisions. Areas exhibiting significant local tests are large enough to conclude that seasonal OCN temperature forecasts exhibit significant skill over parts of the United States for all seasons except SON, OND, and NDJ, and that seasonal OCN precipitation forecasts are significantly skillful only in the fall.
Statistical significance is weaker for monthly than for seasonal OCN temperature forecasts, and the monthly OCN precipitation forecasts do not exhibit significant predictive
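
    The selection-plus-shuffling logic can be sketched as follows. This is an illustrative reconstruction, not Wilks's code: hindcast skill is scored here by mean squared error, the shuffle count is small for brevity, the example series is synthetic, and the spatially coherent shuffling used for field significance is omitted.

```python
import random

def ocn_error(series, k):
    """Mean squared hindcast error when each year is forecast as the mean
    of the k preceding years (the OCN with averaging period k)."""
    errs = [(series[t] - sum(series[t - k:t]) / k) ** 2
            for t in range(k, len(series))]
    return sum(errs) / len(errs)

def best_ocn(series, max_k=10):
    """Averaging period with the smallest hindcast error; trying many
    periods is the multiplicity that inflates apparent skill."""
    return min(range(1, max_k + 1), key=lambda k: ocn_error(series, k))

def ocn_p_value(series, max_k=10, n_shuffle=500, seed=3):
    """Shuffle the series to build the null distribution of the best
    achievable OCN error under 'no predictive skill'."""
    rng = random.Random(seed)
    observed = min(ocn_error(series, k) for k in range(1, max_k + 1))
    work = list(series)
    count = 0
    for _ in range(n_shuffle):
        rng.shuffle(work)
        if min(ocn_error(work, k) for k in range(1, max_k + 1)) <= observed:
            count += 1
    return count / n_shuffle

# Synthetic division series: a warming drift plus year-to-year noise.
series = [10.0 + 0.3 * t + 0.5 * (-1) ** t for t in range(40)]
p = ocn_p_value(series)
```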

  15. Alphas and Asterisks: The Development of Statistical Significance Testing Standards in Sociology

    ERIC Educational Resources Information Center

    Leahey, Erin

    2005-01-01

    In this paper, I trace the development of statistical significance testing standards in sociology by analyzing data from articles published in two prestigious sociology journals between 1935 and 2000. I focus on the role of two key elements in the diffusion literature, contagion and rationality, as well as the role of institutional factors. I…

  16. Evaluating Statistical Significance Using Corrected and Uncorrected Magnitude of Effect Size Estimates.

    ERIC Educational Resources Information Center

    Snyder, Patricia; Lawson, Stephen

    Magnitude of effect measures (MEMs), when adequately understood and correctly used, are important aids for researchers who do not want to rely solely on tests of statistical significance in substantive result interpretation. The MEM tells how much of the dependent variable can be controlled, predicted, or explained by the independent variables.…

  17. Using the Descriptive Bootstrap to Evaluate Result Replicability (Because Statistical Significance Doesn't)

    ERIC Educational Resources Information Center

    Spinella, Sarah

    2011-01-01

    As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…

  18. Statistical Significance, Effect Size, and Replication: What Do the Journals Say?

    ERIC Educational Resources Information Center

    DeVaney, Thomas A.

    2001-01-01

    Studied the attitudes of representatives of journals in education, sociology, and psychology through an electronic survey completed by 194 journal representatives. Results suggest that the majority of journals do not have written policies concerning the reporting of results from statistical significance testing, and most indicated that statistical…

  19. Statistical Significance of the Trends in Monthly Heavy Precipitation Over the US

    SciTech Connect

    Mahajan, Salil; North, Dr. Gerald R.; Saravanan, Dr. R.; Genton, Dr. Marc G.

    2012-01-01

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's tau test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong.
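
    The non-parametric flavor of these tests can be illustrated with a Kendall's tau trend statistic and a Monte Carlo shuffle for its significance. This is a bare-bones sketch on a synthetic series, not the study's bootstrapping code, which operates on return-period exceedances of real station and GCM data.

```python
import random
from itertools import combinations

def kendall_tau(series):
    """Kendall's tau of a series against time: concordant minus discordant
    pair fraction, a standard non-parametric trend statistic."""
    pairs = list(combinations(range(len(series)), 2))
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i, j in pairs)
    return s / len(pairs)

def trend_p_value(series, n_perm=2000, seed=5):
    """Monte Carlo significance: shuffle the series to destroy any trend
    and count how often |tau| of a shuffle reaches the observed |tau|."""
    rng = random.Random(seed)
    observed = abs(kendall_tau(series))
    work = list(series)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(work)
        if abs(kendall_tau(work)) >= observed:
            count += 1
    return count / n_perm

# Hypothetical annual heavy-precipitation series with an upward drift:
series = [50.0 + 0.7 * t + 6.0 * (-1) ** t for t in range(30)]
p = trend_p_value(series)
```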

  20. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    ERIC Educational Resources Information Center

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
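
    A minimal sketch of one column-permutation strategy for PCA loadings follows; the data, the choice of the first component only, and the use of absolute loadings are illustrative assumptions, not the specific strategies compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
X[:, 0] += X[:, 1]            # give variables 0 and 1 a shared component

def first_pc_loadings(X):
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return np.abs(vt[0])      # absolute loadings of the variables on the first PC

observed = first_pc_loadings(X)

# Null model: permute each column independently, destroying between-variable structure
n_perm = 500
null = np.empty((n_perm, p))
for i in range(n_perm):
    Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    null[i] = first_pc_loadings(Xp)

# One-sided permutation p-value per variable
p_values = (null >= observed).mean(axis=0)
```

    Variables whose observed loadings exceed most of the permutation null are flagged as significant contributors.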

  1. Weighing the costs of different errors when determining statistical significance during monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Selecting appropriate significance levels when constructing confidence intervals and performing statistical analyses with rangeland monitoring data is not a straightforward process. This process is burdened by the conventional selection of “95% confidence” (i.e., Type I error rate, α = 0.05) as the d...

  2. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    ERIC Educational Resources Information Center

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  3. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    ERIC Educational Resources Information Center

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate unless "corrected" effect…

  4. The Effects of Electrode Impedance on Data Quality and Statistical Significance in ERP Recordings

    PubMed Central

    Kappenman, Emily S.; Luck, Steven J.

    2010-01-01

    To determine whether data quality is meaningfully reduced by high electrode impedance, EEG was recorded simultaneously from low- and high-impedance electrode sites during an oddball task. Low-frequency noise was found to be increased at high-impedance sites relative to low-impedance sites, especially when the recording environment was warm and humid. The increased noise at the high-impedance sites caused an increase in the number of trials needed to obtain statistical significance in analyses of P3 amplitude, but this could be partially mitigated by high-pass filtering and artifact rejection. High electrode impedance did not reduce statistical power for the N1 wave unless the recording environment was warm and humid. Thus, high electrode impedance may increase noise and decrease statistical power under some conditions, but these effects can be reduced by using a cool and dry recording environment and appropriate signal processing methods. PMID:20374541

  5. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html. PMID:26510657

  6. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  7. Mass spectrometry-based protein identification with accurate statistical significance assignment

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2015-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. Results: We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. Availability and implementation: The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25362092

  8. A Multi-Core Parallelization Strategy for Statistical Significance Testing in Learning Classifier Systems

    PubMed Central

    Rudd, James; Moore, Jason H.; Urbanowicz, Ryan J.

    2013-01-01

    Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology, where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our Python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear. PMID:24358057

  9. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  10. Clinical Significance of Additional Ablation of Atrial Premature Beats after Catheter Ablation for Atrial Fibrillation

    PubMed Central

    Kim, In-Soo; Yang, Pil-Sung; Kim, Tae-Hoon; Park, Junbeum; Park, Jin-Kyu; Uhm, Jae Sun; Joung, Boyoung; Lee, Moon Hyoung

    2016-01-01

    Purpose The clinical significance of post-procedural atrial premature beats immediately after catheter ablation for atrial fibrillation (AF) has not been clearly determined. We hypothesized that the provocation of immediate recurrence of atrial premature beats (IRAPB) and additional ablation improves the clinical outcome of AF ablation. Materials and Methods We enrolled 200 patients with AF (76.5% males; 57.4±11.1 years old; 64.3% paroxysmal AF) who underwent catheter ablation. Post-procedure IRAPB was defined as frequent atrial premature beats (≥6/min) under isoproterenol infusion (5 µg/min), monitored for 10 min after internal cardioversion, and we ablated mappable IRAPBs. Post-procedural IRAPB provocations were conducted in 100 patients. We compared the patients who showed IRAPB with those who did not. We also compared the IRAPB provocation group with 100 age-, sex-, and AF-type-matched patients who completed ablation without provocation (No-Test group). Results 1) Among the post-procedural IRAPB provocation group, 33% showed IRAPB and required additional ablation with a longer procedure time (p=0.001) than those without IRAPB, without increasing the complication rate. 2) During 18.0±6.6 months of follow-up, the patients who showed IRAPB had a worse clinical recurrence rate than those who did not (27.3% vs. 9.0%; p=0.016), in spite of additional IRAPB ablation. 3) However, the clinical recurrence rate was significantly lower in the IRAPB provocation group (15.0%) than in the No-Test group (28.0%; p=0.025), without lengthening the procedure time or raising the complication rate. Conclusion The presence of post-procedural IRAPB was associated with a higher recurrence rate after AF ablation. However, IRAPB provocation and additional ablation might facilitate a better clinical outcome. A further prospective randomized study is warranted. PMID:26632385

  11. On the statistical significance of the bulk flow measured by the Planck satellite

    NASA Astrophysics Data System (ADS)

    Atrio-Barandela, F.

    2013-09-01

    A recent analysis of data collected by the Planck satellite detected a net dipole at the location of X-ray selected galaxy clusters, corresponding to a large-scale bulk flow extending at least to z ~ 0.18, the median redshift of the cluster sample. The amplitude of this flow, as measured with Planck, is consistent with earlier findings based on data from the Wilkinson Microwave Anisotropy Probe (WMAP). However, the uncertainty assigned to the dipole by the Planck team is much larger than that found in the WMAP studies, leading the authors of the Planck study to conclude that the observed bulk flow is not statistically significant. Here, we show that two of the three implementations of random sampling used in the error analysis of the Planck study lead to systematic overestimates in the uncertainty of the measured dipole. Random simulations of the sky do not take into account that the actual realization of the sky leads to filtered data that have a 12% lower root-mean-square dispersion than the average simulation. Using rotations around the Galactic pole (the Z axis) increases the uncertainty of the X and Y components of the dipole and artificially reduces the significance of the dipole detection from 98-99% to less than 90% confidence. When either effect is taken into account, the corrected errors agree with those obtained using random distributions of clusters on Planck data, and the resulting statistical significance of the dipole measured by Planck is consistent with that of the WMAP results.

  12. Statistical significance of trends and trend differences in layer-average atmospheric temperature time series

    NASA Astrophysics Data System (ADS)

    Santer, B. D.; Wigley, T. M. L.; Boyle, J. S.; Gaffen, D. J.; Hnilo, J. J.; Nychka, D.; Parker, D. E.; Taylor, K. E.

    2000-03-01

    This paper examines trend uncertainties in layer-average free atmosphere temperatures arising from the use of different trend estimation methods. It also considers statistical issues that arise in assessing the significance of individual trends and of trend differences between data sets. Possible causes of these trends are not addressed. We use data from satellite and radiosonde measurements and from two reanalysis projects. To facilitate intercomparison, we compute from reanalyses and radiosonde data temperatures equivalent to those from the satellite-based Microwave Sounding Unit (MSU). We compare linear trends based on minimization of absolute deviations (LA) and minimization of squared deviations (LS). Differences are generally less than 0.05°C/decade over 1959-1996. Over 1979-1993, they exceed 0.10°C/decade for lower tropospheric time series and 0.15°C/decade for the lower stratosphere. Trend fitting by the LA method can degrade the lower-tropospheric trend agreement of 0.03°C/decade (over 1979-1996) previously reported for the MSU and radiosonde data. In assessing trend significance we employ two methods to account for temporal autocorrelation effects. With our preferred method, virtually none of the individual 1979-1993 trends in deep-layer temperatures are significantly different from zero. To examine trend differences between data sets we compute 95% confidence intervals for individual trends and show that these overlap for almost all data sets considered. Confidence intervals for lower-tropospheric trends encompass both zero and the model-projected trends due to anthropogenic effects. We also test the significance of a trend in d(t), the time series of differences between a pair of data sets. Use of d(t) removes variability common to both time series and facilitates identification of small trend differences. This more discerning test reveals that roughly 30% of the data set comparisons have significant differences in lower-tropospheric trends
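
    The LA-versus-LS comparison at the heart of this record can be sketched on a synthetic monthly series. The series, noise level, and trend size below are illustrative assumptions; the LA fit is obtained by direct numerical minimization of the sum of absolute deviations rather than by whatever solver the authors used:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.arange(216) / 12.0                       # 18 years of monthly time steps, in years
y = 0.05 * t + rng.normal(0.0, 0.3, t.size)     # synthetic layer-average anomaly (units arbitrary)

# LS trend: slope minimizing the sum of squared deviations
ls_slope, ls_intercept = np.polyfit(t, y, 1)

# LA trend: slope minimizing the sum of absolute deviations, found numerically
def sum_abs_dev(params):
    a, b = params
    return np.abs(y - (a * t + b)).sum()

res = minimize(sum_abs_dev, x0=[ls_slope, ls_intercept], method="Nelder-Mead")
la_slope = res.x[0]
```

    For well-behaved noise the two slopes nearly coincide; the paper's point is that on real atmospheric series the choice of estimator can shift trends by 0.05-0.15°C/decade.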

  13. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    SciTech Connect

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
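
    The G-test of homogeneity applied to the diversity counts can be sketched as below. The counts are hypothetical stand-ins for two bioherms, not the study's data; SciPy exposes the G statistic through `chi2_contingency` with the log-likelihood-ratio option:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of four faunal groups at two bioherms
counts = np.array([
    [30, 22, 10, 5],    # bioherm A
    [12, 30, 18, 9],    # bioherm B
])

# G-test of homogeneity: likelihood-ratio variant of the contingency-table test
g, p, dof, expected = chi2_contingency(counts, lambda_="log-likelihood")
```

    A small `p` rejects homogeneity, i.e. the faunal composition differs between the two bioherms beyond sampling noise.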

  14. A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects

    PubMed Central

    Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna

    2013-01-01

    Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936

  15. Statistically significant changes in ground thermal conditions of alpine Austria during the last decade

    NASA Astrophysics Data System (ADS)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c. 60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trend during the observation period, and regional pattern of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale a region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale no significant trend of any temperature-related parameter was in most cases revealed for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season reveals in general a significant warming as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise by temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter

  16. Statistics, Probability, Significance, Likelihood: Words Mean What We Define Them to Mean

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Tom, Brian D. M.

    2011-01-01

    Statisticians use words deliberately and specifically, but not necessarily in the way they are used colloquially. For example, in general parlance "statistics" can mean numerical information, usually data. In contrast, one large statistics textbook defines the term "statistic" to denote "a characteristic of a "sample", such as the average score",…

  17. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    SciTech Connect

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. Accuracy of universal kriging is compared with that for ordinary kriging; cross-validation is used as the basis for comparison. Hypothesis testing on these results shows that accuracy obtained using universal kriging is not significantly different from accuracy obtained using ordinary kriging. Tests based on normal distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest universal and ordinary kriging are different for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).

  18. Key statistics related to CO/sub 2/ emissions: Significant contributing countries

    SciTech Connect

    Kellogg, M.A.; Edmonds, J.A.; Scott, M.J.; Pomykala, J.S.

    1987-07-01

    This country selection task report describes and applies a methodology for identifying a set of countries responsible for significant present and anticipated future emissions of CO/sub 2/ and other radiatively important gases (RIGs). The identification of countries responsible for CO/sub 2/ and other RIGs emissions will help determine to what extent a select number of countries might be capable of influencing future emissions. Once identified, those countries could potentially exercise cooperative collective control of global emissions and thus mitigate the associated adverse effects of those emissions. The methodology developed consists of two approaches: the resource approach and the emissions approach. While conceptually very different, both approaches yield the same fundamental conclusion. The core of any international initiative to control global emissions must include three key countries: the US, USSR, and the People's Republic of China. It was also determined that broader control can be achieved through the inclusion of sixteen additional countries with significant contributions to worldwide emissions.

  19. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
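
    The general recipe — test each structural feature for enrichment in toxic compounds, then sum the weights of significant features into an additive score — can be sketched as below. The feature names, counts, significance cutoff, and -log10(p) weighting are all illustrative assumptions; the paper's exact weighting scheme may differ:

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 counts per structural feature:
# (toxic with feature, toxic without, non-toxic with, non-toxic without)
features = {
    "nitroaromatic": (40, 60, 10, 190),
    "hydroxyl":      (25, 75, 55, 145),
}

weights = {}
for name, (tw, t_wo, sw, s_wo) in features.items():
    # One-sided Fisher's exact test for enrichment in the toxic class
    _, p = fisher_exact([[tw, t_wo], [sw, s_wo]], alternative="greater")
    if p < 0.05:                      # keep only significantly enriched features
        weights[name] = -math.log10(p)

def toxicity_score(present_features):
    """Additive score: sum the weights of significant features a compound carries."""
    return sum(weights.get(f, 0.0) for f in present_features)
```

    New compounds are then ranked by `toxicity_score` over the features they contain, which is what makes the model directly interpretable.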

  20. Proteny: discovering and visualizing statistically significant syntenic clusters at the proteome level

    PubMed Central

    Gehrmann, Thies; Reinders, Marcel J.T.

    2015-01-01

    Background: With more and more genomes being sequenced, detecting synteny between genomes becomes more and more important. However, for microorganisms the genomic divergence quickly becomes large, resulting in different codon usage and shuffling of gene order and gene elements such as exons. Results: We present Proteny, a methodology to detect synteny between diverged genomes. It operates on the amino acid sequence level to be insensitive to codon usage adaptations and clusters groups of exons disregarding order to handle diversity in genomic ordering between genomes. Furthermore, Proteny assigns significance levels to the syntenic clusters such that they can be selected on statistical grounds. Finally, Proteny provides novel ways to visualize results at different scales, facilitating the exploration and interpretation of syntenic regions. We test the performance of Proteny on a standard ground truth dataset, and we illustrate the use of Proteny on two closely related genomes (two different strains of Aspergillus niger) and on two distant genomes (two species of Basidiomycota). In comparison to other tools, we find that Proteny finds clusters with more true homologies in fewer clusters that contain more genes, i.e. Proteny is able to identify a more consistent synteny. Further, we show how genome rearrangements, assembly errors, gene duplications and the conservation of specific genes can be easily studied with Proteny. Availability and implementation: Proteny is freely available at the Delft Bioinformatics Lab website http://bioinformatics.tudelft.nl/dbl/software. Contact: t.gehrmann@tudelft.nl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26116928

  1. Biological meaning, statistical significance, and classification of local spatial similarities in nonhomologous proteins.

    PubMed Central

    Alexandrov, N. N.; Go, N.

    1994-01-01

    We have completed an exhaustive search for the common spatial arrangements of backbone fragments (SARFs) in nonhomologous proteins. This type of local structural similarity, incorporating short fragments of backbone atoms, arranged not necessarily in the same order along the polypeptide chain, appears to be important for protein function and stability. To estimate the statistical significance of the similarities, we have introduced a similarity score. We present several locally similar structures, with a large similarity score, which have not yet been reported. On the basis of the results of pairwise comparison, we have performed hierarchical cluster analysis of protein structures. Our analysis is not limited to comparison of single chains but also includes complex molecules consisting of several subunits. The SARFs with backbone fragments from different polypeptide chains provide a stable interaction between subunits in protein molecules. In many cases the active site of the enzyme is located at the same position relative to the common SARFs, implying that certain SARFs function as a universal interface for the protein-substrate interaction. PMID:8069217

  2. Significantly improved cyclability of lithium manganese oxide under elevated temperature by an easily oxidized electrolyte additive

    NASA Astrophysics Data System (ADS)

    Zhu, Yunmin; Rong, Haibo; Mai, Shaowei; Luo, Xueyi; Li, Xiaoping; Li, Weishan

    2015-12-01

    Spinel lithium manganese oxide, LiMn2O4, is a promising cathode for lithium-ion batteries in large-scale applications, because it possesses many advantages over the currently used layered lithium cobalt oxide (LiCoO2) and olivine phosphate (LiFePO4), including naturally abundant resources, environmental friendliness, and a high and long working potential plateau. Its poor cyclability at elevated temperature, however, limits its application. In this work, we report a significant cyclability improvement of LiMn2O4 under elevated temperature by using dimethyl phenylphosphonite (DMPP) as an electrolyte additive. Charge/discharge tests demonstrate that the application of 0.5 wt.% DMPP yields a capacity retention improvement from 16% to 82% for LiMn2O4 after 200 cycles at 55 °C and 1 C (1 C = 148 mA g-1) between 3 and 4.5 V. Electrochemical and physical characterizations indicate that DMPP is electrochemically oxidized at a potential lower than that for lithium extraction, forming a protective cathode interphase on LiMn2O4, which suppresses electrolyte decomposition and prevents LiMn2O4 from crystal destruction.

  3. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background: Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods: Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results: The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions: The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
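A minimal sketch of the bootstrap procedure described above, with simulated data standing in for the CKD responses (the three-group structure, effect sizes, and r ≈ 0.7 correlation below are hypothetical illustration values, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def anova_f(y, g):
    """One-way ANOVA F-statistic for scores y across group labels g."""
    groups = [y[g == k] for k in np.unique(g)]
    grand = y.mean()
    ss_between = sum(len(x) * (x.mean() - grand) ** 2 for x in groups)
    ss_within = sum(((x - x.mean()) ** 2).sum() for x in groups)
    return (ss_between / (len(groups) - 1)) / (ss_within / (len(y) - len(groups)))

# Hypothetical stand-in data: three clinically defined groups, a reference
# measure, and a comparator correlated with it.
n = 150
g = np.repeat([0, 1, 2], 50)
ref = g + rng.normal(0.0, 1.0, n)           # reference (most discriminating)
comp = 0.7 * ref + rng.normal(0.0, 0.8, n)  # comparator measure

rv = anova_f(comp, g) / anova_f(ref, g)     # relative validity

# Bootstrap patients; resampling whole rows keeps each patient's scores paired.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(anova_f(comp[idx], g[idx]) / anova_f(ref[idx], g[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"RV = {rv:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

An RV whose interval excludes 1 discriminates significantly worse (or better) than the reference measure.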

  4. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  5. Investigation of smart inspection of critical layer reticles using additional designer data to determine defect significance

    NASA Astrophysics Data System (ADS)

    Volk, William W.; Hess, Carl; Ruch, Wayne; Yu, Zongchang; Ma, Weimin; Fisher, Lisa; Vickery, Carl; Ma, Z. Mark

    2003-12-01

    With expected implementation of low k1 lithography on 193nm scanners for 65nm node wafer production, high resolution defect inspection will be needed to ensure reticle quality and reticle manufacturing process monitoring. Reticle cost and reticle defectivity are both increasing with each shrink to the next node. Simultaneously, system on chip (SoC) designs are increasing in which a large area of the exposure field typically contains dummy patterns and other features which are not electrically active. Knowing which defects will electrically impact device yield and performance can improve reticle manufacturing yield and cycle time -- resulting in lower reticle costs. This investigation examines the feasibility of using additional design data layers for die-to-database reticle inspection to determine in real time the relevance of a reticle defect by its location in the device (Smart Inspection™). The impact on data preparation and inspection throughput is evaluated. The current prototype algorithm is built on the XPA and XPE die-to-database algorithms for chrome-on-glass and EPSM reticles, respectively. The algorithms implement variable sensitivity based on the additional design data regions. During defect review the defects are intelligently binned into the different predetermined design regions. Tests show the new Smart Inspection algorithm provides the capability of using higher than normal sensitivity in critical regions while reducing sensitivity in less critical regions to filter total defect counts and allow for the review of just the defects that matter. Performance characterization of a variable sensitivity Smart Inspection algorithm is discussed, in addition to the filtering of the total defect count during review to show the defects that matter to device performance. Using seven critical layer production reticles from a system on chip device we examine the applications of Smart Inspection by layer, including active, poly, contact, metal and via layers.
Data volume

  6. Statistical Versus Clinical Significance for Infants with Brain Injury: Reanalysis of Outcome Data from a Randomized Controlled Study

    PubMed Central

    Badr, Lina Kurdahi

    2009-01-01

    By adopting more appropriate statistical methods to appraise data from a previously published randomized controlled trial (RCT), we evaluated the statistical and clinical significance of an intervention on the 18-month neurodevelopmental outcome of infants with suspected brain injury. The intervention group (n = 32) received extensive, individualized cognitive/sensorimotor stimulation by public health nurses (PHNs) while the control group (n = 30) received standard follow-up care. At 18 months, 43 infants remained in the study (22 intervention, 21 control). The results indicate a statistically significant change within groups and a clinically significant effect whereby more infants in the intervention group improved in mental, motor and neurological functioning at 18 months compared to the control group. The benefits of examining clinical significance from a perspective meaningful to practitioners are emphasized. PMID:19276403

  7. Statistical physics inspired methods to assign statistical significance in bioinformatics and proteomics: From sequence comparison to mass spectrometry based peptide sequencing

    NASA Astrophysics Data System (ADS)

    Alves, Gelio

    After the sequencing of many complete genomes, we are in a post-genomic era in which the most important task has changed from gathering genetic information to organizing the mass of data as well as understanding how components interact with each other. The former is usually undertaken using bioinformatics methods, while the latter task is generally termed proteomics. Success in both parts demands correct statistical significance assignments for the results found. In my dissertation, I study two concrete examples: global sequence alignment statistics and peptide sequencing/identification using mass spectrometry. High-performance liquid chromatography coupled to a mass spectrometer (HPLC/MS/MS), enabling peptide identifications and thus protein identifications, has become the tool of choice in large-scale proteomics experiments. Peptide identification is usually done by database search methods. The lack of robust statistical significance assignment among current methods motivated the development of a novel de novo algorithm, RAId, whose score statistics then provide statistical significance for high-scoring peptides found in our custom, enzyme-digested peptide library. The ease of incorporating post-translational modifications is another important feature of RAId. To organize the massive protein/DNA data accumulated, biologists often cluster proteins according to their similarity via tools such as sequence alignment. Homologous proteins share similar domains. To assess the similarity of two domains usually requires alignment from head to toe, i.e., a global alignment. Good alignment score statistics with an appropriate null model enable us to distinguish biologically meaningful similarity from chance similarity. There has been much progress in local alignment statistics, which characterize score statistics when alignments tend to appear as a short segment of the whole sequence.
For global alignment, which is useful in domain alignment, there is still much room for

  8. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

    Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in a real-time manner at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflicting among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited and the performance benefits of this exploitation are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves that have been obtained through processing this multilook data for the high resolution SAR data of the Veridian X-Band radar. We discuss the implications of these results on mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.

  9. Determination of significant variables in compound wear using a statistical model

    SciTech Connect

    Pumwa, J.; Griffin, R.B.; Smith, C.M.

    1997-07-01

    This paper will report on a study of dry compound wear of normalized 1018 steel on A2 tool steel. Compound wear is a combination of sliding and impact wear. The compound wear machine consisted of an A2 tool steel wear plate that could be rotated, and an indentor head that held the 1018 carbon steel wear pins. The variables in the system were the rpm of the wear plate, the force with which the indentor strikes the wear plate, and the frequency with which the indentor strikes the wear plate. A statistically designed experiment was used to analyze the effects of the different variables on the compound wear process. The model developed showed that wear could be reasonably well predicted using a defined variable that was called the workrate. The paper will discuss the results of the modeling and the metallurgical changes that occurred at the indentor interface, with the wear plate, during the wear process.

  10. Statistical significance of hair analysis of clenbuterol to discriminate therapeutic use from contamination.

    PubMed

    Krumbholz, Aniko; Anielski, Patricia; Gfrerer, Lena; Graw, Matthias; Geyer, Hans; Schänzer, Wilhelm; Dvorak, Jiri; Thieme, Detlef

    2014-01-01

    Clenbuterol is a well-established β2-agonist, which is prohibited in sports and strictly regulated for use in the livestock industry. During the last few years, clenbuterol-positive results in doping controls and in samples from residents of or travellers from a high-risk country were suspected to be related to the illegal use of clenbuterol for fattening. A sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to detect low clenbuterol residues in hair with a detection limit of 0.02 pg/mg. A sub-therapeutic application study and a field study with volunteers, who have a high risk of contamination, were performed. For the application study, a total dosage of 30 µg clenbuterol was applied to 20 healthy volunteers on 5 subsequent days. One month after the beginning of the application, clenbuterol was detected in the proximal hair segment (0-1 cm) in concentrations between 0.43 and 4.76 pg/mg. For the second part, samples of 66 Mexican soccer players were analyzed. In 89% of these volunteers, clenbuterol was detectable in their hair at concentrations between 0.02 and 1.90 pg/mg. A comparison of both parts showed no statistical difference between sub-therapeutic application and contamination. In contrast, discrimination from a typical abuse of clenbuterol is apparently possible. Based on these findings, results of real doping control samples can be evaluated. PMID:25388545

  11. Clinical Significance: A Statistical Approach to Defining Meaningful Change in Psychotherapy Research.

    ERIC Educational Resources Information Center

    Jacobson, Neil S.; Truax, Paula

    1991-01-01

    Describes ways of operationalizing clinically significant change, defined as extent to which therapy moves someone outside range of dysfunctional population or within range of functional population. Uses examples to show how clients can be categorized on basis of this definition. Proposes reliable change index (RC) to determine whether magnitude…
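The reliable change index (RC) proposed by Jacobson and Truax is simple to compute: the pre-post difference divided by the standard error of the difference score. A minimal sketch (the scores, SD, and reliability below are made-up illustration values):

```python
import math

def reliable_change(pre, post, sd_pre, reliability):
    """Jacobson-Truax reliable change index; |RC| > 1.96 means the change
    is unlikely (p < .05) to be due to measurement error alone."""
    sem = sd_pre * math.sqrt(1.0 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                # SE of a difference score
    return (post - pre) / s_diff

# Hypothetical client: a symptom score drops from 32.5 to 20.4 on a scale
# with pretest SD 7.5 and test-retest reliability .88.
rc = reliable_change(pre=32.5, post=20.4, sd_pre=7.5, reliability=0.88)
print(round(rc, 2))  # → -3.29, i.e. a reliable improvement
```

Classifying the client as recovered additionally requires the cutoff criterion, i.e. whether the posttest score falls within the functional range.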

  12. A Visitor's Guide to Effect Sizes--Statistical Significance versus Practical (Clinical) Importance of Research Findings

    ERIC Educational Resources Information Center

    Hojat, Mohammadreza; Xu, Gang

    2004-01-01

    Effect Sizes (ES) are an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: (1) "ES" is a useful indicator of the…

  13. A Proposed New "What if Reliability" Analysis for Assessing the Statistical Significance of Bivariate Relationships

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Roberts, J. Kyle; Daniel, Larry G.

    2005-01-01

    In this article, the authors (a) illustrate how displaying disattenuated correlation coefficients alongside their unadjusted counterparts will allow researchers to assess the impact of unreliability on bivariate relationships and (b) demonstrate how a proposed new "what if reliability" analysis can complement null hypothesis significance tests of…

  14. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.

  15. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
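The difficulty can be made concrete with an exact (Clopper-Pearson) binomial interval, which remains valid even with only a handful of observed errors; this is a standard construction shown for context, not necessarily the extension Massey proposes:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); only k+1 terms, cheap for small k."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) interval for an error probability
    estimated from k observed errors in n trials."""
    def solve(f, increasing):
        lo, hi = 0.0, 1.0
        for _ in range(100):               # bisection on a monotone function
            mid = (lo + hi) / 2
            if (f(mid) < alpha / 2) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p), True)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p), False)
    return lower, upper

# Two decoding errors in a million trials: the point estimate 2e-6 carries
# an interval spanning more than an order of magnitude.
k, n = 2, 10**6
lo, hi = clopper_pearson(k, n)
print(f"p_hat = {k/n:.1e}, 95% CI = ({lo:.2e}, {hi:.2e})")
```

With k = 2 the interval runs from roughly 2.4e-7 to 7.2e-6, illustrating how much uncertainty two observed errors still leave.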

  16. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  17. Myths and Misconceptions Revisited - What are the (Statistically Significant) methods to prevent employee injuries

    SciTech Connect

    Potts, T.T.; Hylko, J.M.; Almond, D.

    2007-07-01

    A company's overall safety program becomes an important consideration to continue performing work and for procuring future contract awards. When injuries or accidents occur, the employer ultimately loses on two counts - increased medical costs and employee absences. This paper summarizes the human and organizational components that contributed to successful safety programs implemented by WESKEM, LLC's Environmental, Safety, and Health Departments located in Paducah, Kentucky, and Oak Ridge, Tennessee. The philosophy of 'safety, compliance, and then production' and programmatic components implemented at the start of the contracts were qualitatively identified as contributing factors resulting in a significant accumulation of safe work hours and an Experience Modification Rate (EMR) of <1.0. Furthermore, a study by the Associated General Contractors of America quantitatively validated components, already found in the WESKEM, LLC programs, as contributing factors to prevent employee accidents and injuries. Therefore, an investment in the human and organizational components now can pay dividends later by reducing the EMR, which is the key to reducing Workers' Compensation premiums. Also, knowing your employees' demographics and taking an active approach to evaluate and prevent fatigue may help employees balance work and non-work responsibilities. In turn, this approach can assist employers in maintaining a healthy and productive workforce. For these reasons, it is essential that safety needs be considered as the starting point when performing work. (authors)

  18. Test of significant toxicity: a statistical application for assessing whether an effluent or site water is truly toxic.

    PubMed

    Denton, Debra L; Diamond, Jerry; Zheng, Lei

    2011-05-01

    The U.S. Environmental Protection Agency (U.S. EPA) and state agencies implement the Clean Water Act, in part, by evaluating the toxicity of effluent and surface water samples. A common goal for both regulatory authorities and permittees is confidence in an individual test result (e.g., no-observed-effect concentration [NOEC], pass/fail, 25% effective concentration [EC25]), which is used to make regulatory decisions, such as reasonable potential determinations, permit compliance, and watershed assessments. This paper discusses an additional statistical approach (test of significant toxicity [TST]), based on bioequivalence hypothesis testing, or, more appropriately, test of noninferiority, which examines whether there is a nontoxic effect at a single concentration of concern compared with a control. Unlike the traditional hypothesis testing approach in whole effluent toxicity (WET) testing, TST is designed to incorporate explicitly both α and β error rates at levels of toxicity that are unacceptable and acceptable, given routine laboratory test performance for a given test method. Regulatory management decisions are used to identify unacceptable toxicity levels for acute and chronic tests, and the null hypothesis is constructed such that test power is associated with the ability to declare correctly a truly nontoxic sample as acceptable. This approach provides a positive incentive to generate high-quality WET data to make informed decisions regarding regulatory decisions. This paper illustrates how α and β error rates were established for specific test method designs and tests the TST approach using both simulation analyses and actual WET data. In general, those WET test endpoints having higher routine (e.g., 50th percentile) within-test control variation, on average, have higher method-specific α values (type I error rate), to maintain a desired type II error rate. This paper delineates the technical underpinnings of this approach and demonstrates the benefits

  19. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interests to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer. PMID:26379363

  20. Detecting multiple periodicities in observational data with the multifrequency periodogram - I. Analytic assessment of the statistical significance

    NASA Astrophysics Data System (ADS)

    Baluev, Roman V.

    2013-11-01

    We consider the 'multifrequency' periodogram, in which the putative signal is modelled as a sum of two or more sinusoidal harmonics with independent frequencies. It is useful in cases when the data may contain several periodic components, especially when their interaction with each other and with the data sampling patterns might produce misleading results. Although the multifrequency statistic itself was constructed earlier, for example by G. Foster in his CLEANest algorithm, its probabilistic properties (the detection significance levels) are still poorly known and much of what is deemed known is not rigorous. These detection levels are nonetheless important for data analysis. We argue that to prove the simultaneous existence of all n components revealed in a multiperiodic variation, it is mandatory to apply at least 2n - 1 significance tests, among which most involve various multifrequency statistics, and only n tests are single-frequency ones. The main result of this paper is an analytic estimation of the statistical significance of the frequency tuples that the multifrequency periodogram can reveal. Using the theory of extreme values of random fields (the generalized Rice method), we find a useful approximation to the relevant false alarm probability. For the double-frequency periodogram, this approximation is given by the elementary formula (π/16) W² z² e^(−z), where W denotes the normalized width of the settled frequency range, and z is the observed periodogram maximum. We carried out intensive Monte Carlo simulations to show that the practical quality of this approximation is satisfactory. A similar analytic expression for the general multifrequency periodogram is also given, although with less numerical verification.
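The quoted approximation is elementary to evaluate; a minimal sketch (the example values of z and W are arbitrary):

```python
from math import pi, exp

def fap_double(z, W):
    """Baluev's approximate false alarm probability for the
    double-frequency periodogram: (pi/16) * W^2 * z^2 * exp(-z)."""
    return (pi / 16) * W**2 * z**2 * exp(-z)

# A periodogram maximum of z = 20 over a normalized frequency width W = 100:
print(f"FAP ~ {fap_double(20, 100):.1e}")
```

Because e^(−z) dominates, the FAP falls rapidly with the observed maximum; values of the formula above 1 simply indicate no significance.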

  1. Statistically significant relationship between the great volcanic eruptions and the count of sunspots from 1610 to the present

    NASA Astrophysics Data System (ADS)

    Casati, Michele

    2014-05-01

    The assertion that solar activity may play a significant role in triggering large volcanic eruptions is, and has been, discussed by many geophysicists. Numerous scientific papers have established a possible correlation between these events and the electromagnetic coupling between the Earth and the Sun, but none of them has been able to highlight a possible statistically significant relationship between large volcanic eruptions and any of the series, such as geomagnetic activity, solar wind, or sunspot number. In our research, we compare the 148 volcanic eruptions with index VEI4 and the major 37 historical volcanic eruptions equal to or greater than index VEI5, recorded from 1610 to 2012, with the corresponding sunspot numbers. Taking as the threshold value a monthly sunspot number of 46 (recorded during the great Krakatoa eruption of August 1883, historical index VEI6), we note some possible relationships and conduct a statistical test. • Of the historical 31 large volcanic eruptions with index VEI5+ recorded between 1610 and 1955, 29 were recorded when the SSN < 46. The remaining 2 eruptions were recorded not when the SSN < 46 but during solar maxima: the Shikotsu eruption of 1739 and the Ksudach eruption of 1907 (solar cycle No. 14). • Of the historical 8 large volcanic eruptions with index VEI6+ recorded from 1610 to the present, 7 were recorded with SSN < 46 and, more specifically, within the three known large solar minima: the Maunder (1645-1710), the Dalton (1790-1830), and the solar minima that occurred between 1880 and 1920. As the only exception, we note the eruption of Pinatubo in June 1991, recorded at the solar maximum of cycle 22. • Of the historical 6 major volcanic eruptions with index VEI5+ recorded after 1955, 5 were recorded not during periods of low solar activity but during solar maxima of cycles 19, 21 and 22. The significance tests, conducted with the chi-square statistic (χ² = 7.782), detect a
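A chi-square test of this kind compares the observed split of eruptions against the split expected if eruptions were independent of solar activity. A minimal sketch; the 60% figure for the fraction of months with SSN < 46 is a hypothetical assumption for illustration, so the resulting statistic is not the abstract's χ² = 7.782:

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic for observed vs. expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 29 of 31 VEI5+ eruptions (1610-1955) occurred while SSN < 46.  Suppose,
# hypothetically, that months with SSN < 46 make up 60% of the record; the
# expected split under independence would then be 18.6 vs. 12.4.
observed = [29, 2]
expected = [31 * 0.60, 31 * 0.40]
chi2 = chi_square(observed, expected)
print(round(chi2, 2))  # compare to the df = 1 critical value 3.84 (p = 0.05)
```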

  2. Mechanical and Electrical Properties of a Polyimide Film Significantly Enhanced by the Addition of Single-Wall Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Meador, Michael A.

    2005-01-01

    Single-wall carbon nanotubes have been shown to possess a combination of outstanding mechanical, electrical, and thermal properties. The use of carbon nanotubes as an additive to improve the mechanical properties of polymers and/or enhance their thermal and electrical conductivity has been a topic of intense interest. Nanotube-modified polymeric materials could find a variety of applications in NASA missions including large-area antennas, solar arrays, and solar sails; radiation shielding materials for vehicles, habitats, and extravehicular activity suits; and multifunctional materials for vehicle structures and habitats. Use of these revolutionary materials could reduce vehicle weight significantly and improve vehicle performance and capabilities.

  3. Combined Statistical Analyses of Peptide Intensities and Peptide Occurrences Improves Identification of Significant Peptides from MS-based Proteomics Data

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; McCue, Lee Ann; Waters, Katrina M.; Matzke, Melissa M.; Jacobs, Jon M.; Metz, Thomas O.; Varnum, Susan M.; Pounds, Joel G.

    2010-11-01

    Liquid chromatography-mass spectrometry-based (LC-MS) proteomics uses peak intensities of proteolytic peptides to infer the differential abundance of peptides/proteins. However, substantial run-to-run variability in peptide intensities and observations (presence/absence) of peptides makes data analysis quite challenging. The missing abundance values in LC-MS proteomics data are difficult to address with traditional imputation-based approaches because the mechanisms by which data are missing are unknown a priori. Data can be missing due to random mechanisms such as experimental error, or non-random mechanisms such as a true biological effect. We present a statistical approach that uses a test of independence known as a G-test to test the null hypothesis of independence between the number of missing values and the experimental groups. We pair the G-test results evaluating independence of missing data (IMD) with a standard analysis of variance (ANOVA) that uses only means and variances computed from the observed data. Each peptide is therefore represented by two statistical confidence metrics, one for qualitative differential observation and one for quantitative differential intensity. We use two simulated and two real LC-MS datasets to demonstrate the robustness and sensitivity of the ANOVA-IMD approach for assigning confidence to peptides with significant differential abundance among experimental groups.
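The G-test used for the IMD step is a likelihood-ratio test of independence on the counts of observed versus missing values per experimental group. A minimal sketch with hypothetical 2×2 counts for a single peptide (a real analysis pairs this qualitative test with the ANOVA on observed intensities, as described above):

```python
from math import log

def g_test(table):
    """G-statistic (likelihood-ratio chi-square) for independence of an
    r x c contingency table; compare to chi-square with (r-1)(c-1) df."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    g = 0.0
    for i, r in enumerate(table):
        for j, o in enumerate(r):
            if o:                              # zero cells contribute nothing
                e = row[i] * col[j] / total    # expected count under independence
                g += 2 * o * log(o / e)
    return g

# Hypothetical peptide: observed / missing run counts in two groups
table = [[9, 1],   # control: observed in 9 of 10 runs
         [2, 8]]   # treated: observed in 2 of 10 runs
print(round(g_test(table), 2))  # → 11.02; df = 1, > 3.84 suggests non-random missingness
```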

  4. The statistical significance test of regional climate change caused by land use and land cover variation in West China

    NASA Astrophysics Data System (ADS)

    Wang, H. J.; Shi, W. L.; Chen, X. H.

    2006-05-01

    The West Development Policy being implemented in China is causing significant land use and land cover (LULC) changes in West China. With the up-to-date satellite database of the Global Land Cover Characteristics Database (GLCCD) that characterizes the lower boundary conditions, the regional climate model RIEMS-TEA is used to simulate possible impacts of the significant LULC variation. The model was run for five continuous three-month periods from 1 June to 1 September of 1993, 1994, 1995, 1996, and 1997, and the results of the five groups are examined by means of a Student's t-test to identify the statistical significance of regional climate variation. The main results are: (1) The regional climate is affected by the LULC variation because the equilibrium of water and heat transfer in the air-vegetation interface is changed. (2) The integrated impact of the LULC variation on regional climate is not limited to West China where the LULC varies, but extends to some areas in the model domain where the LULC does not vary at all. (3) The East Asian monsoon system and its vertical structure are adjusted by the large-scale LULC variation in western China, where the consequences are the enhancement of the westward water vapor transfer from the east and the relevant increase of wet-hydrostatic energy in the middle-upper atmospheric layers. (4) The ecological engineering in West China significantly affects the regional climate in Northwest China, North China and the middle-lower reaches of the Yangtze River; there are obvious effects in South, Northeast, and Southwest China, but minor effects in Tibet.

  5. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    USGS Publications Warehouse

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
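The significance of alarm-based predictions like these can be checked with a binomial tail: under the null hypothesis, each target earthquake falls inside the alarmed space-time volume with probability equal to the alarm fraction. A minimal sketch using the counts quoted in the abstract (this is the generic null-model calculation, not necessarily the authors' exact procedure):

```python
from math import comb

def chance_prob(hits, total, alarm_fraction):
    """P(at least `hits` of `total` target events fall inside the alarmed
    space-time volume by chance) under a random-occurrence null."""
    p = alarm_fraction
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(hits, total + 1))

# M8:  all 5 M >= 8 events predicted, alarms covering 36% of space-time.
# MSc: 4 of those 5 located correctly, alarms covering 18%.
for hits, total, frac in [(5, 5, 0.36), (4, 5, 0.18)]:
    print(f"significance: {1 - chance_prob(hits, total, frac):.1%}")
```

Both cases come out beyond 99%, consistent with the significance levels quoted in the abstract.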

  6. Significant reserve additions from Oligocene Hackberry Sands utilizing 3-D seismic, upper Texas and Louisiana Gulf Coast

    SciTech Connect

    Zamboras, R.L.

    1995-10-01

    The Oligocene Hackberry sands of the Hackberry Embayment represent a complex and elusive exploration target. 3-D seismic evaluation along the headward erosional limits of the embayment provides a reconstructive framework of tectonic and sedimentation patterns which facilitates hydrocarbon exploration. The 3-D seismic along the Orange County, Texas portion of the Oligocene Hackberry trend indicates: (1) similarities of Hackberry structural and depositional setting to that of the underlying Eocene Yegua Formation; (2) four distinct cyclical sedimentation episodes associated with basin floor slump faulting; (3) the usefulness of seismic attributes as direct hydrocarbon indicators; and (4) the potential for significant oil and gas reserve additions in a mature trend. The Hackberry embayment represents a microcosm of the basin structural and depositional processes. Utilizing 3-D seismic to lower risk and finding costs will renew interest in trends such as the Hackberry of the Upper Texas-Louisiana Gulf Coast.

  7. Appropriate Fe (II) Addition Significantly Enhances Anaerobic Ammonium Oxidation (Anammox) Activity through Improving the Bacterial Growth Rate

    PubMed Central

    Liu, Yiwen; Ni, Bing-Jie

    2015-01-01

    The application of the anaerobic ammonium oxidation (Anammox) process is often limited by the slow growth rate of Anammox bacteria. As an essential substrate element required for culturing Anammox sludge, Fe (II) is expected to affect Anammox bacterial growth. This work systematically studied the effects of Fe (II) addition on Anammox activity based on the kinetic analysis of specific growth rate using data from batch tests with an enriched Anammox sludge at different dosing levels. Results clearly demonstrated that appropriate Fe (II) dosing (i.e., 0.09 mM) significantly enhanced the specific Anammox growth rate up to 0.172 d−1 compared to 0.118 d−1 at the regular Fe (II) level (0.03 mM). The relationship between Fe (II) concentration and specific Anammox growth rate was found to be well described by typical substrate inhibition kinetics, which was integrated into a well-established Anammox model to describe the enhanced Anammox growth with Fe (II) addition. The validity of the integrated Anammox model was verified using long-term experimental data from three independent Anammox reactors with different Fe (II) dosing levels. This Fe (II)-based approach could potentially be implemented to enhance the process rate for possible mainstream application of Anammox technology, toward energy-autarkic wastewater treatment. PMID:25644239
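    The reported dose-response (growth enhanced at 0.09 mM but inhibited at higher doses) is the signature of substrate-inhibition (Haldane/Andrews) kinetics. A minimal sketch follows; the parameter values mu_max, Ks and Ki are illustrative choices, not the fitted values from the study.

```python
def mu_haldane(S, mu_max=0.3, Ks=0.05, Ki=0.5):
    """Specific growth rate under substrate-inhibition (Haldane/Andrews)
    kinetics: mu = mu_max * S / (Ks + S + S^2/Ki).
    Parameters are illustrative, not the study's fitted values."""
    return mu_max * S / (Ks + S + S**2 / Ki)

for S in (0.03, 0.09, 1.0):  # Fe(II) concentration in mM
    print(f"mu({S} mM) = {mu_haldane(S):.3f} d^-1")
```

    With these placeholder parameters the curve rises from 0.03 to 0.09 mM and falls again at high Fe (II), reproducing the qualitative shape the abstract describes.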

  8. Appropriate Fe (II) Addition Significantly Enhances Anaerobic Ammonium Oxidation (Anammox) Activity through Improving the Bacterial Growth Rate

    NASA Astrophysics Data System (ADS)

    Liu, Yiwen; Ni, Bing-Jie

    2015-02-01

    The application of the anaerobic ammonium oxidation (Anammox) process is often limited by the slow growth rate of Anammox bacteria. As an essential substrate element required for culturing Anammox sludge, Fe (II) is expected to affect Anammox bacterial growth. This work systematically studied the effects of Fe (II) addition on Anammox activity based on the kinetic analysis of specific growth rate using data from batch tests with an enriched Anammox sludge at different dosing levels. Results clearly demonstrated that appropriate Fe (II) dosing (i.e., 0.09 mM) significantly enhanced the specific Anammox growth rate up to 0.172 d-1 compared to 0.118 d-1 at the regular Fe (II) level (0.03 mM). The relationship between Fe (II) concentration and specific Anammox growth rate was found to be well described by typical substrate inhibition kinetics, which was integrated into a well-established Anammox model to describe the enhanced Anammox growth with Fe (II) addition. The validity of the integrated Anammox model was verified using long-term experimental data from three independent Anammox reactors with different Fe (II) dosing levels. This Fe (II)-based approach could potentially be implemented to enhance the process rate for possible mainstream application of Anammox technology, toward energy-autarkic wastewater treatment.

  9. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    PubMed

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457
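    What "statistically significant at the P < 0.05 level" means for a scatterplot can be made concrete with a small simulation (the variable names, effect size, and seed below are illustrative, not taken from the study):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 100
x = rng.standard_normal(n)

y_signal = x + rng.standard_normal(n)  # a real linear relationship (true r ~ 0.7)
y_noise = rng.standard_normal(n)       # no relationship at all

r_sig, p_sig = pearsonr(x, y_signal)
r_nse, p_nse = pearsonr(x, y_noise)
print(f"signal: r = {r_sig:.2f}, p = {p_sig:.2g}")  # significant
print(f"noise:  r = {r_nse:.2f}, p = {p_nse:.2g}")
```

    Plotting (x, y_signal) and (x, y_noise) side by side, and asking whether each looks "significant" before checking the p-values, is essentially the task the MOOC subjects performed.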

  10. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    PubMed Central

    Fisher, Aaron; Anderson, G. Brooke; Peng, Roger

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457

  11. An Exploratory Statistical Analysis of a Planet Approach-Phase Guidance Scheme Using Angular Measurements with Significant Error

    NASA Technical Reports Server (NTRS)

    Friedlander, Alan L.; Harry, David P., III

    1960-01-01

    An exploratory analysis of vehicle guidance during the approach to a target planet is presented. The objective of the guidance maneuver is to guide the vehicle to a specific perigee distance with a high degree of accuracy and minimum corrective velocity expenditure. The guidance maneuver is simulated by considering the random sampling of real measurements with significant error and reducing this information to prescribe appropriate corrective action. The instrumentation system assumed includes optical and/or infrared devices to indicate range and a reference angle in the trajectory plane. Statistical results are obtained by Monte-Carlo techniques and are shown as the expectation of guidance accuracy and velocity-increment requirements. Results are nondimensional and applicable to any planet within limits of two-body assumptions. The problem of determining how many corrections to make and when to make them is a consequence of the conflicting requirement of accurate trajectory determination and propulsion. Optimum values were found for a vehicle approaching a planet along a parabolic trajectory with an initial perigee distance of 5 radii and a target perigee of 1.02 radii. In this example measurement errors were less than 1 minute of arc. Results indicate that four corrections applied in the vicinity of 50, 16, 15, and 1.5 radii, respectively, yield minimum velocity-increment requirements. Thrust devices capable of producing a large variation of velocity-increment size are required. For a vehicle approaching the earth, miss distances within 32 miles are obtained with 90-percent probability. Total velocity increments used in guidance are less than 3300 feet per second with 90-percent probability. It is noted that the above representative results are valid only for the particular guidance scheme hypothesized in this analysis. A parametric study is presented which indicates the effects of measurement error size, initial perigee, and initial energy on the guidance

  12. A Study to Determine if Addition of Palatal Petechiae to Centor Criteria Adds More Significance to Clinical Diagnosis of Acute Strep Pharyngitis in Children.

    PubMed

    Nibhanipudi, Kumara V

    2016-01-01

    Objective. A study to determine if addition of palatal petechiae to Centor criteria adds more value for clinical diagnosis of acute strep pharyngitis in children. Hypothesis. In children, Centor Criteria does not cover all the symptoms and signs of acute strep pharyngitis. We hypothesize that addition of palatal petechiae to Centor Criteria will increase the possibility of clinical diagnosis of group A streptococcal pharyngitis in children. Methods. One hundred patients with a complaint of sore throat were enrolled in the study. All the patients were examined clinically using the Centor Criteria. They were also examined for other signs and symptoms like petechial lesions over the palate, abdominal pain, and skin rash. All the patients were given rapid strep tests, and throat cultures were sent. No antibiotics were given until culture results were obtained. Results. The sample size was 100 patients. All 100 had fever, sore throat, and erythema of tonsils. Twenty of the 100 patients had tonsillar exudates, 85/100 had tender anterior cervical lymph nodes, and 86/100 had no cough. In total, 9 out of the 100 patients had positive throat cultures. We observed that petechiae over the palate, a very significant sign, is not included in the Centor Criteria. Palatal petechiae were present in 8 out of the 100 patients. Six out of these 8 with palatal petechiae had positive throat culture for strep (75%). Only 7 out of 20 with exudates had positive strep culture. Sixteen out of the 100 patients had rapid strep test positive. Those 84/100 who had negative rapid strep also had negative throat culture. Statistics. We used Fisher's exact test, comparing the presence of exudates and of palatal petechiae against positive and negative throat cultures; the resultant P value was <.0001. Conclusion. 
Our study concludes that addition of petechiae over the palate to Centor Criteria will increase the possibility of diagnosing acute group A streptococcal
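    The reported Fisher's exact test can be checked against the counts given in the abstract. The 2x2 table below is reconstructed under the assumption that all 6 culture-positive petechiae patients are among the 9 total positives (so 3 positives among the 92 without petechiae); this is a sketch, not the authors' exact tabulation.

```python
from scipy.stats import fisher_exact

# Rows: palatal petechiae present / absent
# Columns: throat culture positive / negative
table = [[6, 2],    # 8 patients with petechiae, 6 culture-positive
         [3, 89]]   # 92 without petechiae, 9 - 6 = 3 culture-positive
oddsratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {oddsratio:.1f}, P = {p:.2g}")  # P < .0001
```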

  13. A Study to Determine if Addition of Palatal Petechiae to Centor Criteria Adds More Significance to Clinical Diagnosis of Acute Strep Pharyngitis in Children

    PubMed Central

    Nibhanipudi, Kumara V.

    2016-01-01

    Objective. A study to determine if addition of palatal petechiae to Centor criteria adds more value for clinical diagnosis of acute strep pharyngitis in children. Hypothesis. In children, Centor Criteria does not cover all the symptoms and signs of acute strep pharyngitis. We hypothesize that addition of palatal petechiae to Centor Criteria will increase the possibility of clinical diagnosis of group A streptococcal pharyngitis in children. Methods. One hundred patients with a complaint of sore throat were enrolled in the study. All the patients were examined clinically using the Centor Criteria. They were also examined for other signs and symptoms like petechial lesions over the palate, abdominal pain, and skin rash. All the patients were given rapid strep tests, and throat cultures were sent. No antibiotics were given until culture results were obtained. Results. The sample size was 100 patients. All 100 had fever, sore throat, and erythema of tonsils. Twenty of the 100 patients had tonsillar exudates, 85/100 had tender anterior cervical lymph nodes, and 86/100 had no cough. In total, 9 out of the 100 patients had positive throat cultures. We observed that petechiae over the palate, a very significant sign, is not included in the Centor Criteria. Palatal petechiae were present in 8 out of the 100 patients. Six out of these 8 with palatal petechiae had positive throat culture for strep (75%). Only 7 out of 20 with exudates had positive strep culture. Sixteen out of the 100 patients had rapid strep test positive. Those 84/100 who had negative rapid strep also had negative throat culture. Statistics. We used Fisher’s exact test, comparing the presence of exudates and of palatal petechiae against positive and negative throat cultures; the resultant P value was <.0001. Conclusion. 
Our study concludes that addition of petechiae over the palate to Centor Criteria will increase the possibility of diagnosing acute group A streptococcal

  14. How to read a paper. Statistics for the non-statistician. II: "Significant" relations and their pitfalls.

    PubMed Central

    Greenhalgh, T.

    1997-01-01

    It is possible to be seriously misled by taking the statistical competence (and/or the intellectual honesty) of authors for granted. Some common errors committed (deliberately or inadvertently) by the authors of papers are given in the final box. PMID:9277611

  15. Gluten-free dough-making of specialty breads: Significance of blended starches, flours and additives on dough behaviour.

    PubMed

    Collar, Concha; Conte, Paola; Fadda, Costantino; Piga, Antonio

    2015-10-01

    The capability of different gluten-free (GF) basic formulations made of flour (rice, amaranth and chickpea) and starch (corn and cassava) blends, to make machinable and viscoelastic GF-doughs in the absence/presence of single hydrocolloids (guar gum, locust bean and psyllium fibre), proteins (milk and egg white) and surfactants (neutral, anionic and vegetable oil), has been investigated. Macroscopic (high deformation) and macromolecular (small deformation) mechanical, viscometric (gelatinization, pasting, gelling) and thermal (gelatinization, melting, retrogradation) approaches were performed on the different matrices in order to (a) identify similarities and differences in GF-doughs in terms of a small number of rheological and thermal analytical parameters according to the formulations and (b) assess single and interactive effects of basic ingredients and additives on GF-dough performance to achieve GF-flat breads. Larger values for the static and dynamic mechanical characteristics and higher viscometric profiles during both cooking and cooling corresponded to doughs formulated with guar gum and psyllium fibre added to rice flour/starch and rice flour/corn starch/chickpea flour, while surfactant- and protein-formulated GF-doughs added to rice flour/starch/amaranth flour based GF-doughs exhibited intermediate and lower values for the mechanical parameters and poorer viscometric profiles. In addition, additive-free formulations exhibited higher values for the temperature of both gelatinization and retrogradation and lower enthalpies for the thermal transitions. Single addition of 10% of either chickpea flour or amaranth flour to rice flour/starch blends provided a large GF-dough hardening effect in presence of corn starch and an intermediate effect in presence of cassava starch (chickpea), and an intermediate reinforcement of GF-dough regardless of the source of starch (amaranth). At macromolecular level, both chickpea and amaranth flours, singly added, determined

  16. Trigonal pyramidal carbon geometry as model for electrophilic addition-substitution and elimination reactions and its significance in enzymatic processes

    NASA Astrophysics Data System (ADS)

    Buck, Henk M.

    Various examples are given in which compounds are characterized as products or intermediates in a (distorted) trigonal pyramidal (TP) geometry. These observations have taken place mainly in the field of carbocation chemistry. Special attention is given to carbenium ions formed by halogen addition to 1,1-diarylsubstituted ethylenes focused on the electronic effects of the C-halogen bond as axial bond in a TP geometry with regard to the π-distribution in the rest of the molecular system. The experimental verification is accompanied by quantum chemical calculations. We also used the TP structure as a reactive model for specific enzymatic reactions. The relevance of this geometry is shown for the dehalogenation reaction of the nucleophilic displacement in dichloroethane catalyzed by haloalkane dehalogenase and for the decarboxylation of L-ornithine with ornithine decarboxylase under loss of carbon dioxide.

  17. The clinical significance and management of patients with incomplete coronary angiography and the value of additional computed tomography coronary angiography.

    PubMed

    Pregowski, Jerzy; Kepka, Cezary; Kruk, Mariusz; Mintz, Gary S; Kalinczuk, Lukasz; Ciszewski, Michal; Kochanowski, Lukasz; Wolny, Rafal; Chmielak, Zbigniew; Jastrzębski, Jan; Klopotowski, Mariusz; Zalewska, Joanna; Demkow, Marcin; Karcz, Maciej; Witkowski, Adam

    2014-04-01

    To assess the anatomical background and significance of incomplete invasive coronary angiography (ICA) and to evaluate the value of coronary computed tomography angiography (CTA) in this scenario. The current study is an analysis of a high-volume center's experience with a prospective registry of coronary CTA and ICA. The target population was identified through a review of the electronic database. We included consecutive patients referred for coronary CTA after ICA, which did not visualize at least one native coronary artery or by-pass graft. Between January 2009 and April 2013, 13,603 diagnostic ICA were performed. There were 45 (0.3 %) patients referred for coronary CTA after incomplete ICA. Patients were divided into 3 groups: angina symptoms without previous coronary artery by-pass grafting (CABG) (n = 11,212), angina symptoms with previous CABG (n = 986), and patients prior to valvular surgery (n = 925). ICA did not identify by-pass grafts in 21 (2.2 %) patients and in 24 (0.2 %) cases of native arteries. The explanations for an incomplete ICA included: 11 ostium anomalies, 2 left main spasms, 5 access site problems, 5 ascending aorta aneurysms, and 2 tortuous take-off of a subclavian artery. However, in 20 (44 %) patients no specific reason for the incomplete ICA was identified. After coronary CTA revascularization was performed in 11 (24 %) patients: 6 successful repeat ICA and percutaneous intervention and 5 CABG. Incomplete ICA constitutes a rare but clinically significant problem. Coronary CTA provides adequate clinical information in these patients. PMID:24623270

  18. Analysis/plot generation code with significance levels computed using Kolmogorov-Smirnov statistics valid for both large and small samples

    SciTech Connect

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    This report describes a version of the TERPED/P computer code that is very useful for small data sets. A new algorithm for determining the Kolmogorov-Smirnov (KS) statistics is used to extend program applicability. The TERPED/P code facilitates the analysis of experimental data and assists the user in determining its probability distribution function. Graphical and numerical tests are performed interactively in accordance with the user's assumption of normally or log-normally distributed data. Statistical analysis options include computation of the chi-square statistic and the KS one-sample test statistic and the corresponding significance levels. Cumulative probability plots of the user's data are generated either via a local graphics terminal, a local line printer or character-oriented terminal, or a remote high-resolution graphics device such as the FR80 film plotter or the Calcomp paper plotter. Several useful computer methodologies suffer from limitations of their implementations of the KS nonparametric test. This test is one of the more powerful analysis tools for examining the validity of an assumption about the probability distribution of a set of data. KS algorithms are found in other analysis codes, including the Statistical Analysis Subroutine (SAS) package and earlier versions of TERPED. The inability of these algorithms to generate significance levels for sample sizes less than 50 has limited their usefulness. The release of the TERPED code described herein contains algorithms to allow computation of the KS statistic and significance level for data sets of as few as three points, if the user wishes. Values computed for the KS statistic are within 3% of the correct value for all data set sizes.
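    The small-sample capability described above can be sketched with SciPy's one-sample KS test, which also computes an exact significance level for very small samples. The three data values below are arbitrary, chosen only to show the call.

```python
from scipy.stats import kstest

# A three-point sample tested against a standard normal distribution;
# for small n, scipy's default mode computes an exact p-value.
data = [-0.5, 0.1, 0.7]
result = kstest(data, "norm")
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```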

  19. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care – Illustrated Using the Swedish Stroke Register

    PubMed Central

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    Background When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. Methods The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008–2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Results Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. 
Conclusions The study emphasizes the importance of combining clinical relevance

  20. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification

    PubMed Central

    Gaonkar, Bilwaj; Davatzikos, Christos

    2013-01-01

    Multivariate pattern analysis (MVPA) methods such as support vector machines (SVMs) have been increasingly applied to fMRI and sMRI analyses, enabling the detection of distinctive imaging patterns. However, identifying brain regions that significantly contribute to the classification/group separation requires computationally expensive permutation testing. In this paper we show that the results of SVM-permutation testing can be analytically approximated. This approximation leads to more than a thousand fold speed up of the permutation testing procedure, thereby rendering it feasible to perform such tests on standard computers. The speed up achieved makes SVM based group difference analysis competitive with standard univariate group difference analysis methods. PMID:23583748
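    The expensive baseline that the paper approximates analytically can be sketched as follows: refit the SVM many times with permuted labels and compare each feature's observed weight to its permutation distribution. This is a toy example on synthetic data (the analytic approximation itself is not reproduced here; sample sizes, the separation of 3.0, and B = 200 are arbitrary).

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d = 60, 5
X = rng.standard_normal((n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, 0] += 3.0  # only feature 0 separates the groups

w_obs = np.abs(LinearSVC(max_iter=5000).fit(X, y).coef_.ravel())

B = 200
exceed = np.zeros(d)
for _ in range(B):  # the computationally expensive step
    w_perm = np.abs(LinearSVC(max_iter=5000)
                    .fit(X, rng.permutation(y)).coef_.ravel())
    exceed += w_perm >= w_obs

p_map = (exceed + 1) / (B + 1)  # per-feature permutation p-values
print(p_map)
```

    On real fMRI/sMRI data d is in the tens of thousands and B in the thousands, which is why an analytic approximation yields the thousand-fold speed-up the abstract reports.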

  1. t-Test at the Probe Level: An Alternative Method to Identify Statistically Significant Genes for Microarray Data

    PubMed Central

    Boareto, Marcelo; Caticha, Nestor

    2014-01-01

    Microarray data analysis typically consists in identifying a list of differentially expressed genes (DEG), i.e., the genes that are differentially expressed between two experimental conditions. Variance shrinkage methods have been considered a better choice than the standard t-test for selecting the DEG because they correct the dependence of the error with the expression level. This dependence is mainly caused by errors in background correction, which more severely affects genes with low expression values. Here, we propose a new method for identifying the DEG that overcomes this issue and does not require background correction or variance shrinkage. Unlike current methods, our methodology is easy to understand and implement. It consists of applying the standard t-test directly on the normalized intensity data, which is possible because the probe intensity is proportional to the gene expression level and because the t-test is scale- and location-invariant. This methodology considerably improves the sensitivity and robustness of the list of DEG when compared with the t-test applied to preprocessed data and to the most widely used shrinkage methods, Significance Analysis of Microarrays (SAM) and Linear Models for Microarray Data (LIMMA). Our approach is useful especially when the genes of interest have small differences in expression and therefore get ignored by standard variance shrinkage methods.
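    A minimal sketch of the proposed approach: the standard t-test applied row-wise to normalized probe intensities, with no background correction and no variance shrinkage. The data are simulated (one probe row gets an injected expression difference); sizes and the seed are arbitrary.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_probes, n_reps = 1000, 8

# Simulated normalized log-intensities for two experimental conditions
cond_a = rng.normal(8.0, 1.0, size=(n_probes, n_reps))
cond_b = rng.normal(8.0, 1.0, size=(n_probes, n_reps))
cond_b[0] += 5.0  # inject one differentially expressed probe

# Standard t-test applied directly to each probe's intensities
t, p = ttest_ind(cond_a, cond_b, axis=1)
print(f"injected probe: p = {p[0]:.2g}")
```

    In practice the resulting probe-level p-values would still need multiple-testing correction before a DEG list is declared.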

  2. Testing for Additivity in Chemical Mixtures Using a Fixed-Ratio Ray Design and Statistical Equivalence Testing Methods

    EPA Science Inventory

    Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...

  3. Statistical evaluation of the significance of the influence of abrupt changes in solar activity on the dynamics of the epidemic process

    NASA Technical Reports Server (NTRS)

    Druzhinin, I. P.; Khamyanova, N. V.; Yagodinskiy, V. N.

    1974-01-01

    Statistical evaluations of the significance of the relationship of abrupt changes in solar activity and discontinuities in the multi-year pattern of an epidemic process are reported. They reliably (with probability of more than 99.9%) show the real nature of this relationship and its great specific weight (about half) in the formation of discontinuities in the multi-year pattern of the processes in question.

  4. Multi-spectral detection of statistically significant components in pre-seismic electromagnetic emissions related with Athens 1999, M = 5.9 earthquake

    NASA Astrophysics Data System (ADS)

    Kalimeris, A.; Potirakis, S. M.; Eftaxias, K.; Antonopoulos, G.; Kopanas, J.; Nomikos, C.

    2016-05-01

    A multi-spectral analysis of the kHz electromagnetic time series associated with the Athens earthquake (M = 5.9, 7 September 1999) is presented here, resulting in the reliable discrimination of the fracto-electromagnetic emissions from the natural geo-electromagnetic field background. Five spectral analysis methods are utilized in order to resolve the statistically significant variability modes of the studied dynamical system out of a red noise background (the revised Multi-Taper Method, the Singular Spectrum Analysis, and the Wavelet Analysis among them). The performed analysis reveals the existence of three distinct epochs in the time series for the period before the earthquake, a "quiet", a "transitional" and an "active" epoch. Towards the end of the active epoch, during a sub-period which starts approximately two days before the earthquake, the dynamical system passes into a high activity state, where electromagnetic signal emissions become powerful and statistically significant almost in all time-scales. The temporal behavior of the studied system in each one of these epochs is further searched through mathematical reconstruction in the time domain of those spectral features that were found to be statistically significant. The transition of the system from the quiet to the active state proved to be detectable first in the long time-scales and afterwards in the short scales. Finally, a Hurst exponent analysis revealed persistent characteristics embedded in the two strong EM bursts observed during the "active" epoch.
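    The Hurst exponent analysis mentioned at the end can be sketched with a basic rescaled-range (R/S) estimator: H ≈ 0.5 indicates uncorrelated noise, H > 0.5 persistence. This is a simplified version of such an analysis; the window sizes and the test series are illustrative.

```python
import numpy as np

def hurst_rs(x):
    """Estimate the Hurst exponent of a series via rescaled-range
    (R/S) analysis: slope of log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    windows = np.unique(np.logspace(1, np.log10(n // 2), 10).astype(int))
    log_w, log_rs = [], []
    for w in windows:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())   # cumulative deviations
            r = z.max() - z.min()             # range
            s = seg.std(ddof=1)               # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    H, _ = np.polyfit(log_w, log_rs, 1)
    return H

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)
print(f"H(white noise) ~ {hurst_rs(white):.2f}")   # ~0.5-0.6 (R/S is biased upward for short series)
print(f"H(random walk) ~ {hurst_rs(np.cumsum(white)):.2f}")  # strongly persistent
```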

  5. A novel pairwise comparison method for in silico discovery of statistically significant cis-regulatory elements in eukaryotic promoter regions: application to Arabidopsis.

    PubMed

    Shamloo-Dashtpagerdi, Roohollah; Razi, Hooman; Aliakbari, Massumeh; Lindlöf, Angelica; Ebrahimi, Mahdi; Ebrahimie, Esmaeil

    2015-01-01

    Cis regulatory elements (CREs), located within promoter regions, play a significant role in the blueprint for transcriptional regulation of genes. There is a growing interest to study the combinatorial nature of CREs including presence or absence of CREs, the number of occurrences of each CRE, as well as their order and location relative to their target genes. Comparative promoter analysis has been shown to be a reliable strategy to test the significance of each component of promoter architecture. However, it remains unclear what level of difference in the number of occurrences of each CRE is of statistical significance in order to explain different expression patterns of two genes. In this study, we present a novel statistical approach for pairwise comparison of promoters of Arabidopsis genes in the context of number of occurrences of each CRE within the promoters. First, using the sample of 1000 Arabidopsis promoters, the results of the goodness of fit test and non-parametric analysis revealed that the number of occurrences of CREs in a promoter sequence is Poisson distributed. As a promoter sequence contains both functional and non-functional CREs, we addressed the issue of the statistical distribution of functional CREs by analyzing the ChIP-seq datasets. The results showed that the number of occurrences of functional CREs over the genomic regions was also found to be Poisson distributed. In accordance with the obtained distribution of CREs occurrences, we suggested the Audic and Claverie (AC) test to compare two promoters based on the number of occurrences for the CREs. Superiority of the AC test over Chi-square (2×2) and Fisher's exact tests was also shown, as the AC test was able to detect a higher number of significant CREs. The two case studies on the Arabidopsis genes were performed in order to biologically verify the pairwise test for promoter comparison. Consequently, a number of CREs with significantly different occurrences was identified between
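    The Audic and Claverie (AC) test has a compact form when the two promoter regions searched are of equal length: given x occurrences of a CRE in one promoter, the count Y in the other follows a negative binomial with r = x + 1 and p = 1/2 under the null of equal rates. A sketch of the one-sided p-value for this equal-length case (the count values below are illustrative):

```python
from scipy.stats import nbinom

def ac_pvalue(x, y):
    """One-sided Audic-Claverie p-value P(Y >= y | X = x) for CRE
    counts in two equal-length promoter regions: under the AC model,
    P(Y = y | x) = C(x + y, y) / 2^(x + y + 1), i.e. Y | x follows a
    negative binomial with r = x + 1 and p = 1/2."""
    return nbinom.sf(y - 1, x + 1, 0.5)

print(ac_pvalue(2, 12))  # clearly different occurrence counts
print(ac_pvalue(5, 6))   # similar counts
```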

  6. IMGT/HighV-QUEST Statistical Significance of IMGT Clonotype (AA) Diversity per Gene for Standardized Comparisons of Next Generation Sequencing Immunoprofiles of Immunoglobulins and T Cell Receptors.

    PubMed

    Aouinti, Safa; Malouche, Dhafer; Giudicelli, Véronique; Kossida, Sofia; Lefranc, Marie-Paule

    2015-01-01

    The adaptive immune responses of humans and of other jawed vertebrate species (gnathostomata) are characterized by the B and T cells and their specific antigen receptors, the immunoglobulins (IG) or antibodies and the T cell receptors (TR) (up to 2 × 10¹² different IG and TR per individual). IMGT, the international ImMunoGeneTics information system (http://www.imgt.org), was created in 1989 by Marie-Paule Lefranc (Montpellier University and CNRS) to manage the huge and complex diversity of these antigen receptors. IMGT, built on the IMGT-ONTOLOGY concepts of identification (keywords), description (labels), classification (gene and allele nomenclature) and numerotation (IMGT unique numbering), is at the origin of immunoinformatics, a science at the interface between immunogenetics and bioinformatics. IMGT/HighV-QUEST, the first web portal, and so far the only one, for the next generation sequencing (NGS) analysis of IG and TR, is the paradigm for immune repertoire standardized outputs and immunoprofiles of the adaptive immune responses. It provides the identification of the variable (V), diversity (D) and joining (J) genes and alleles, analysis of the V-(D)-J junction and complementarity determining region 3 (CDR3), and the characterization of 'IMGT clonotype (AA)' (AA for amino acid) diversity and expression. IMGT/HighV-QUEST compares outputs of different batches, up to one million nucleotide sequences for the statistical module. These high-throughput IG and TR repertoire immunoprofiles are of prime importance in vaccination, cancer, infectious diseases, autoimmunity and lymphoproliferative disorders; however, their comparative statistical analysis still remains a challenge. We present a standardized statistical procedure to analyze IMGT/HighV-QUEST outputs for the evaluation of the significance of the IMGT clonotype (AA) diversity differences in proportions, per gene of a given group, between NGS IG and TR repertoire immunoprofiles. The procedure is generic and
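
    The per-gene comparison of clonotype diversity proportions between two immunoprofiles reduces, at its core, to a test for a difference of two proportions (the published procedure adds standardization and multiple-testing control across genes). A minimal sketch using a pooled two-proportion z-test; the gene name and all counts are hypothetical:

```python
from math import sqrt, erfc

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided p-value for H0: p1 == p2, using the pooled-variance
    z statistic for proportions k1/n1 and k2/n2."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # equals 2 * (1 - Phi(|z|))

# hypothetical: clonotypes assigned to IGHV1-2, out of all clonotypes,
# in two NGS repertoires
p = two_proportion_z(120, 10000, 180, 12000)
```

    In a real analysis, one such test would be run per gene of the group, followed by a correction such as Benjamini-Hochberg across the gene set.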

  7. SU-F-BRD-05: Dosimetric Comparison of Protocol-Based SBRT Lung Treatment Modalities: Statistically Significant VMAT Advantages Over Fixed-Beam IMRT

    SciTech Connect

    Best, R; Harrell, A; Geesey, C; Libby, B; Wijesooriya, K

    2014-06-15

    Purpose: The purpose of this study is to inter-compare and find statistically significant differences between flattened-field fixed-beam (FB) IMRT and flattening-filter-free (FFF) volumetric modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT). Methods: SBRT plans using FB IMRT and FFF VMAT were generated for fifteen SBRT lung patients using 6 MV beams. For each patient, both IMRT and VMAT plans were created for comparison. Plans were generated utilizing RTOG 0915 (peripheral, 10 patients) and RTOG 0813 (medial, 5 patients) lung protocols. Target dose, critical structure dose, and treatment time were compared and tested for statistical significance. Parameters of interest included prescription isodose surface coverage, target dose heterogeneity, high dose spillage (location and volume), low dose spillage (location and volume), lung dose spillage, and critical structure maximum- and volumetric-dose limits. Results: For all criteria, we found equivalent or higher conformality with VMAT plans as well as reduced critical structure doses. Several differences passed a Student's t-test of significance: VMAT reduced the high dose spillage, evaluated with conformality index (CI), by an average of 9.4%±15.1% (p=0.030) compared to IMRT. VMAT plans reduced the lung volume receiving 20 Gy by 16.2%±15.0% (p=0.016) compared with IMRT. For the RTOG 0915 peripheral lesions, the volumes of lung receiving 12.4 Gy and 11.6 Gy were reduced by 27.0%±13.8% and 27.5%±12.6% (for both, p<0.001) in VMAT plans. Of the 26 protocol pass/fail criteria, VMAT plans were able to achieve an average of 0.2±0.7 (p=0.026) more constraints than the IMRT plans. Conclusions: FFF VMAT has dosimetric advantages over fixed-beam IMRT for lung SBRT. Significant advantages included increased dose conformity and reduced organs-at-risk doses. The overall improvements in terms of protocol pass/fail criteria were more modest and will require more patient data to establish differences.
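
    The per-metric significance tests above are paired comparisons across the fifteen patients. A sketch of the paired t statistic; the conformality-index values below are invented for illustration, and |t| would be compared with the two-sided 5% critical value of roughly 2.145 for 14 degrees of freedom:

```python
from math import sqrt

def paired_t_stat(a, b):
    """t statistic of a paired two-sample t-test (per-patient differences)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / sqrt(var / n)

# hypothetical conformality indices for 15 patients under each modality
imrt = [1.25, 1.18, 1.30, 1.22, 1.27, 1.19, 1.24, 1.31,
        1.21, 1.26, 1.28, 1.17, 1.23, 1.29, 1.20]
vmat = [1.12, 1.10, 1.21, 1.15, 1.18, 1.11, 1.16, 1.22,
        1.13, 1.17, 1.19, 1.09, 1.14, 1.20, 1.12]
t = paired_t_stat(imrt, vmat)  # compare |t| against t_crit(df=14) ~ 2.145
```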

  8. CorSig: A General Framework for Estimating Statistical Significance of Correlation and Its Application to Gene Co-Expression Analysis

    PubMed Central

    Wang, Hong-Qiang; Tsai, Chung-Jui

    2013-01-01

    With the rapid increase of omics data, correlation analysis has become an indispensable tool for inferring meaningful associations from large numbers of observations. The Pearson correlation coefficient (PCC) and its variants are widely used for such purposes. However, it remains challenging to test whether an observed association is reliable both statistically and biologically. We present here a new method, CorSig, for statistical inference of correlation significance. CorSig is based on a biology-informed null hypothesis, i.e., testing whether the true PCC (ρ) between two variables is statistically larger than a user-specified PCC cutoff (τ), as opposed to the simple null hypothesis of ρ = 0 in existing methods, i.e., testing whether an association can be declared without a threshold. CorSig incorporates Fisher's Z transformation of the observed PCC (r), which facilitates the use of standard techniques for p-value computation and multiple testing corrections. We compared CorSig against two methods: one uses a minimum PCC cutoff, while the other (Zhu's procedure) controls correlation strength and statistical significance in two discrete steps. CorSig consistently outperformed these methods in various simulated data scenarios by balancing false positives and false negatives. When tested on real-world Populus microarray data, CorSig effectively identified co-expressed genes in the flavonoid pathway, and discriminated between closely related gene family members for their differential association with the flavonoid and lignin pathways. The p-values obtained by CorSig can be used as a stand-alone parameter for stratification of co-expressed genes according to their correlation strength, in lieu of an arbitrary cutoff. CorSig requires a single tunable parameter and can be readily extended to other correlation measures. Thus, CorSig should be useful for a wide range of applications, particularly for network analysis of high-dimensional genomic data. Software
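
    The core of CorSig, testing H0: ρ ≤ τ against H1: ρ > τ via Fisher's Z transform, under which z(r) is approximately normal with variance 1/(n-3), can be sketched as follows. This is a simplified reconstruction from the abstract, not the published implementation:

```python
from math import log, sqrt, erfc

def fisher_z(x):
    """Fisher's Z transformation of a correlation coefficient."""
    return 0.5 * log((1 + x) / (1 - x))

def corsig_pvalue(r, n, tau=0.5):
    """One-sided p-value for H0: rho <= tau vs H1: rho > tau,
    using z(r) ~ N(z(rho), 1/(n-3))."""
    stat = (fisher_z(r) - fisher_z(tau)) * sqrt(n - 3)
    return 0.5 * erfc(stat / sqrt(2))  # P(Z > stat) for standard normal Z

# observed r = 0.8 over n = 50 samples, against cutoff tau = 0.5
p = corsig_pvalue(0.8, 50, tau=0.5)
```

    The resulting per-pair p-values can then be fed into any standard multiple-testing correction, as the abstract notes.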

  9. Tribological characteristics of bisphenol AF bis(diphenyl phosphate) as an antiwear additive in polyalkylene glycol and polyurea grease for significantly improved lubrication

    NASA Astrophysics Data System (ADS)

    Zhu, Lili; Wu, Xinhu; Zhao, Gaiqing; Wang, Xiaobo

    2016-02-01

    A new antiwear additive, bisphenol AF bis(diphenyl phosphate) (BAFDP), was synthesized and characterized. The tribological behaviors of the additive in polyalkylene glycol (PAG) and polyurea grease (PG) for steel/steel contacts were evaluated on an Optimol SRV-IV oscillating reciprocating friction and wear tester at elevated temperature. The results revealed that BAFDP could drastically reduce friction and wear of sliding pairs in both PAG and PG at 100 °C. The tribological properties of BAFDP are superior to those of the commonly used zinc dialkyldithiophosphate-based additive package (ZDDP) in PAG and PG. Moreover, BAFDP as an additive for PAG and PG displays significant tribological benefits in temperature-ramp tests, performing well at 50-300 °C, indicating the excellent high-temperature friction reduction and anti-wear capacity of BAFDP. XPS results showed that boundary lubrication films composed of Fe(OH)O, Fe3O4, FePO4, FeF2, FeF3, compounds containing P-O bonds, nitrogen oxide, and so forth, were formed on the worn surface, which contributed to excellent friction reduction and antiwear performance.

  10. Quantifying garnet-melt trace element partitioning using lattice-strain theory: assessment of statistically significant controls and a new predictive model

    NASA Astrophysics Data System (ADS)

    Draper, David S.; van Westrenen, Wim

    2007-12-01

    As a complement to our efforts to update and revise the thermodynamic basis for predicting garnet-melt trace element partitioning using lattice-strain theory (van Westrenen and Draper in Contrib Mineral Petrol, this issue), we have performed detailed statistical evaluations of possible correlations between intensive and extensive variables and experimentally determined garnet-melt partitioning values for trivalent cations (rare earth elements, Y, and Sc) entering the dodecahedral garnet X-site. We applied these evaluations to a database containing over 300 partition coefficient determinations, compiled both from literature values and from our own work designed in part to expand that database. Available data include partitioning measurements in ultramafic to basaltic to intermediate bulk compositions, and recent studies in Fe-rich systems relevant to extraterrestrial petrogenesis, at pressures sufficiently high that a significant component of majorite, the high-pressure form of garnet, is present. Through the application of lattice-strain theory, we obtained best-fit values for the ideal ionic radius of the dodecahedral garnet X-site, r₀(3+), its apparent Young's modulus E(3+), and the strain-free partition coefficient D₀(3+) for a fictive REE element J of ionic radius r₀(3+). Resulting values of E, D₀, and r₀ were used in multiple linear regressions involving sixteen variables that reflect the possible influence of garnet composition and stoichiometry, melt composition and structure, major-element partitioning, pressure, and temperature. We find no statistically significant correlations between fitted r₀ and E values and any combination of variables. However, a highly robust correlation between fitted D₀ and garnet-melt Fe-Mg exchange and D(Mg) is identified. The identification of a more explicit melt-compositional influence is a first for this type of predictive modeling. We combine this statistically derived expression for predicting D₀ with the new

  11. Significance of Additional Non-Mass Enhancement in Patients with Breast Cancer on Preoperative 3T Dynamic Contrast Enhanced MRI of the Breast

    PubMed Central

    Cho, Yun Hee; Cho, Kyu Ran; Park, Eun Kyung; Seo, Bo Kyoung; Woo, Ok Hee; Cho, Sung Bum; Bae, Jeoung Won

    2016-01-01

    Background In preoperative assessment of breast cancer, MRI has been shown to identify more additional breast lesions than are detectable using conventional imaging techniques. The characterization of additional lesions is more important than detection for optimal surgical treatment. Additional breast lesions can be included in focus, mass, and non-mass enhancement (NME) on MRI. Although the fifth edition of the breast imaging reporting and data system (BI-RADS®) includes several changes in the NME descriptors, few studies to date have evaluated NME in preoperative assessment of breast cancer. Objectives We investigated the diagnostic accuracy of BI-RADS descriptors in predicting malignancy for additional NME lesions detected on preoperative 3T dynamic contrast enhanced MRI (DCE-MRI) in patients with newly diagnosed breast cancer. Patients and Methods Between January 2008 and December 2012, 88 patients were enrolled in our study, all with NME lesions other than the index cancer on preoperative 3T DCE-MRI and all with accompanying histopathologic examination. The MRI findings were analyzed according to the BI-RADS MRI lexicon. We evaluated the size, distribution, internal enhancement pattern, and location of NME lesions relative to the index cancer (i.e., same quadrant, different quadrant, or contralateral breast). Results On histopathologic analysis of the 88 NME lesions, 73 (83%) were malignant and 15 (17%) were benign. Lesion size did not differ significantly between malignant and benign lesions (P = 0.410). Malignancy was more frequent in linear (P = 0.005) and segmental (P = 0.011) distributions, and benignity was more frequent in focal (P = 0.004) and regional (P < 0.001) NME lesions. The highest positive predictive values (PPV) for malignancy occurred in segmental (96.8%), linear (95.1%), clustered ring (100%), and clumped (92.0%) enhancement. Asymmetry demonstrated a high positive predictive value of 85.9%. 
The frequency of malignancy was higher

  12. Addition of a third field significantly increases dose to the brachial plexus for patients undergoing tangential whole-breast therapy after lumpectomy

    SciTech Connect

    Stanic, Sinisa; Mathai, Mathew; Mayadev, Jyoti S.; Do, Ly V.; Purdy, James A.; Chen, Allen M.

    2012-07-01

    Our goal was to evaluate brachial plexus (BP) dose with and without the use of supraclavicular (SCL) irradiation in patients undergoing breast-conserving therapy with whole-breast radiation therapy (RT) after lumpectomy. Using the standardized Radiation Therapy Oncology Group (RTOG)-endorsed delineation guidelines, we contoured the BP for 10 postlumpectomy breast cancer patients. The radiation dose to the whole breast was 50.4 Gy using tangential fields in 1.8-Gy fractions, followed by a conedown to the operative bed using electrons (10 Gy). The prescription dose to the SCL field was 50.4 Gy, delivered to 3-cm depth. The mean BP volume was 14.5 ± 1.5 cm³. With tangential fields alone, the median mean dose to the BP was 0.57 Gy, the median maximum dose was 1.93 Gy, and the irradiated volume of the BP receiving 40, 45, and 50 Gy was 0%. When the third (SCL) field was added, the dose to the BP was significantly increased (P = .01): the median mean dose to the BP was 40.60 Gy, and the median maximum dose was 52.22 Gy. With 3-field RT, the median irradiated volume of the BP receiving 40, 45, and 50 Gy was 83.5%, 68.5%, and 24.6%, respectively. The addition of the SCL field significantly increases dose to the BP. The possibility of increasing the risk of BP morbidity should be considered in the context of clinical decision making.

  13. Multiparametric PET/CT-perfusion does not add significant additional information for initial staging in lung cancer compared with standard PET/CT

    PubMed Central

    2014-01-01

    Background The purpose of this study was to assess the relationship of CT-perfusion (CTP), 18F-FDG-PET/CT and histological parameters, and the possible added value of CTP to FDG-PET/CT in the initial staging of lung cancer. Methods Fifty-four consecutive patients (median age 65 years, 15 females, 39 males) with suspected lung cancer were evaluated prospectively by CT-perfusion scan and 18F-FDG-PET/CT scan. Overall, 46 tumors were identified. CTP parameters blood flow (BF), blood volume (BV), and mean transit time (MTT) of the tumor tissue were calculated. Intratumoral microvessel density (MVD) was assessed quantitatively. Differences in CTP parameters concerning tumor type, location, PET positivity of lymph nodes, TNM status, and UICC stage were analyzed. Spearman correlation analyses between CTP and 18F-FDG-PET/CT parameters (SUVmax, SUVmean, PETvol, and TLG), MVD, tumor size, and tumor stage were performed. Results The mean BF (mL/100 mL/min), BV (mL/100 mL), and MTT (s) were 35.5, 8.4, and 14.2, respectively. The BF and BV were lower in tumors with PET-positive lymph nodes (p = 0.02). However, the CTP values were not significantly different among the N stages. The CTP values did not differ with tumor size or location. No significant correlation was found between CTP parameters and MVD. Conclusions Overall, the CTP information provided only little additional value for the initial staging compared with standard FDG-PET/CT. Low perfusion in lung tumors might possibly be associated with metabolically active regional lymph nodes. Apart from that, both CTP and 18F-FDG-PET/CT parameter sets may reflect different pathophysiological mechanisms in lung cancer. PMID:24450990
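
    The Spearman correlation analyses between CTP and PET parameters can be reproduced with a short stdlib implementation (tied values receive their average rank); the per-tumor values in the usage lines are invented:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with tied values given their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1          # average rank over the tie run
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical per-tumor blood flow (BF) vs. PET SUVmax values
bf = [22.1, 35.5, 41.0, 28.3, 50.2, 33.7]
suvmax = [4.2, 7.9, 9.1, 5.0, 11.3, 6.8]
rho = spearman_rho(bf, suvmax)  # 1.0 here: the lists are monotone by construction
```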

  14. Bayesian Statistics.

    ERIC Educational Resources Information Center

    Meyer, Donald L.

    Bayesian statistical methodology and its possible uses in the behavioral sciences are discussed in relation to the solution of problems in both the use and teaching of fundamental statistical methods, including confidence intervals, significance tests, and sampling. The Bayesian model explains these statistical methods and offers a consistent…

  15. An additional fluorenylmethoxycarbonyl (Fmoc) moiety in di-Fmoc-functionalized L-lysine induces pH-controlled ambidextrous gelation with significant advantages.

    PubMed

    Reddy, Samala Murali Mohan; Shanmugam, Ganesh; Duraipandy, Natarajan; Kiran, Manikantan Syamala; Mandal, Asit Baran

    2015-11-01

    In recent years, several fluorenylmethoxycarbonyl (Fmoc)-functionalized amino acids and peptides have been used to construct hydrogels, which find a wide range of applications. Although several hydrogels have been prepared from mono Fmoc-functionalized amino acids, herein, we demonstrate the importance of an additional Fmoc-moiety in the hydrogelation of double Fmoc-functionalized L-lysine [Fmoc(Nα)-L-lysine(NεFmoc)-OH, (Fmoc-K(Fmoc))] as a low molecular weight gelator (LMWG). Unlike other Fmoc-functionalized amino acid gelators, Fmoc-K(Fmoc) exhibits pH-controlled ambidextrous gelation (hydrogelation at different pH values as well as organogelation), which is significant among the gelators. Distinct fibrous morphologies were observed for Fmoc-K(Fmoc) hydrogels formed at different pH values, which are different from organogels in which Fmoc-K(Fmoc) showed bundles of long fibers. In both hydrogels and organogels, the self-assembly of Fmoc-K(Fmoc) was driven by aromatic π-π stacking and hydrogen bonding interactions, as evidenced from spectroscopic analyses. Characterization of Fmoc-K(Fmoc) gels using several biophysical methods indicates that Fmoc-K(Fmoc) has several advantages and significant importance as a LMWG. The advantages of Fmoc-K(Fmoc) include pH-controlled ambidextrous gelation, pH stimulus response, high thermal stability (∼100 °C) even at low minimum hydrogelation concentration (0.1 wt%), thixotropic property, high kinetic and mechanical stability, dye removal properties, cell viability to the selected cell type, and as a drug carrier. While single Fmoc-functionalized L-lysine amino acids failed to exhibit gelation under similar experimental conditions, the pH-controlled ambidextrous gelation of Fmoc-K(Fmoc) demonstrates the benefit of a second Fmoc moiety in inducing gelation in a LMWG. We thus strongly believe that the current findings provide a lead to construct or design various new synthetic Fmoc-based LMW organic gelators for several

  16. [Intravesical instillation of bacillus Calmette-Guerin for superficial bladder carcinoma: study on significance of additional maintenance instillations of bacillus Calmette-Guerin].

    PubMed

    Yabusaki, N; Komatsu, H; Tago, K; Yamada, Y; Ueno, A

    1991-02-01

    The efficacy of maintenance bacillus Calmette-Guerin (BCG) instillations for superficial bladder tumors was studied in a prospective randomized trial. From June 1985 to October 1988, 42 newly diagnosed patients with superficial bladder carcinoma (pTa or pT1) were treated by transurethral tumor resection and subsequent five daily instillations of mitomycin C. They were then divided into a non-maintenance group (22 patients) and a maintenance group (20 patients) by randomization. All patients received six weekly instillations of 80 mg of BCG Tokyo strain (Japan BCG Manufacturing Co., Tokyo, Japan) suspended in 40 ml of physiological saline, and the patients in the maintenance group received four additional instillations of BCG every three months. We could not complete the six-week course of BCG instillations in three patients due to adverse effects (two in the non-maintenance group and one in the maintenance group), and we lost six patients to follow-up within one year (one in the non-maintenance group and five in the maintenance group). The mean follow-up period of the remaining 33 patients was 28.1 months. Of these 33 patients, six had been found to have recurrent tumors, and the overall three-year non-recurrence rate was 82%. Before employing BCG, when we used only mitomycin C after TUR-Bt, the three-year non-recurrence rate was 58%. This indicates a prophylactic effect of BCG instillations. The initial tumors of all six recurrent cases were stage pT1b. The non-recurrence rate of the patients with pT1b tumors was significantly lower than that of the patients with pTa and pT1a tumors. However, multiplicity and grade of tumors did not affect the non-recurrence rate. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1904120

  17. CHOICE OF INDICATOR DETERMINES THE SIGNIFICANCE AND RISK OBTAINED FROM THE STATISTICAL ASSOCIATION BETWEEN FINE PARTICULATE MATTER MASS AND CARDIOVASCULAR MORTALITY

    EPA Science Inventory

    Minor changes in the indicator used to measure fine PM, which cause only modest changes in Mass concentrations, can lead to dramatic changes in the statistical relationship of fine PM mass with cardiovascular mortality. An epidemiologic study in Phoenix (Mar et al., 2000), augme...

  18. Percentage of Biopsy Cores Positive for Malignancy and Biochemical Failure Following Prostate Cancer Radiotherapy in 3,264 Men: Statistical Significance Without Predictive Performance

    SciTech Connect

    Williams, Scott G. Buyyounouski, Mark K.; Pickles, Tom; Kestin, Larry; Martinez, Alvaro; Hanlon, Alexandra L.; Duchesne, Gillian M.

    2008-03-15

    Purpose: To define and incorporate the impact of the percentage of positive biopsy cores (PPC) into a predictive model of prostate cancer radiotherapy biochemical outcome. Methods and Materials: The data of 3264 men with clinically localized prostate cancer treated with external beam radiotherapy at four institutions were retrospectively analyzed. Standard prognostic and treatment factors plus the number of biopsy cores collected and the number positive for malignancy by transrectal ultrasound-guided biopsy were available. The primary endpoint was biochemical failure (bF, Phoenix definition). Multivariate proportional hazards analyses were performed and expressed as a nomogram and the model's predictive ability assessed using the concordance index (c-index). Results: The cohort consisted of 21% low-, 51% intermediate-, and 28% high-risk cancer patients, and 30% had androgen deprivation with radiotherapy. The median PPC was 50% (interquartile range [IQR] 29-67%), and median follow-up was 51 months (IQR 29-71 months). Percentage of positive biopsy cores displayed an independent association with the risk of bF (p = 0.01), as did age, prostate-specific antigen value, Gleason score, clinical stage, androgen deprivation duration, and radiotherapy dose (p < 0.001 for all). Including PPC increased the c-index from 0.72 to 0.73 in the overall model. The influence of PPC varied significantly with radiotherapy dose and clinical stage (p = 0.02 for both interactions), with doses <66 Gy and palpable tumors showing the strongest relationship between PPC and bF. Intermediate-risk patients were poorly discriminated regardless of PPC inclusion (c-index 0.65 for both models). Conclusions: Outcome models incorporating PPC show only minor additional ability to predict biochemical failure beyond those containing standard prognostic factors.

  19. Addition of an N-terminal epitope tag significantly increases the activity of plant fatty acid desaturases expressed in yeast cells

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Saccharomyces cerevisiae shows great potential for development of bioreactor systems geared towards the production of high-value lipids such as polyunsaturated omega-3 fatty acids, the yields of which are largely dependent on the activity of ectopically-expressed enzymes. Here we show that the addit...

  20. Dodecahedranes and the significance of nuclear spin statistics for substructures under SU(m)↓SO(3) × L20 duality, within the specialised Racah symmetry chains for NMR

    NASA Astrophysics Data System (ADS)

    Temme, F. P.

    1992-12-01

    Realisation of the invariance properties of the p ⩽ 2 number partitional inventory components of the 20-fold spin algebra associated with [A]20 nuclear spin clusters under SU(2) × L20 allows the mappings {[λ] → Γ} to be derived. In addition, recent general inner tensor product expressions under Ln, for n even (odd), also facilitate the evaluation of many higher [λ] (L20; p = 3) correlative mappings onto the SU(3)↓SO(3) × L20↓A5 subduced symmetry from SU(2) duality, thus providing results that determine the nature of adapted NMR bases for both dodecahedrane and its d20 analogue. The significance of this work lies in the pertinence of nuclear spin statistics to both selective MQ-NMR and to other spectroscopic aspects of cage clusters, e.g., [13C]n, n = 20, 60, fullerenes. Mappings onto Ln irrep sets of specific p ⩽ 3 number partitions arise in the combinatorial treatment of {Mi ti} Rota fields, defining scalar invariants in the context of Cayley algebra. Inclusion of the Ln group in the specific Racah chain for NMR symmetry gives rise to significant further physical insight.

  1. Phase stabilization of magnetite (Fe3O4) nanoparticles with B2O3 addition: A significant enhancement on the phase transition temperature

    NASA Astrophysics Data System (ADS)

    Topal, Uğur; Aksan, Mehmet Ali

    2016-05-01

    Magnetite nanoparticles (MNPs) are extensively investigated for biomedical applications, particularly as contrast agents for magnetic resonance imaging and as drug delivery agents and heat mediators for cancer therapy. Tuning the magnetic properties of magnetite nanoparticles by doping with foreign atoms is of crucial importance in determining the application areas of these materials and so attracts much interest. On the other hand, doping with foreign atoms requires high-temperature annealing, which causes a phase transition to the hematite phase above 400 °C. In this work, the phase transition temperature from the magnetite to the hematite phase has been increased by 200 °C, the highest enhancement reported in the literature. It was achieved by addition of an appropriate amount of B2O3. Our experiments indicate that addition of 5.0 wt% B2O3 stabilizes the single-phase magnetite and preserves it up to 600 °C.

  2. Significant Promotion Effect of Mo Additive on a Novel Ce-Zr Mixed Oxide Catalyst for the Selective Catalytic Reduction of NO(x) with NH3.

    PubMed

    Ding, Shipeng; Liu, Fudong; Shi, Xiaoyan; Liu, Kuo; Lian, Zhihua; Xie, Lijuan; He, Hong

    2015-05-13

    A novel Mo-promoted Ce-Zr mixed oxide catalyst prepared by a homogeneous precipitation method was used for the selective catalytic reduction (SCR) of NO(x) with NH3. The optimal catalyst showed high NH3-SCR activity, SO2/H2O durability, and thermal stability under test conditions. The addition of Mo inhibited growth of the CeO2 particle size, improved the redox ability, and increased the amount of surface acidity, especially the Lewis acidity, all of which were favorable for the excellent NH3-SCR performance. It is believed that the catalyst is promising for the removal of NO(x) from diesel engine exhaust. PMID:25894854

  3. From Bayes through Marginal Utility to Effect Sizes: A Guide to Understanding the Clinical and Statistical Significance of the Results of Autism Research Findings

    ERIC Educational Resources Information Center

    Cicchetti, Domenic V.; Koenig, Kathy; Klin, Ami; Volkmar, Fred R.; Paul, Rhea; Sparrow, Sara

    2011-01-01

    The objectives of this report are: (a) to trace the theoretical roots of the concept of clinical significance, which derives from Bayesian thinking, Marginal Utility/Diminishing Returns in economics, and the "just noticeable difference" in psychophysics. These concepts are then translated into: Effect Size (ES), strength of agreement, clinical…

  4. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 ± 1, implying that approximately 7 ± 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
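
    The quoted probabilities follow directly from the Poisson model: with λ the expected number of qualifying eruptions per decade, P(N ≥ 1) = 1 - e^(-λ). A quick check; the rates below are back-derived from the abstract's stated probabilities, so they are approximations rather than values given in the paper:

```python
from math import exp

def prob_at_least_one(rate_per_decade):
    """P(N >= 1) for N ~ Poisson(rate_per_decade)."""
    return 1.0 - exp(-rate_per_decade)

p_vei4 = prob_at_least_one(6.5)    # lambda ~ 6.5/decade -> greater than 99 percent
p_vei5 = prob_at_least_one(0.675)  # -> roughly 49 percent
p_vei6 = prob_at_least_one(0.198)  # -> roughly 18 percent
```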

  5. Cosmic statistics of statistics

    NASA Astrophysics Data System (ADS)

    Szapudi, István; Colombi, Stéphane; Bernardeau, Francis

    1999-12-01

    The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. 
The principal results concerning the cumulants ξ, Q3 and Q4 are that…

  6. Suite versus composite statistics

    USGS Publications Warehouse

    Balsillie, J.H.; Tanner, W.F.

    1999-01-01

    Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significantly different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.

  7. The addition of rituximab to fludarabine and cyclophosphamide chemotherapy results in a significant improvement in overall survival in patients with newly diagnosed mantle cell lymphoma: results of a randomized UK National Cancer Research Institute trial

    PubMed Central

    Rule, Simon; Smith, Paul; Johnson, Peter W.M.; Bolam, Simon; Follows, George; Gambell, Joanne; Hillmen, Peter; Jack, Andrew; Johnson, Stephen; Kirkwood, Amy A; Kruger, Anton; Pocock, Christopher; Seymour, John F.; Toncheva, Milena; Walewski, Jan; Linch, David

    2016-01-01

    Mantle cell lymphoma is an incurable and generally aggressive lymphoma that is more common in elderly patients. Whilst a number of different chemotherapeutic regimens are active in this disease, there is no established gold standard therapy. Rituximab has been used widely to good effect in B-cell malignancies but there is no evidence that it improves outcomes when added to chemotherapy in this disease. We performed a randomized, open-label, multicenter study looking at the addition of rituximab to the standard chemotherapy regimen of fludarabine and cyclophosphamide in patients with newly diagnosed mantle cell lymphoma. A total of 370 patients were randomized. With a median follow up of six years, rituximab improved the median progression-free survival from 14.9 to 29.8 months (P<0.001) and overall survival from 37.0 to 44.5 months (P=0.005). This equates to absolute differences of 9.0% and 22.1% for overall and progression-free survival, respectively, at two years. Overall response rates were similar, but complete response rates were significantly higher in the rituximab arm: 52.7% vs. 39.9% (P=0.014). There was no clinically significant additional toxicity observed with the addition of rituximab. Overall, approximately 18% of patients died of non-lymphomatous causes, most commonly infections. The addition of rituximab to fludarabine and cyclophosphamide chemotherapy significantly improves outcomes in patients with mantle cell lymphoma. However, these regimens have significant late toxicity and should be used with caution. This trial has been registered (ISRCTN81133184 and clinicaltrials.gov:00641095) and is supported by the UK National Cancer Research Network. PMID:26611473

  9. In search of a statistical probability model for petroleum-resource assessment : a critique of the probabilistic significance of certain concepts and methods used in petroleum-resource assessment : to that end, a probabilistic model is sketched

    USGS Publications Warehouse

    Grossling, Bernardo F.

    1975-01-01

    Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then…

  10. Significant improvement of the recombinant Borrelia-specific immunoglobulin G immunoblot test by addition of VlsE and a DbpA homologue derived from Borrelia garinii for diagnosis of early neuroborreliosis.

    PubMed

    Schulte-Spechtel, Ulrike; Lehnert, Gisela; Liegl, Gaby; Fingerle, Volker; Heimerl, Christiane; Johnson, Barbara J B; Wilske, Bettina

    2003-03-01

    We investigated whether the recombinant Borrelia Western blot test previously described (B. Wilske, C. Habermann, V. Fingerle, B. Hillenbrand, S. Jauris-Heipke, G. Lehnert, I. Pradel, D. Rössler, and U. Schulte-Spechtel, Med. Microbiol. Immunol. 188:139-144, 1999) can be improved by the addition of VlsE and additional DbpA and OspC homologues. By using a panel of sera from 36 neuroborreliosis patients and 67 control patients, the diagnostic sensitivity of the recombinant immunoblot test was significantly increased (86.1% versus 52.7%) without loss of specificity and was higher (86.1% versus 63.8%) than that of the conventional whole-cell lysate immunoblot test (U. Hauser, G. Lehnert, R. Lobentanzer, and B. Wilske, J. Clin. Microbiol. 35:1433-1444, 1997). Improvement was mainly due to the presence of VlsE and DbpA. PMID:12624072

  11. Descriptive statistics.

    PubMed

    Shi, Runhua; McLarty, Jerry W

    2009-10-01

    In this article, we introduced basic concepts of statistics, type of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events and exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications. PMID:19891281
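The basic descriptive measures the article introduces can be sketched with Python's standard library; the blood-pressure readings below are invented purely for illustration:

```python
import statistics

# Invented sample: systolic blood pressure readings in mmHg (illustrative only)
readings = [118, 122, 130, 125, 140, 135, 128, 121]

mean = statistics.mean(readings)      # arithmetic mean (central tendency)
median = statistics.median(readings)  # middle value, robust to outliers
stdev = statistics.stdev(readings)    # sample standard deviation (n - 1 denominator)

print(mean, median, stdev)
```

For skewed distributions the mean and median diverge, which is one reason both are usually reported together.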

  12. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.

  13. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  14. Significant lexical relationships

    SciTech Connect

    Pedersen, T.; Kayaalp, M.; Bruce, R.

    1996-12-31

    Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
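For a 2×2 contingency table, the canonical exact conditional test is Fisher's exact test, which can be sketched with the standard library alone; the co-occurrence counts below are invented for illustration and are not from the paper:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities no larger than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def prob(x):
        return comb(row1, x) * comb(n - row1, col1 - x) / denom
    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))  # support of the hypergeometric distribution
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# e.g. counts of a word pair occurring together vs. apart in a tiny corpus
p = fisher_exact_p(8, 2, 1, 9)
```

Because the test conditions on the table margins and enumerates the exact distribution, it stays valid for the rare-event counts that make chi-square approximations unreliable.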

  15. Significant Treasures.

    ERIC Educational Resources Information Center

    Andrews, Ian A.

    1999-01-01

    Provides a crossword puzzle with an answer key corresponding to the book entitled "Significant Treasures/Tresors Parlants" that is filled with color and black-and-white prints of paintings and artifacts from 131 museums and art galleries as a sampling of the 2,200 such Canadian institutions. (CMK)

  16. Titanic: A Statistical Exploration.

    ERIC Educational Resources Information Center

    Takis, Sandra L.

    1999-01-01

    Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
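A chi-square test of independence of the kind the activity describes can be computed by hand; the 2×2 survival table below uses invented counts (not the actual Titanic figures) purely to show the mechanics:

```python
# 2x2 table: rows = sex, cols = survived / died (illustrative counts only)
table = [[300, 100],   # "female"
         [150, 450]]   # "male"

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

# chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)

# a 2x2 table has 1 degree of freedom; values above 3.841 are significant at alpha = 0.05
significant = chi2 > 3.841
```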

  17. Information geometry of Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Matsuzoe, Hiroshi

    2015-01-01

    A survey of the geometry of Bayesian statistics is given. From the viewpoint of differential geometry, a prior distribution in Bayesian statistics is regarded as a volume element on a statistical model. In this paper, properties of Bayesian estimators are studied by applying equiaffine structures of statistical manifolds. In addition, the geometry of anomalous statistics is also studied. Deformed expectations and deformed independences are important in anomalous statistics. After summarizing the geometry of such deformed structures, a generalization of the maximum likelihood method is given. A suitable weight on a parameter space is important in Bayesian statistics, whereas a suitable weight on a sample space is important in anomalous statistics.

  18. Food additives

    MedlinePlus

    Food additives are substances that become part of a food product when they are added during the processing or making of that food. "Direct" food additives are often added during processing to: Add nutrients ...

  19. Statistical databases

    SciTech Connect

    Kogalovskii, M.R.

    1995-03-01

    This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDB) are referred to as databases that consist of data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMS) satisfy the SDB requirements. Some current research directions in SDB systems are considered.

  20. Morbidity statistics

    PubMed Central

    Smith, Alwyn

    1969-01-01

    This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722

  1. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  2. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  3. On Statistical Testing.

    ERIC Educational Resources Information Center

    Huberty, Carl J.

    An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…

  4. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  5. SEER Statistics

    Cancer.gov

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  6. Cancer Statistics

    MedlinePlus

    ... cancer statistics across the world. U.S. Cancer Mortality Trends The best indicator of progress against cancer is ... the number of cancer survivors has increased. These trends show that progress is being made against the ...

  7. Statistical Physics

    NASA Astrophysics Data System (ADS)

    Hermann, Claudine

    Statistical Physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection otherwise impossible to establish because of the enormous magnitude of Avogadro's number. Numerous systems of today's key technologies, such as semiconductors or lasers, are macroscopic quantum objects; only statistical physics allows for understanding their fundamentals. Therefore, this graduate text also focuses on particular applications such as the properties of electrons in solids, and radiation thermodynamics and the greenhouse effect.

  8. Food additives.

    PubMed

    Berglund, F

    1978-01-01

    The use of additives to food fulfils many purposes, as shown by the index issued by the Codex Committee on Food Additives: Acids, bases and salts; Preservatives; Antioxidants and antioxidant synergists; Anticaking agents; Colours; Emulsifiers; Thickening agents; Flour-treatment agents; Extraction solvents; Carrier solvents; Flavours (synthetic); Flavour enhancers; Non-nutritive sweeteners; Processing aids; Enzyme preparations. Many additives occur naturally in foods, but this does not exclude toxicity at higher levels. Some food additives are nutrients, or even essential nutrients, e.g. NaCl. Examples are known of food additives causing toxicity in man even when used according to regulations, e.g. cobalt in beer. In other instances, poisoning has been due to carry-over, e.g. by nitrate in cheese whey when used for artificial feed for infants. Poisonings also occur as the result of the permitted substance being added at too high levels, by accident or carelessness, e.g. nitrite in fish. Finally, there are examples of hypersensitivity to food additives, e.g. to tartrazine and other food colours. The toxicological evaluation, based on animal feeding studies, may be complicated by impurities, e.g. orthotoluene-sulfonamide in saccharin; by transformation or disappearance of the additive in food processing or storage, e.g. bisulfite in raisins; by reaction products with food constituents, e.g. formation of ethylurethane from diethyl pyrocarbonate; by metabolic transformation products, e.g. formation in the gut of cyclohexylamine from cyclamate. Metabolic end products may differ in experimental animals and in man: guanylic acid and inosinic acid are metabolized to allantoin in the rat but to uric acid in man. The magnitude of the safety margin in man of the Acceptable Daily Intake (ADI) is not identical to the "safety factor" used when calculating the ADI. The symptoms of Chinese Restaurant Syndrome, although not hazardous, furthermore illustrate that the whole ADI…

  9. A Significant Statistical Advancement on the Predictive Values of ERCC1 Polymorphisms for Clinical Outcomes of Platinum-Based Chemotherapy in Non-Small Cell Lung Cancer: An Updated Meta-Analysis

    PubMed Central

    Han, Yali; Liu, Jie; Sun, Meili; Zhang, Zongpu; Liu, Chuanyong; Sun, Yuping

    2016-01-01

    Background. There is no definitive conclusion so far on the predictive values of ERCC1 polymorphisms for clinical outcomes of platinum-based chemotherapy in non-small cell lung cancer (NSCLC). We updated this meta-analysis with an expectation to obtain some statistical advancement on this issue. Methods. Relevant studies were identified by searching MEDLINE, EMBASE databases from inception to April 2015. Primary outcomes included objective response rate (ORR), progression-free survival (PFS), and overall survival (OS). All analyses were performed using the Review Manager version 5.3 and the Stata version 12.0. Results. A total of 33 studies including 5373 patients were identified. ERCC1 C118T and C8092A could predict both ORR and OS for platinum-based chemotherapy in Asian NSCLC patients (CT + TT versus CC, ORR: OR = 0.80, 95% CI = 0.67–0.94; OS: HR = 1.24, 95% CI = 1.01–1.53) (CA + AA versus CC, ORR: OR = 0.76, 95% CI = 0.60–0.96; OS: HR = 1.37, 95% CI = 1.06–1.75). Conclusions. Current evidence strongly indicated the prospect of ERCC1 C118T and C8092A as predictive biomarkers for platinum-based chemotherapy in Asian NSCLC patients. However, the results should be interpreted with caution and large prospective studies are still required to further investigate these findings. PMID:27057082
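Purely to illustrate the fixed-effect pooling mechanics behind a meta-analysis of odds ratios, log-ORs can be combined with inverse-variance weights, recovering each study's standard error from its 95% confidence interval. The three studies below are invented and are not taken from the review:

```python
from math import log, exp

# (odds ratio, 95% CI lower bound, upper bound) per study -- invented numbers
studies = [(0.72, 0.58, 0.90), (0.85, 0.66, 1.10), (0.78, 0.64, 0.95)]

weighted_sum = total_weight = 0.0
for or_, lo, hi in studies:
    se = (log(hi) - log(lo)) / (2 * 1.96)  # SE of the log-OR implied by the 95% CI
    w = 1 / se ** 2                        # inverse-variance (fixed-effect) weight
    weighted_sum += w * log(or_)
    total_weight += w

pooled_or = exp(weighted_sum / total_weight)  # back-transform to the OR scale
```

Precise studies (narrow CIs) get large weights; a random-effects model would additionally widen each variance by a between-study component.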

  10. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  11. Statistics Revelations

    ERIC Educational Resources Information Center

    Chicot, Katie; Holmes, Hilary

    2012-01-01

    The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either over simplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…

  12. Statistical Fun

    ERIC Educational Resources Information Center

    Catley, Alan

    2007-01-01

    Following the announcement last year that there will be no more math coursework assessment at General Certificate of Secondary Education (GCSE), teachers will in the future be able to devote more time to preparing learners for formal examinations. One of the key things that the author has learned when teaching statistics is that it makes for far…

  13. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $ 0.023 per pound of aluminum produced is projected for a 200 kA pot.

  14. Phosphazene additives

    SciTech Connect

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  15. Statistics in medicine.

    PubMed

    Januszyk, Michael; Gurtner, Geoffrey C

    2011-01-01

    The scope of biomedical research has expanded rapidly during the past several decades, and statistical analysis has become increasingly necessary to understand the meaning of large and diverse quantities of raw data. As such, a familiarity with this lexicon is essential for critical appraisal of medical literature. This article attempts to provide a practical overview of medical statistics, with an emphasis on the selection, application, and interpretation of specific tests. This includes a brief review of statistical theory and its nomenclature, particularly with regard to the classification of variables. A discussion of descriptive methods for data presentation is then provided, followed by an overview of statistical inference and significance analysis, and detailed treatment of specific statistical tests and guidelines for their interpretation. PMID:21200241

  16. "Clinical" Significance: "Clinical" Significance and "Practical" Significance are NOT the Same Things

    ERIC Educational Resources Information Center

    Peterson, Lisa S.

    2008-01-01

    Clinical significance is an important concept in research, particularly in education and the social sciences. The present article first compares clinical significance to other measures of "significance" in statistics. The major methods used to determine clinical significance are explained and the strengths and weaknesses of clinical significance…

  17. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
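The test-efficacy measures listed in this review follow directly from a 2×2 confusion matrix comparing the test against a gold standard; a minimal sketch with invented counts:

```python
# Diagnostic test vs. gold standard (invented counts for illustration)
tp, fn = 90, 10    # diseased patients: test positive / test negative
fp, tn = 30, 170   # healthy patients:  test positive / test negative

sensitivity = tp / (tp + fn)                # true-positive rate among the diseased
specificity = tn / (tn + fp)                # true-negative rate among the healthy
accuracy = (tp + tn) / (tp + fn + fp + tn)  # overall fraction classified correctly
lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio
```

The likelihood ratios are the quantities that update a pre-test probability into a post-test probability via Bayes' theorem, which is why the review treats conditional probability before test efficacy.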

  18. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  19. "Old" tail lobes provide significant additional substorm power

    NASA Astrophysics Data System (ADS)

    Mishin, V.; Mishin, V. V.; Karavaev, Y.

    2012-12-01

    In each polar cap (PC) we distinguish the "old PC", observed during quiet time before the event under consideration, and the "new PC" that emerges surrounding the old one and expanding the total PC area. The old and new PCs correspond in the magnetosphere to the old and new tail lobes, respectively. The new lobe's variable magnetic flux Ψ1 is usually assumed to be active, i.e. it provides transport of the electromagnetic energy flux (Poynting flux) ɛ' from the solar wind into the magnetosphere. The old lobe's magnetic flux Ψ2 is usually supposed to be passive, i.e. it remains constant during the disturbance and does not participate in the transport process, which would mean absolute screening of the old PC electric field from the convection electric field created by magnetopause reconnection. In fact, screening is observed, but it is far from absolute. We suggest a model of screening and determine its quantitative characteristics for the selected superstorm. The screening coefficient is β = Ψ2/Ψ02, where Ψ02 = const is the open magnetic flux through the old PC measured prior to the substorm, and Ψ2 is the variable magnetic flux during the substorm. We consider three different regimes of disturbance. In each, the coefficient β decreased during the loading phase and increased during the unloading phase, but the rates and amplitudes of the variations exhibited a strong dependence on the regime. We interpreted the decrease in β as a result of involving the old PC magnetic flux Ψ2, earlier considered constant, in the transport of the Poynting flux from the solar wind into the magnetosphere. A weakening of the transport process during the subsequent unloading phase produces the increase in β. Estimates showed that the coefficient β and the computed Poynting flux varied many-fold within each regime. In general, unlike the existing substorm conception, the new scenario describes a previously unknown tail lobe activation process during the substorm growth phase that effectively increases the accumulated tail energy available for the expansion and recovery phases.

  20. [Statistical materials].

    PubMed

    1986-01-01

    Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831

  1. Intervention for Maltreating Fathers: Statistically and Clinically Significant Change

    ERIC Educational Resources Information Center

    Scott, Katreena L.; Lishak, Vicky

    2012-01-01

    Objective: Fathers are seldom the focus of efforts to address child maltreatment and little is currently known about the effectiveness of intervention for this population. To address this gap, we examined the efficacy of a community-based group treatment program for fathers who had abused or neglected their children or exposed their children to…

  2. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    ERIC Educational Resources Information Center

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  3. Lubricant and additive effects on spur gear fatigue life

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.; Scibbe, H. W.

    1985-01-01

    Spur gear endurance tests were conducted with six lubricants using a single lot of consumable-electrode vacuum melted (CVM) AISI 9310 spur gears. The sixth lubricant was divided into four batches, each of which had a different additive content. Lubricants tested with a phosphorus-type load-carrying additive showed a statistically significant improvement in life over lubricants without this type of additive. The presence of sulfur-type antiwear additives in the lubricant did not appear to affect the surface fatigue life of the gears. No statistical difference in life was produced with those lubricants of different base stocks but with similar viscosity, pressure-viscosity coefficients and antiwear additives. Gears tested with 0.1 wt % sulfur and 0.1 wt % phosphorus EP additives in the lubricant had reactive films that were 200 to 400 (0.8 to 1.6 microns) thick.

  4. Using scientifically and statistically sufficient statistics in comparing image segmentations.

    PubMed

    Chi, Yueh-Yun; Muller, Keith E

    2010-01-01

    Automatic computer segmentation in three dimensions creates opportunity to reduce the cost of three-dimensional treatment planning of radiotherapy for cancer treatment. Comparisons between human and computer accuracy in segmenting kidneys in CT scans generate distance values far larger in number than the number of CT scans. Such high dimension, low sample size (HDLSS) data present a grand challenge to statisticians: how do we find good estimates and make credible inference? We recommend discovering and using scientifically and statistically sufficient statistics as an additional strategy for overcoming the curse of dimensionality. First, we reduced the three-dimensional array of distances for each image comparison to a histogram to be modeled individually. Second, we used non-parametric kernel density estimation to explore distributional patterns and assess multi-modality. Third, a systematic exploratory search for parametric distributions and truncated variations led to choosing a Gaussian form as approximating the distribution of a cube root transformation of distance. Fourth, representing each histogram by an individually estimated distribution eliminated the HDLSS problem by reducing on average 26,000 distances per histogram to just 2 parameter estimates. In the fifth and final step we used classical statistical methods to demonstrate that the two human observers disagreed significantly less with each other than with the computer segmentation. Nevertheless, the size of all disagreements was clinically unimportant relative to the size of a kidney. The hierarchal modeling approach to object-oriented data created response variables deemed sufficient by both the scientists and statisticians. We believe the same strategy provides a useful addition to the imaging toolkit and will succeed with many other high throughput technologies in genetics, metabolomics and chemical analysis. PMID:24967000
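
The dimension-reduction step described above can be sketched in a few lines. The data, distribution, and parameters below are synthetic stand-ins (the paper's actual distances came from CT kidney segmentations); the point is only the mechanics: cube-root-transform a large set of distances, then summarize the whole histogram by two Gaussian parameter estimates.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for one image comparison: ~26,000 surface distances
# (mm) between a human and a computer segmentation of the same kidney.
# The distribution and its parameters are made up for illustration.
distances = [abs(random.gauss(1.0, 0.6)) for _ in range(26000)]

# Cube-root transform: brings the skewed distance distribution close to
# Gaussian, so the whole histogram collapses to two parameter estimates.
transformed = [d ** (1.0 / 3.0) for d in distances]
n = len(transformed)
mu = sum(transformed) / n
sd = math.sqrt(sum((t - mu) ** 2 for t in transformed) / (n - 1))

print(f"{n} distances -> 2 sufficient statistics: mu={mu:.3f}, sd={sd:.3f}")
```

On each real histogram, those two estimates then become the response variables for the downstream classical comparison of observers.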

  5. Candidate Assembly Statistical Evaluation

    1998-07-15

    The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted, it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.

  6. Obesity Statistics.

    PubMed

    Smith, Kristy Breuhl; Smith, Michael Seth

    2016-03-01

    Obesity is a chronic disease that is strongly associated with an increase in mortality and morbidity, including certain types of cancer, cardiovascular disease, disability, diabetes mellitus, hypertension, osteoarthritis, and stroke. In adults, overweight is defined as a body mass index (BMI) of 25 kg/m² to 29 kg/m² and obesity as a BMI of greater than 30 kg/m². If current trends continue, it is estimated that, by the year 2030, 38% of the world's adult population will be overweight and another 20% obese. Significant global health strategies are needed to reduce the morbidity and mortality associated with the obesity epidemic. PMID:26896205
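
The BMI cutoffs quoted above translate directly into code. This is a minimal sketch using the abstract's adult categories; the function names are illustrative, not from any clinical library, and the boundary value 30 is treated as obese here, following common convention.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def adult_category(b):
    # Cutoffs from the abstract: overweight 25-29 kg/m^2, obese above 30.
    if b >= 30:
        return "obese"
    if b >= 25:
        return "overweight"
    return "not overweight"

print(round(bmi(85, 1.75), 1), adult_category(bmi(85, 1.75)))
```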

  7. Perception in statistical graphics

    NASA Astrophysics Data System (ADS)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  8. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTICAL ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  9. A SIGNIFICANCE TEST FOR THE LASSO

    PubMed Central

    Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert

    2014-01-01

    In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ₁² distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ₁² under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties—adaptivity and
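
The claim that adaptive selection inflates the drop in RSS is easy to verify by simulation. The sketch below uses synthetic data and is not the covariance test itself: it compares the RSS drop from adding a pre-specified predictor to the null model, which follows a chi-squared distribution with 1 degree of freedom, against the drop from adding the best of p candidate predictors, which is stochastically much larger under the global null.

```python
import random

random.seed(1)
n, p, sims = 50, 10, 400

fixed_drops, adaptive_drops = [], []
for _ in range(sims):
    # Global null: the response is pure noise, unrelated to any predictor.
    y = [random.gauss(0, 1) for _ in range(n)]
    X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
    # Drop in RSS from adding predictor j to the null model:
    # (x_j . y)^2 / (x_j . x_j); chi^2 with 1 df when j is fixed in advance.
    drops = []
    for x in X:
        xy = sum(a * b for a, b in zip(x, y))
        xx = sum(a * a for a in x)
        drops.append(xy * xy / xx)
    fixed_drops.append(drops[0])        # pre-specified predictor
    adaptive_drops.append(max(drops))   # greedily chosen predictor

mean_fixed = sum(fixed_drops) / sims       # ~ 1, the chi^2_1 mean
mean_adaptive = sum(adaptive_drops) / sims  # stochastically much larger
print(mean_fixed, mean_adaptive)
```

The adaptive mean is several times the fixed-predictor mean, which is exactly why the naive chi-squared comparison is invalid after greedy selection.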

  10. Cosmetic Plastic Surgery Statistics

    MedlinePlus

    2014 Cosmetic Plastic Surgery Statistics: Cosmetic Procedure Trends. 2014 Plastic Surgery Statistics Report. Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...

  11. Statistical Studies of Supernova Environments

    NASA Astrophysics Data System (ADS)

    Anderson, Joseph P.; James, Phil A.; Habergham, Stacey M.; Galbany, Lluís; Kuncarayakti, Hanindyo

    2015-05-01

    Mapping the diversity of SNe to progenitor properties is key to our understanding of stellar evolution and explosive stellar death. Investigations of the immediate environments of SNe allow statistical constraints to be made on progenitor properties such as mass and metallicity. Here, we review the progress that has been made in this field. Pixel statistics using tracers of e.g. star formation within galaxies show intriguing differences in the explosion sites of, in particular, types II and Ibc (SNe II and SNe Ibc, respectively), suggesting statistical differences in population ages. Of particular interest is that SNe Ic are significantly more associated with host galaxy Hα emission than SNe Ib, implying shorter lifetimes for the former. In addition, such studies have shown (unexpectedly) that the interacting SNe IIn do not explode in regions containing the most massive stars, which suggests that at least a significant fraction of their progenitors arise from the lower end of the core-collapse SN mass range. Host H ii region spectroscopy has been obtained for a significant number of core-collapse events, however definitive conclusions on differences between distinct SN types have to date been elusive. Single stellar evolution models predict that the relative fraction of SNe Ibc to SNe II should increase with increasing metallicity, due to the dependence of mass-loss rates on progenitor metallicity. We present a meta-analysis of all current host H ii region oxygen abundances for CC SNe. It is concluded that the SN II to SN Ibc ratio shows little variation with oxygen abundance, with only a suggestion that the ratio increases in the lowest bin. Radial distributions of different SNe are discussed, where a central excess of SNe Ibc has been observed within disturbed galaxy systems, which is difficult to ascribe to metallicity or selection effects. Environment studies are also being undertaken for SNe Ia, where constraints can be made on the shortest delay times of

  12. Statistics Anxiety among Postgraduate Students

    ERIC Educational Resources Information Center

    Koh, Denise; Zawi, Mohd Khairi

    2014-01-01

    Most postgraduate programmes, that have research components, require students to take at least one course of research statistics. Not all postgraduate programmes are science based, there are a significant number of postgraduate students who are from the social sciences that will be taking statistics courses, as they try to complete their…

  13. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell

    2014-01-01

    Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastics, ceramics, and metallics. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using metallic additive manufacturing compared with traditional techniques. Development of metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques used to inspect parts made by these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist, such as Coordinate Measurement Machines (CMM), laser scanners, structured light scanning systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours or must use a contact probe to inspect limited internal dimensions. This presentation documents the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques.

  14. Impaired Statistical Learning in Developmental Dyslexia

    PubMed Central

    Thiessen, Erik D.; Holt, Lori L.

    2015-01-01

    Purpose Developmental dyslexia (DD) is commonly thought to arise from phonological impairments. However, an emerging perspective is that a more general procedural learning deficit, not specific to phonological processing, may underlie DD. The current study examined if individuals with DD are capable of extracting statistical regularities across sequences of passively experienced speech and nonspeech sounds. Such statistical learning is believed to be domain-general, to draw upon procedural learning systems, and to relate to language outcomes. Method DD and control groups were familiarized with a continuous stream of syllables or sine-wave tones, the ordering of which was defined by high or low transitional probabilities across adjacent stimulus pairs. Participants subsequently judged two 3-stimulus test items with either high or low statistical coherence as being the most similar to the sounds heard during familiarization. Results As with control participants, the DD group was sensitive to the transitional probability structure of the familiarization materials as evidenced by above-chance performance. However, the performance of participants with DD was significantly poorer than controls across linguistic and nonlinguistic stimuli. In addition, reading-related measures were significantly correlated with statistical learning performance of both speech and nonspeech material. Conclusion Results are discussed in light of procedural learning impairments among participants with DD. PMID:25860795
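
The transitional-probability structure described in the Method can be illustrated with a toy stream. The three-syllable "words" below are made up for illustration; the computation shows why within-word transitions are high-probability and between-word transitions low-probability, which is the contrast the test items exploit.

```python
import random
from collections import Counter

random.seed(2)

# Toy familiarization stream in the style of statistical-learning studies:
# three invented "words" (syllable triplets) concatenated in random order,
# so within-word transitional probabilities (TPs) are high and
# between-word TPs are low.
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
stream = [syll for _ in range(200) for syll in random.choice(words)]

# TP(a -> b) = count of the pair (a, b) / count of a in non-final position
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(a, b):
    return pairs[(a, b)] / firsts[a]

print("within-word  TP(tu->pi):", round(tp("tu", "pi"), 2))
print("between-word TP(ro->go):", round(tp("ro", "go"), 2))
```

Within a word the next syllable is fully determined (TP = 1.0), while across a word boundary the next syllable is one of several word onsets (TP near 1/3 here), so above-chance preference for high-TP triplets indicates sensitivity to exactly this statistic.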

  15. Thermodynamics of cellular statistical inference

    NASA Astrophysics Data System (ADS)

    Lang, Alex; Fisher, Charles; Mehta, Pankaj

    2014-03-01

    Successful organisms must be capable of accurately sensing the surrounding environment in order to locate nutrients and evade toxins or predators. However, single cell organisms face a multitude of limitations on their accuracy of sensing. Berg and Purcell first examined the canonical example of statistical limitations to cellular learning of a diffusing chemical and established a fundamental limit to statistical accuracy. Recent work has shown that the Berg and Purcell learning limit can be exceeded using Maximum Likelihood Estimation. Here, we recast the cellular sensing problem as a statistical inference problem and discuss the relationship between the efficiency of an estimator and its thermodynamic properties. We explicitly model a single non-equilibrium receptor and examine the constraints on statistical inference imposed by noisy biochemical networks. Our work shows that cells must balance sample number, specificity, and energy consumption when performing statistical inference. These tradeoffs place significant constraints on the practical implementation of statistical estimators in a cell.
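
For reference, the Berg-Purcell limit mentioned above bounds the fractional accuracy with which a receptor of linear size a, integrating for time T, can sense a background concentration c̄ of a ligand with diffusion constant D. Written here up to an O(1) prefactor (the exact constant depends on the sensing geometry):

```latex
\left(\frac{\delta c}{\bar c}\right)^2 \sim \frac{1}{D\, a\, \bar c\, T}
```

Maximum likelihood estimation can improve on this bound by a constant factor, which is the recent result the abstract builds on.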

  16. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  17. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  18. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  19. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  20. Statistics: are we related?

    PubMed

    Scott, M; Flaherty, D; Currall, J

    2013-03-01

    This short addition to our series on clinical statistics concerns relationships, and answering questions such as "are blood pressure and weight related?" In a later article, we will answer the more interesting question about how they might be related. This article follows on logically from the previous one dealing with categorical data, the major difference being here that we will consider two continuous variables, which naturally leads to the use of a Pearson correlation or occasionally to a Spearman rank correlation coefficient. PMID:23458641
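
As a concrete illustration of the question "are blood pressure and weight related?", here is a minimal Pearson correlation computed from scratch. The clinic data are hypothetical, invented only to show the computation.

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical clinic data: weight (kg) and systolic blood pressure (mmHg).
weight = [62, 70, 75, 81, 88, 95, 103]
bp = [118, 121, 125, 124, 132, 135, 139]
r = pearson(weight, bp)
print(round(r, 3))
```

A value of r near +1 or -1 indicates a strong linear relationship and a value near 0 indicates none; when the data are ranked rather than measured, the same formula applied to ranks gives the Spearman coefficient mentioned above.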

  1. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  2. Predict! Teaching Statistics Using Informal Statistical Inference

    ERIC Educational Resources Information Center

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  3. Statistics Poker: Reinforcing Basic Statistical Concepts

    ERIC Educational Resources Information Center

    Leech, Nancy L.

    2008-01-01

    Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…

  4. Understanding British addiction statistics.

    PubMed

    Johnson, B D

    1975-01-01

    The statistical data issued by the Home Office and Department of Health and Social Security are quite detailed and generally valid measures of hard core addiction in Great Britain (Judson, 1973). Since 1968, the main basis of these high quality British statistics is the routine reports filed by Drug Treatment Centres. The well-trained, experienced staff of these clinics make knowledgeable decisions about a client's addiction, efficiently regulate dosage, and otherwise exert some degree of control over addicts (Judson, 1973; Johnson, 1974). The co-operation of police, courts, prison physicians, and general practitioners is also valuable in collecting data on drug addiction and convictions. Information presented in the tables above indicates that a rising problem of heroin addiction between 1962 and 1967 was arrested by the introduction of the treatment clinics in 1968. Further, legally maintained heroin addiction has been reduced by almost one-third since 1968, since many heroin addicts have been transferred to injectable methadone. The decline in heroin prescribing and the relatively steady number of narcotics addicts has apparently occurred in the face of a continuing, and perhaps increasing, demand for heroin and other opiates. With few exceptions of a minor nature, analysis of the various tables suggests that the official statistics are internally consistent. There are apparently few "hidden" addicts, since few unknown addicts die of overdoses or are arrested by police (Lewis, 1973), although Blumberg (1974) indicates that some unknown users may exist. In addition, many opiate users not officially notified are known by clinic doctors as friends of addicts receiving prescriptions (Judson, 1973; Home Office, 1974). In brief, official British drug statistics seem to be generally valid and demonstrate that heroin and perhaps methadone addiction has been well contained by the treatment clinics. PMID:1039283

  5. Significant Tsunami Events

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.

    2014-12-01

    Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, amount of damage, maximum runup height, whether the event had a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and the 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/

  6. Common misconceptions about data analysis and statistics.

    PubMed

    Motulsky, Harvey J

    2015-02-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood. PMID:25692012
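
Mistake (1) is easy to demonstrate by simulation. The sketch below (synthetic data, made up for illustration) repeatedly "peeks" at accumulating null data and stops as soon as a z test is significant; the realized false-positive rate comes out well above the nominal 5%.

```python
import math
import random

random.seed(3)

ALPHA_Z = 1.96                 # two-sided 5% cutoff for a z test
looks = [10, 20, 30, 40, 50]   # peek at the data every 10 observations
sims = 2000

def z_stat(sample):
    # z statistic for mean = 0 with known sigma = 1
    return sum(sample) / math.sqrt(len(sample))

false_positives = 0
for _ in range(sims):
    data = []
    for n in looks:
        while len(data) < n:
            data.append(random.gauss(0, 1))  # the null is true
        if abs(z_stat(data)) > ALPHA_Z:      # stop as soon as "significant"
            false_positives += 1
            break

print("nominal alpha: 0.05, actual:", false_positives / sims)
```

With a single pre-planned look the rate would be close to 0.05; optional stopping across five looks roughly doubles or triples it, which is the essence of P-hacking.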

  7. Common misconceptions about data analysis and statistics.

    PubMed

    Motulsky, Harvey J

    2014-10-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, however, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1) P-hacking, which is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want; 2) overemphasis on P values rather than on the actual size of the observed effect; 3) overuse of statistical hypothesis testing, and being seduced by the word "significant"; and 4) over-reliance on standard errors, which are often misunderstood. PMID:25204545

  8. Common misconceptions about data analysis and statistics.

    PubMed

    Motulsky, Harvey J

    2014-11-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood. PMID:25213136

  9. Statistics of atmospheric correlations.

    PubMed

    Santhanam, M S; Patra, P K

    2001-07-01

    For a large class of quantum systems, the statistical properties of their spectrum show remarkable agreement with random matrix predictions. Recent advances show that the scope of random matrix theory is much wider. In this work, we show that the random matrix approach can be beneficially applied to a completely different classical domain, namely, to the empirical correlation matrices obtained from the analysis of the basic atmospheric parameters that characterize the state of the atmosphere. We show that the spectra of atmospheric correlation matrices satisfy the random matrix prescription. In particular, the eigenmodes of the atmospheric empirical correlation matrices that have physical significance are marked by deviations from the eigenvector distribution. PMID:11461326
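As a null baseline for such analyses, random matrix theory predicts that the eigenvalues of a correlation matrix built from p independent series of length n fall inside the Marchenko-Pastur support [(1 − √(p/n))², (1 + √(p/n))²]. A minimal numpy sketch (illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 500, 20                      # n observations of p "atmospheric" variables
X = rng.standard_normal((n, p))     # independent series: the noise-only null model
C = np.corrcoef(X, rowvar=False)    # p x p empirical correlation matrix
eig = np.linalg.eigvalsh(C)

q = p / n
lo, hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2   # Marchenko-Pastur support
print(eig.min(), eig.max(), (lo, hi))
# For real data, eigenvalues escaping [lo, hi] flag correlations
# that cannot be explained by noise alone.
```

Eigenmodes outside the noise band are the candidates for physical significance, in the spirit of the record above.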

  10. Neuroendocrine Tumor: Statistics

    MedlinePlus

    ... Neuroendocrine Tumor - Statistics Approved by the Cancer.Net Editorial Board , 04/ ... the body. It is important to remember that statistics on how many people survive this type of ...

  11. Antecedents of students' achievement in statistics

    NASA Astrophysics Data System (ADS)

    Awaludin, Izyan Syazana; Razak, Ruzanna Ab; Harris, Hezlin; Selamat, Zarehan

    2015-02-01

    The applications of statistics in most fields are vast. Many degree programmes at local universities require students to enroll in at least one statistics course. The standard of these courses varies across degree programmes because of students' diverse academic backgrounds, some of which are far removed from statistics. The high failure rate in statistics courses among non-science-stream students has been a concern every year. The purpose of this research is to investigate the antecedents of students' achievement in statistics. A total of 272 students participated in the survey. Multiple linear regression was applied to examine the relationship between the factors and achievement. We found that statistics anxiety was a significant predictor of students' achievement. We also found that students' age has a significant effect on achievement: older students are more likely to achieve lower scores in statistics. Students' level of study also has a significant impact on their achievement in statistics.
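The regression step can be sketched with ordinary least squares on hypothetical data; the generating model below (scores decreasing with anxiety and age) is an assumption for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 272                                   # sample size reported in the study
anxiety = rng.normal(3.0, 0.8, n)         # hypothetical anxiety scale scores
age = rng.normal(21, 2.5, n)              # hypothetical student ages
# Assumed generating process: anxiety and age both depress achievement.
score = 80 - 5 * anxiety - 0.8 * age + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), anxiety, age])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, score, rcond=None)  # OLS coefficient estimates
print(beta)  # fitted intercept and two negative slopes
```

In practice one would also report standard errors and p values for each slope (e.g. via statsmodels) to judge which predictors are significant.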

  12. Statistical Modelling of Compound Floods

    NASA Astrophysics Data System (ADS)

    Bevacqua, Emanuele; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin; Manning, Colin

    2016-04-01

    of interest. This is based on real data for river discharge (Y_RIVER) and sea level (Y_SEA) from the River Têt in the south of France. The impact of the compound flood is the water level in the area between the river and sea stations, which we define here as h = αY_RIVER + (1 − α)Y_SEA. Here we show the sensitivity of the system to changes in the two physical parameters. Through variations in α we can study the system in one or two dimensions, which allows for the assessment of the risk associated with either of the two variables alone or with a combination of them. Varying instead the second parameter, i.e. the dependence between the variables Y_RIVER and Y_SEA, we show how an apparently weak dependence can increase the risk of flooding significantly with respect to the independent case. The model can be applied to future climate by inserting predictors into the statistical model as additional conditioning variables. By conditioning the simulation of the statistical model on predictors obtained from climate model projections, both the change in risk and the characteristics of compound floods in the future can be analysed.
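A hedged Monte Carlo sketch of the dependence effect: for h = αy_river + (1 − α)y_sea with standardized, jointly Gaussian drivers (an assumption for illustration; the paper fits real data), even moderate correlation raises the probability of extreme water levels over the independent case.

```python
import numpy as np

rng = np.random.default_rng(7)
n, alpha = 200_000, 0.5

def exceedance_prob(rho, threshold=2.0):
    """P(h > threshold) when the two standardized drivers have correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    y_river, y_sea = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    h = alpha * y_river + (1 - alpha) * y_sea   # compound water level
    return np.mean(h > threshold)

p_indep = exceedance_prob(rho=0.0)   # independent drivers
p_dep = exceedance_prob(rho=0.6)     # moderately dependent drivers
print(p_indep, p_dep)  # the dependent case exceeds the threshold far more often
```

The dependence inflates the variance of h (by 2α(1 − α)ρ for these standardized drivers), so tail exceedances become several times more likely, which is the paper's central warning about treating the drivers as independent.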

  13. Statistics for People Who (Think They) Hate Statistics. Third Edition

    ERIC Educational Resources Information Center

    Salkind, Neil J.

    2007-01-01

    This text teaches an often intimidating and difficult subject in a way that is informative, personable, and clear. The author takes students through various statistical procedures, beginning with correlation and graphical representation of data and ending with inferential techniques and analysis of variance. In addition, the text covers SPSS, and…

  14. [Comment on] Statistical discrimination

    NASA Astrophysics Data System (ADS)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  15. Statistical Reference Datasets

    National Institute of Standards and Technology Data Gateway

    Statistical Reference Datasets (Web, free access)   The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.

  16. Introductory Statistics and Fish Management.

    ERIC Educational Resources Information Center

    Jardine, Dick

    2002-01-01

    Describes how fisheries research and management data (available on a website) have been incorporated into an Introductory Statistics course. In addition to the motivation gained from seeing the practical relevance of the course, some students have participated in the data collection and analysis for the New Hampshire Fish and Game Department. (MM)

  17. Statistics of lattice animals

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Nadler, Walder; Grassberger, Peter

    2005-07-01

    The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2⩽d⩽9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d⩾3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.

  18. Comparison of methods for computing streamflow statistics for Pennsylvania streams

    USGS Publications Warehouse

    Ehlke, Marla H.; Reed, Lloyd A.

    1999-01-01

    Methods for computing streamflow statistics intended for use on ungaged locations on Pennsylvania streams are presented and compared to frequency distributions of gaged streamflow data. The streamflow statistics used in the comparisons include the 7-day 10-year low flow, 50-year flood flow, and the 100-year flood flow; additional statistics are presented. Streamflow statistics for gaged locations on streams in Pennsylvania were computed using three methods for the comparisons: 1) Log-Pearson type III frequency distribution (Log-Pearson) of continuous-record streamflow data, 2) regional regression equations developed by the U.S. Geological Survey in 1982 (WRI 82-21), and 3) regional regression equations developed by the Pennsylvania State University in 1981 (PSU-IV). Log-Pearson distribution was considered the reference method for evaluation of the regional regression equations. Low-flow statistics were computed using the Log-Pearson distribution and WRI 82-21, whereas flood-flow statistics were computed using all three methods. The urban adjustment for PSU-IV was modified from the recommended computation to exclude Philadelphia and the surrounding areas (region 1) from the adjustment. Adjustments for storage area for PSU-IV were also slightly modified. A comparison of the 7-day 10-year low flow computed from Log-Pearson distribution and WRI 82-21 showed that the methods produced significantly different values for about 7 percent of the state. The same methods produced 50-year and 100-year flood flows that were significantly different for about 24 percent of the state. Flood-flow statistics computed using Log-Pearson distribution and PSU-IV were not significantly different in any regions of the state. These findings are based on a statistical comparison using the t-test on signed ranks and graphical methods.
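A sketch of the Log-Pearson type III computation for a gaged site, using the standard frequency-factor form (fit a Pearson III skew to the log10 annual peaks); the flow series below is synthetic, not Pennsylvania data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic annual peak flows (cfs), standing in for a continuous-record gage.
peaks = rng.lognormal(mean=7.0, sigma=0.5, size=60)

logq = np.log10(peaks)
g = stats.skew(logq, bias=False)            # station skew of the log flows
mu, sd = logq.mean(), logq.std(ddof=1)

# Pearson III frequency factors: standardized quantiles at the target
# annual exceedance probabilities (0.02 -> 50-year, 0.01 -> 100-year).
k50 = stats.pearson3.ppf(0.98, g)
k100 = stats.pearson3.ppf(0.99, g)
q50 = 10 ** (mu + k50 * sd)                 # 50-year flood flow
q100 = 10 ** (mu + k100 * sd)               # 100-year flood flow
print(q50, q100)
```

Regional regression equations such as WRI 82-21 or PSU-IV would instead predict these quantiles from basin characteristics, which is why the report compares them against this gage-based reference.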

  19. Addition of Rice Bran Arabinoxylan to Curcumin Therapy May Be of Benefit to Patients With Early-Stage B-Cell Lymphoid Malignancies (Monoclonal Gammopathy of Undetermined Significance, Smoldering Multiple Myeloma, or Stage 0/1 Chronic Lymphocytic Leukemia): A Preliminary Clinical Study.

    PubMed

    Golombick, Terry; Diamond, Terrence H; Manoharan, Arumugam; Ramakrishna, Rajeev

    2016-06-01

    Hypothesis Prior studies on patients with early B-cell lymphoid malignancies suggest that early intervention with curcumin may lead to delay in progressive disease and prolonged survival. These patients are characterized by increased susceptibility to infections. Rice bran arabinoxylan (Ribraxx) has been shown to have immunostimulatory, anti-inflammatory, and proapoptotic effects. We postulated that addition of Ribraxx to curcumin therapy may be of benefit. Study design Monoclonal gammopathy of undetermined significance (MGUS)/smoldering multiple myeloma (SMM) or stage 0/1 chronic lymphocytic leukemia (CLL) patients who had been on oral curcumin therapy for a period of 6 months or more were administered both curcumin (as Curcuforte) and Ribraxx. Methods Ten MGUS/SMM patients and 10 patients with stage 0/1 CLL were administered 6 g of curcumin and 2 g Ribraxx daily. Blood samples were collected at baseline and at 2-month intervals for a period of 6 months, and various markers were monitored. Markers monitored for MGUS/SMM patients included full blood count (FBC), paraprotein, free light chains/ratio, C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), B2 microglobulin, and immunological markers. Markers monitored for stage 0/1 CLL were FBC, CRP and ESR, and immunological markers. Results Of 10 MGUS/SMM patients, 5 (50%) were neutropenic at baseline, and the Curcuforte/Ribraxx combination therapy showed an increased neutrophil count, varying between 10% and 90%, among 8 of the 10 (80%) MGUS/SMM patients. An additional benefit of the combination therapy was the potent effect in reducing the raised ESR in 4 (44%) of the MGUS/SMM patients. Conclusion Addition of Ribraxx to curcumin therapy may be of benefit to patients with early-stage B-cell lymphoid malignancies. PMID:27154182

  20. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    PubMed

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement. PMID:25153964

  1. Statistical Seismology and Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Tiampo, K. F.; González, P. J.; Kazemian, J.

    2014-12-01

    While seismicity triggered or induced by natural resources production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several instances of energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering.
Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and

  2. Chemists, Access, Statistics

    NASA Astrophysics Data System (ADS)

    Holmes, Jon L.

    2000-06-01

    IP-number access. Current subscriptions can be upgraded to IP-number access at little additional cost. We are pleased to be able to offer to institutions and libraries this convenient mode of access to subscriber only resources at JCE Online. JCE Online Usage Statistics We are continually amazed by the activity at JCE Online. So far, the year 2000 has shown a marked increase. Given the phenomenal overall growth of the Internet, perhaps our surprise is not warranted. However, during the months of January and February 2000, over 38,000 visitors requested over 275,000 pages. This is a monthly increase of over 33% from the October-December 1999 levels. It is good to know that people are visiting, but we would very much like to know what you would most like to see at JCE Online. Please send your suggestions to JCEOnline@chem.wisc.edu. For those who are interested, JCE Online year-to-date statistics are available. Biographical Snapshots of Famous Chemists: Mission Statement Feature Editor: Barbara Burke Chemistry Department, California State Polytechnic University-Pomona, Pomona, CA 91768 phone: 909/869-3664 fax: 909/869-4616 email: baburke@csupomona.edu The primary goal of this JCE Internet column is to provide information about chemists who have made important contributions to chemistry. For each chemist, there is a short biographical "snapshot" that provides basic information about the person's chemical work, gender, ethnicity, and cultural background. Each snapshot includes links to related websites and to a biobibliographic database. The database provides references for the individual and can be searched through key words listed at the end of each snapshot. All students, not just science majors, need to understand science as it really is: an exciting, challenging, human, and creative way of learning about our natural world. Investigating the life experiences of chemists can provide a means for students to gain a more realistic view of chemistry. 
In addition students

  3. [Big data in official statistics].

    PubMed

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany. PMID:26077871

  4. Ranald Macdonald and statistical inference.

    PubMed

    Smith, Philip T

    2009-05-01

    Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for British Journal of Mathematical and Statistical Psychology and as a participant and organizer at the British Psychological Society's Mathematics, statistics and computing section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information was relevant in interpreting the results of significance tests led him to be a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing. PMID:19351454

  5. Photon statistics: math versus mysticism

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2013-10-01

    Critical analysis is given for mystical aspects of the current understanding of interaction between charged particles: wave-particle duality and nonlocal entanglement. A possible statistical effect concerning distribution functions for coincidences between the output channels of beam splitters is described. If this effect is observed in beam splitter data, then significant evidence for photon splitting, i.e., against the notion that light is ultimately packaged in finite chunks, has been found. An argument is given for the invalidity of the meaning attached to tests of Bell inequalities. Additionally, a totally classical paradigm for the calculation of the customary expression for the "quantum" coincidence coefficient pertaining to the singlet state is described. It fully accounts for the results of experimental tests of Bell inequalities taken nowadays to prove the reality of entanglement and non-locality in quantum phenomena of, inter alia, light.

  6. Uterine Cancer Statistics

    MedlinePlus

    ... Research AMIGAS Fighting Cervical Cancer Worldwide Stay Informed Statistics for Other Kinds of Cancer Breast Cervical Colorectal ( ... Skin Vaginal and Vulvar Cancer Home Uterine Cancer Statistics Language: English Español (Spanish) ...

  7. Mathematical and statistical analysis

    NASA Technical Reports Server (NTRS)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  8. Minnesota Health Statistics 1988.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Health, St. Paul.

    This document comprises the 1988 annual statistical report of the Minnesota Center for Health Statistics. After introductory technical notes on changes in format, sources of data, and geographic allocation of vital events, an overview is provided of vital health statistics in all areas. Thereafter, separate sections of the report provide tables…

  9. Ethics in Statistics

    ERIC Educational Resources Information Center

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  10. Avoiding Statistical Mistakes

    ERIC Educational Resources Information Center

    Strasser, Nora

    2007-01-01

    Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…

  11. Statistical quality management

    NASA Astrophysics Data System (ADS)

    Vanderlaan, Paul

    1992-10-01

    Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.

  12. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science. PMID:27231259

  13. Exploring Correlation Coefficients with Golf Statistics

    ERIC Educational Resources Information Center

    Quinn, Robert J

    2006-01-01

    This article explores the relationships between several pairs of statistics kept on professional golfers on the PGA tour. Specifically, two measures related to the player's ability to drive the ball are compared as are two measures related to the player's ability to putt. An additional analysis is made between one statistic related to putting and…

  14. Florida Library Directory with Statistics, 1998.

    ERIC Educational Resources Information Center

    Florida Dept. of State, Tallahassee. Div. of Library and Information Services.

    This 49th annual edition of the Florida Library Directory with Statistics includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries.…

  15. Statistical prediction of cyclostationary processes

    SciTech Connect

    Kim, K.Y.

    2000-03-15

    Considered in this study is a cyclostationary generalization of an EOF-based prediction method. While linear statistical prediction methods are typically optimal in the sense that prediction error variance is minimal within the assumption of stationarity, there is some room for improved performance since many physical processes are not stationary. For instance, El Nino is known to be strongly phase locked with the seasonal cycle, which suggests nonstationarity of the El Nino statistics. Many geophysical and climatological processes may be termed cyclostationary since their statistics show strong cyclicity instead of stationarity. Therefore, developed in this study is a cyclostationary prediction method. Test results demonstrate that performance of prediction methods can be improved significantly by accounting for the cyclostationarity of underlying processes. The improvement comes from an accurate rendition of covariance structure both in space and time.
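The distinction the study draws can be illustrated with a toy phase-locked process (an assumption for illustration, not the paper's EOF machinery): month-by-month variances reveal cyclicity that a single stationary variance hides.

```python
import numpy as np

rng = np.random.default_rng(11)
years, months = 200, 12
phase = np.arange(months)
# Noise amplitude is phase-locked to the seasonal cycle, in the spirit of
# El Nino's phase locking; peak variability in one part of the year.
amp = 1.0 + 0.8 * np.cos(2 * np.pi * phase / 12)
x = rng.standard_normal((years, months)) * amp

stationary_var = x.var()       # a single number; seasonal phase is ignored
cyclo_var = x.var(axis=0)      # variance estimated separately per calendar month
print(stationary_var, cyclo_var.round(2))
# cyclo_var varies strongly across the cycle, so a predictor calibrated
# on stationary_var misallocates uncertainty in every month.
```

A cyclostationary prediction method conditions its statistics on the seasonal phase, which is the source of the improved performance reported above.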

  16. Thermodynamic Limit in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Kuzemsky, A. L.

    2014-03-01

    The thermodynamic limit in statistical thermodynamics of many-particle systems is an important but often overlooked issue in the various applied studies of condensed matter physics. To settle this issue, we review tersely the past and present disposition of thermodynamic limiting procedure in the structure of the contemporary statistical mechanics and our current understanding of this problem. We pick out the ingenious approach by Bogoliubov, who developed a general formalism for establishing the limiting distribution functions in the form of formal series in powers of the density. In that study, he outlined the method of justification of the thermodynamic limit when he derived the generalized Boltzmann equations. To enrich and to weave our discussion, we take this opportunity to give a brief survey of the closely related problems, such as the equipartition of energy and the equivalence and nonequivalence of statistical ensembles. The validity of the equipartition of energy permits one to decide what are the boundaries of applicability of statistical mechanics. The major aim of this work is to provide a better qualitative understanding of the physical significance of the thermodynamic limit in modern statistical physics of the infinite and "small" many-particle systems.

  17. Quantum Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Schieve, William C.; Horwitz, Lawrence P.

    2009-04-01

    1. Foundations of quantum statistical mechanics; 2. Elementary examples; 3. Quantum statistical master equation; 4. Quantum kinetic equations; 5. Quantum irreversibility; 6. Entropy and dissipation: the microscopic theory; 7. Global equilibrium: thermostatics and the microcanonical ensemble; 8. Bose-Einstein ideal gas condensation; 9. Scaling, renormalization and the Ising model; 10. Relativistic covariant statistical mechanics of many particles; 11. Quantum optics and damping; 12. Entanglements; 13. Quantum measurement and irreversibility; 14. Quantum Langevin equation: quantum Brownian motion; 15. Linear response: fluctuation and dissipation theorems; 16. Time dependent quantum Green's functions; 17. Decay scattering; 18. Quantum statistical mechanics, extended; 19. Quantum transport with tunneling and reservoir ballistic transport; 20. Black hole thermodynamics; Appendix; Index.

  18. Statistical distribution sampling

    NASA Technical Reports Server (NTRS)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  19. Significant Scales in Community Structure

    PubMed Central

    Traag, V. A.; Krings, G.; Van Dooren, P.

    2013-01-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of “significance” of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine “good” resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role. PMID:24121597

  20. Illustrating the practice of statistics

    SciTech Connect

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  1. Statistical comparison of dissolution profiles.

    PubMed

    Wang, Yifan; Snee, Ronald D; Keyvan, Golshid; Muzzio, Fernando J

    2016-05-01

    Statistical methods to assess similarity of dissolution profiles are introduced. Sixteen groups of dissolution profiles from a full factorial design were used to demonstrate implementation details. Variables in the design include drug strength, tablet stability time, and dissolution testing condition. The 16 groups were considered similar when compared using the similarity factor f2 (f2 > 50). However, multivariate ANOVA (MANOVA) repeated measures suggested statistical differences. A modified principal component analysis (PCA) was used to describe the dissolution curves in terms of level and shape. The advantage of the modified PCA approach is that the calculated shape principal components will not be confounded by the level effect. An effect size test using omega-squared was also used for dissolution comparisons. Effects indicated by omega-squared are independent of sample size and are a necessary supplement to the p value reported from the MANOVA table. Methods to compare multiple groups show that product strength and dissolution testing condition had significant effects on both level and shape. For pairwise analysis, a post-hoc analysis using Tukey's method categorized three similar groups and was consistent with the level-shape analysis. All these methods provide valuable information that is missed when using the f2 method alone to compare average profiles. The improved statistical analysis approach introduced here enables one to better ascertain both statistical significance and clinical relevance, supporting more objective regulatory decisions. PMID:26294289
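
    The similarity factor the abstract refers to has a standard closed form, f2 = 50 log10(100 / sqrt(1 + MSD)), where MSD is the mean squared difference between reference and test profiles at matched time points. A minimal sketch (the profile values are invented):

```python
import numpy as np

def f2_similarity(ref, test):
    """Similarity factor f2 for two dissolution profiles.

    ref, test: percent dissolved at matched time points; f2 > 50 is the
    conventional similarity threshold mentioned in the abstract.
    """
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)               # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

identical = f2_similarity([20, 45, 70, 90], [20, 45, 70, 90])  # zero difference -> f2 = 100
shifted = f2_similarity([20, 45, 70, 90], [30, 55, 80, 100])   # 10% offset at every point
```

    A uniform 10% offset lands just under the f2 = 50 threshold, which is exactly why the abstract's multivariate methods are a useful supplement to this single summary number.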

  2. Statistical Mechanics of Zooplankton

    PubMed Central

    Hinow, Peter; Nihongi, Ai; Strickler, J. Rudi

    2015-01-01

    Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar “microscopic” quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the “ecological temperature” of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean’s swimming behavior. PMID:26270537
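
    The "ecological temperature" proposed above is defined from average squared velocities. A minimal sketch for a single tracked animal (the paper averages over a population; the sampling interval and toy track here are invented for illustration):

```python
import numpy as np

def ecological_temperature(positions, dt):
    """Average squared speed along a track: the paper's 'ecological temperature'
    (there defined over a population; one animal is shown for simplicity)."""
    velocities = np.diff(positions, axis=0) / dt     # finite-difference velocities
    return float(np.mean(np.sum(velocities ** 2, axis=1)))

# Invented 2-D track: constant speed 2 (units/s) along x, sampled at 1 Hz
track = np.column_stack([np.arange(6) * 2.0, np.zeros(6)])
temp = ecological_temperature(track, dt=1.0)   # |v|^2 = 4 at every step -> 4.0
```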

  3. Statistical Mechanics of Zooplankton.

    PubMed

    Hinow, Peter; Nihongi, Ai; Strickler, J Rudi

    2015-01-01

    Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar "microscopic" quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the "ecological temperature" of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean's swimming behavior. PMID:26270537

  4. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea

  5. Robot Trajectories Comparison: A Statistical Approach

    PubMed Central

    Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.

    2014-01-01

    The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
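
    The kind of statistical comparison described, applied to a single trajectory feature, can be sketched with a permutation test. The feature (path length) and all numbers below are hypothetical; the paper's own pipeline additionally performs automatic feature selection.

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference of means of one feature
    (e.g. path length) measured for trajectories from two motion planners."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # random relabeling of trajectories
        count += abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed
    return (count + 1) / (n_perm + 1)

# Hypothetical path lengths (metres) produced by two planners in one environment
p_value = permutation_pvalue([10.2, 11.1, 9.8, 10.5, 10.9],
                             [12.4, 13.0, 12.1, 12.8, 13.3])
```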

  6. Robot trajectories comparison: a statistical approach.

    PubMed

    Ansuategui, A; Arruti, A; Susperregi, L; Yurramendi, Y; Jauregi, E; Lazkano, E; Sierra, B

    2014-01-01

    The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM(2) and WaveFront, using different environments, robots, and local planners. PMID:25525618

  7. Explorations in Statistics: Regression

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2011-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…

  8. Multidimensional Visual Statistical Learning

    ERIC Educational Resources Information Center

    Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.

    2008-01-01

    Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…

  9. Deconstructing Statistical Analysis

    ERIC Educational Resources Information Center

    Snell, Joel

    2014-01-01

    Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article or making a new product or service legitimate needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis, and research the fewer the number of learned readers can understand it. This adds a…

  10. Explorations in Statistics: Power

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2010-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…

  11. Applied Statistics with SPSS

    ERIC Educational Resources Information Center

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  12. Overhead Image Statistics

    SciTech Connect

    Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A

    2008-01-01

    Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationship are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has showed some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. The results from our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.
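
    One of the global statistics mentioned, the shape of the power spectrum, is commonly summarized by radially averaging the 2-D Fourier power of the image. A sketch (the bin count and the random test image are arbitrary choices, not the study's data):

```python
import numpy as np

def radial_power_spectrum(img, n_bins=20):
    """Radially averaged Fourier power spectrum of a grayscale image,
    the kind of global statistic compared across land-use categories."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)             # radial frequency distance
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    total = np.bincount(which, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return total / np.maximum(counts, 1)             # mean power per radial bin

rng = np.random.default_rng(1)
spec = radial_power_spectrum(rng.standard_normal((64, 64)))
```

    Comparing the decay of `spec` across categories (e.g. downtown versus wooded) is one way the contour-shape differences described above can be quantified.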

  13. Understanding Undergraduate Statistical Anxiety

    ERIC Educational Resources Information Center

    McKim, Courtney

    2014-01-01

    The purpose of this study was to understand undergraduate students' views of statistics. Results reveal that students with less anxiety have a higher interest in statistics and also believe in their ability to perform well in the course. Also students who have a more positive attitude about the class tend to have a higher belief in their…

  14. Statistics and Measurements

    PubMed Central

    Croarkin, M. Carroll

    2001-01-01

    For more than 50 years, the Statistical Engineering Division (SED) has been instrumental in the success of a broad spectrum of metrology projects at NBS/NIST. This paper highlights fundamental contributions of NBS/NIST statisticians to statistics and to measurement science and technology. Published methods developed by SED staff, especially during the early years, endure as cornerstones of statistics not only in metrology and standards applications, but as data-analytic resources used across all disciplines. The history of statistics at NBS/NIST began with the formation of what is now the SED. Examples from the first five decades of the SED illustrate the critical role of the division in the successful resolution of a few of the highly visible, and sometimes controversial, statistical studies of national importance. A review of the history of major early publications of the division on statistical methods, design of experiments, and error analysis and uncertainty is followed by a survey of several thematic areas. The accompanying examples illustrate the importance of SED in the history of statistics, measurements and standards: calibration and measurement assurance, interlaboratory tests, development of measurement methods, Standard Reference Materials, statistical computing, and dissemination of measurement technology. A brief look forward sketches the expanding opportunity and demand for SED statisticians created by current trends in research and development at NIST.

  15. Water Quality Statistics

    ERIC Educational Resources Information Center

    Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain

    2004-01-01

    Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…

  16. Application Statistics 1987.

    ERIC Educational Resources Information Center

    Council of Ontario Universities, Toronto.

    Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…

  17. Introduction to Statistical Physics

    NASA Astrophysics Data System (ADS)

    Casquilho, João Paulo; Ivo Cortez Teixeira, Paulo

    2014-12-01

    Preface; 1. Random walks; 2. Review of thermodynamics; 3. The postulates of statistical physics. Thermodynamic equilibrium; 4. Statistical thermodynamics – developments and applications; 5. The classical ideal gas; 6. The quantum ideal gas; 7. Magnetism; 8. The Ising model; 9. Liquid crystals; 10. Phase transitions and critical phenomena; 11. Irreversible processes; Appendixes; Index.

  18. Reform in Statistical Education

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    2007-01-01

    Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…

  19. Statistical Mapping by Computer.

    ERIC Educational Resources Information Center

    Utano, Jack J.

    The function of a statistical map is to provide readers with a visual impression of the data so that they may be able to identify any geographic characteristics of the displayed phenomena. The increasingly important role played by the computer in the production of statistical maps is manifested by the varied examples of computer maps in recent…

  20. DISABILITY STATISTICS CENTER

    EPA Science Inventory

    The purpose of the Disability Statistics Center is to produce and disseminate statistical information on disability and the status of people with disabilities in American society and to establish and monitor indicators of how conditions are changing over time to meet their health...

  1. Tsallis statistics and neurodegenerative disorders

    NASA Astrophysics Data System (ADS)

    Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.

    2016-08-01

    In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD) and Huntington's disease (HD). The time series are concerned with electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular stride intervals) of the ALS, PD and HD subjects. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsen, qrel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD subjects and healthy controls. The results indicate that estimations of Tsallis q-indices could be used as possible biomarkers, along with others, for improving classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medications and fall risk, and improving therapeutic interventions.
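
    The q-Gaussian underlying a qstat estimate is built from the Tsallis q-exponential. A minimal sketch of both functions, unnormalised and using the usual cutoff convention (set to zero where the base is non-positive); this is the functional form only, not the paper's fitting procedure:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
    x = np.asarray(x, float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.where(base > 0, np.abs(base), 1.0) ** (1.0 / (1.0 - q))
    return np.where(base > 0, out, 0.0)              # cutoff where base <= 0

def q_gaussian(x, q, beta):
    """Unnormalised q-Gaussian p(x) ~ e_q(-beta x^2); heavy tails for q > 1."""
    return q_exponential(-beta * np.asarray(x, float) ** 2, q)

at_one = float(q_exponential(0.0, 1.5))              # e_q(0) = 1 for any q
heavy_tail = float(q_gaussian(5.0, 2.0, 1.0))        # (1 + 25)^(-1) for q = 2
```

    For q = 2 the tail decays as a power law (here 1/26 at x = 5), far slower than the Gaussian exp(-25) — the non-Gaussian signature the q-triplet quantifies.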

  2. Teaching biology through statistics: application of statistical methods in genetics and zoology courses.

    PubMed

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporating interdisciplinary (math-biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We distinguished the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology. PMID:21885822

  3. Teaching Biology through Statistics: Application of Statistical Methods in Genetics and Zoology Courses

    PubMed Central

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A.

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporating interdisciplinary (math–biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We distinguished the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology. PMID:21885822

  4. [Food additives and healthiness].

    PubMed

    Heinonen, Marina

    2014-01-01

    Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects. PMID:24772784
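
    The ADI logic in the abstract is simple arithmetic: estimated daily intake is compared against the ADI times body weight. A sketch with invented numbers (not real ADI values for any additive):

```python
def adi_exceeded(intake_mg, body_weight_kg, adi_mg_per_kg):
    """True when estimated daily intake crosses the acceptable daily intake
    (ADI is expressed per kg of body weight per day)."""
    return intake_mg > adi_mg_per_kg * body_weight_kg

# Invented illustration: 70 kg adult, ADI 5 mg/kg bw/day,
# estimated intake 200 mg/day -> limit is 350 mg/day, so no exceedance
risk = adi_exceeded(200.0, 70.0, 5.0)
```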

  5. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. R.; St. Clair, T. L.; Burks, H. D.; Stoakley, D. M.

    1987-01-01

    A method has been found for enhancing the melt flow of thermoplastic polyimides during processing. A high molecular weight 422 copoly(amic acid) or copolyimide was fused with approximately 0.05 to 5 pct by weight of a low molecular weight amic acid or imide additive, and this melt was studied by capillary rheometry. Excellent flow and improved composite properties on graphite resulted from the addition of a PMDA-aniline additive to LARC-TPI. Solution viscosity studies imply that amic acid additives temporarily lower molecular weight and, hence, enlarge the processing window. Thus, compositions containing the additive have a lower melt viscosity for a longer time than those unmodified.

  6. Lessons from Inferentialism for Statistics Education

    ERIC Educational Resources Information Center

    Bakker, Arthur; Derry, Jan

    2011-01-01

    This theoretical paper relates recent interest in informal statistical inference (ISI) to the semantic theory termed inferentialism, a significant development in contemporary philosophy, which places inference at the heart of human knowing. This theory assists epistemological reflection on challenges in statistics education encountered when…

  7. T1 VSAT Fade Compensation Statistical Results

    NASA Technical Reports Server (NTRS)

    Johnson, Sandra K.; Acosta, Roberto; Ugweje, Oke

    2000-01-01

    New satellite communication systems are steadily seeking to use higher frequency bands to accommodate the requirements for additional capacity. At these higher frequencies, propagation impairments that did not significantly affect the signal at lower frequencies begin to have considerable impact. In Ka-band, the next logical commercial frequency band to be used for satellite communication, attenuation of the signal due to rain is a primary concern. An experimental satellite built by NASA, the Advanced Communication Technology Satellite (ACTS), launched in September 1993, is the first U.S. communication satellite operating in the Ka-band. In addition to higher carrier frequencies, a number of other new technologies, including on-board baseband processing, multiple beam antennas, and rain fade detection and compensation techniques, were designed into the ACTS. Verification experiments have been conducted since the launch to characterize the new technologies. The focus of this paper is to characterize the method used by the ACTS T1 Very Small Aperture Terminal (T1 VSAT) ground stations in detecting the presence of fade in the communication signal and to adaptively compensate for it by the addition of burst rate reduction and forward error correction. Measured data obtained from the ACTS program were used to validate the compensation technique. A software process was developed and demonstrated to statistically characterize the increased availability achieved by the compensation techniques in terms of the bit error rate time enhancement factor. Several improvements to the ACTS technique are discussed and possible implementations for future Ka-band systems are offered.
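
    The availability gain from fade compensation can be expressed as the fraction of time the bit error rate (BER) stays below a threshold, before and after compensation. The BER traces and the fixed compensation gain below are invented stand-ins for illustration, not ACTS measurements:

```python
import numpy as np

def availability(ber, threshold=1e-7):
    """Fraction of time the link's bit error rate stays below a threshold."""
    return float(np.mean(np.asarray(ber) < threshold))

# Invented per-second BER traces over one rain event
rng = np.random.default_rng(3)
clear_sky = 10.0 ** rng.uniform(-9, -8, 1000)
faded = clear_sky * 10.0 ** rng.uniform(0, 4, 1000)  # rain attenuation degrades BER
compensated = faded / 1.0e3   # crude fixed gain standing in for rate reduction + FEC

enhancement = availability(compensated) / availability(faded)
```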

  8. Statistics at a glance.

    PubMed

    Ector, Hugo

    2010-12-01

    I still remember my first book on statistics: "Elementary statistics with applications in medicine and the biological sciences" by Frederick E. Croxton. For me, it was the start of pursuing an understanding of statistics in daily life and in medical practice. It was the first volume in a long row of books. In his introduction, Croxton claims that "nearly everyone involved in any aspect of medicine needs to have some knowledge of statistics". The reality is that for many clinicians, statistics are limited to a "P < 0.05 = ok". I do not blame my colleagues who omit the paragraph on statistical methods. They have never had the opportunity to learn concise and clear descriptions of the key features. I have experienced how some authors can describe difficult methods in a well understandable language. Others fail completely. As a teacher, I tell my students that life is impossible without a basic knowledge of statistics. This feeling has resulted in an annual seminar of 90 minutes. This tutorial is the summary of this seminar. It is a summary and a transcription of the best pages I have detected. PMID:21302664

  9. The incoming statistical knowledge of undergraduate majors in a department of mathematics and statistics

    NASA Astrophysics Data System (ADS)

    Cook, Samuel A.; Fukawa-Connelly, Timothy

    2016-02-01

    Studies have shown that at the end of an introductory statistics course, students struggle with building block concepts, such as mean and standard deviation, and rely on procedural understandings of the concepts. This study aims to investigate the understanding that entering freshmen of a department of mathematics and statistics (including mathematics education), students who are presumably better prepared in terms of mathematics and statistics than the average university student, have of introductory statistics. This case study found that these students enter college with common statistical misunderstandings, lack of knowledge, and idiosyncratic collections of correct statistical knowledge. They also have a wide range of beliefs about their knowledge, with some of the students who believe they have the strongest knowledge also holding significant misconceptions. More attention to these statistical building blocks may be required in a university introductory statistics course.

  10. Statistical Mechanics of Infinite Gravitating Systems

    NASA Astrophysics Data System (ADS)

    Saslaw, William C.

    2008-01-01

    The cosmological many-body problem was stated over 300 years ago, but its solution is quite recent and still incomplete. Imagine an infinite expanding universe essentially containing a very large number of objects moving in response to their mutual gravitational forces. What will be the spatial and velocity distributions of these objects and how will they evolve? This question fascinates on many levels. Though inherently non-linear, it turns out to be one of the few analytically solvable problems of statistical mechanics with long range forces. The partition function can be calculated. From this all the thermodynamic properties of the system can be obtained for the grand canonical ensemble. They confirm results derived independently directly from the first and second laws of thermodynamics. The behavior of infinite gravitating systems is quite different from their finite relations such as star clusters. Infinite gravitating systems have regimes of negative specific heat, an unusual type of phase transition, and a very close relation to the observed large-scale structure of our universe. This last feature provides an additional astronomical motivation, especially since the statistical mechanics may be generalized to include effects of dark matter haloes around galaxies. Previously the cosmological many-body problem has mostly been studied using the BBGKY hierarchy (not so suitable in the non-linear regime) and by direct computer integrations of the objects' orbits. The statistical mechanics agrees with and substantially extends these earlier results. Most astrophysicists had previously thought that a statistical thermodynamic approach would not be applicable because: a) many-body gravitational systems have no rigorous equilibrium state, b) the unshielded nature of the long-range force would cause the partition function to diverge on large scales, and c) point masses would produce divergences on small scales. However, deeper considerations show that these are not

  11. Predicting Success in Psychological Statistics Courses.

    PubMed

    Lester, David

    2016-06-01

    Many students perform poorly in courses on psychological statistics, and it is useful to be able to predict which students will have difficulties. In a study of 93 undergraduates enrolled in Statistical Methods (18 men, 75 women; M age = 22.0 years, SD = 5.1), performance was significantly associated with sex (female students performed better) and proficiency in algebra in a linear regression analysis. Anxiety about statistics was not associated with course performance, indicating that basic mathematical skills are the best correlate of performance in statistics courses and can be used to stream students into classes by ability. PMID:27273557
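
    A regression of the kind reported (course score on sex and algebra proficiency) can be sketched with ordinary least squares. The data below are simulated with invented coefficients, not the study's 93 students:

```python
import numpy as np

# Simulated stand-in: course score regressed on sex (0 = male, 1 = female)
# and an algebra proficiency score, mirroring the study's two predictors
rng = np.random.default_rng(42)
n = 93
sex = rng.integers(0, 2, n)
algebra = rng.normal(70.0, 10.0, n)
score = 20.0 + 5.0 * sex + 0.6 * algebra + rng.normal(0.0, 5.0, n)

X = np.column_stack([np.ones(n), sex, algebra])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, score, rcond=None)  # [intercept, sex, algebra]
```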

  12. Statistics: A Brief Overview

    PubMed Central

    Winters, Ryan; Winters, Andrew; Amedee, Ronald G.

    2010-01-01

    The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381
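
    One of the selected concepts above, parametric versus nonparametric methods, can be illustrated with Pearson correlation versus Spearman correlation (Pearson computed on ranks). The data are a toy monotone-but-nonlinear example:

```python
import numpy as np

def pearson(x, y):
    """Parametric correlation: strength of *linear* association."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Nonparametric correlation: Pearson on ranks, sensitive only to
    monotone association and robust to outliers."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return pearson(rank(x), rank(y))

x = np.arange(1, 9, dtype=float)
y = x ** 3                      # perfectly monotone but nonlinear
r_pearson = pearson(x, y)       # < 1: the relationship is not linear
r_spearman = spearman(x, y)     # 1: the ranks agree exactly
```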

  13. Statistics of football dynamics

    NASA Astrophysics Data System (ADS)

    Mendes, R. S.; Malacarne, L. C.; Anteneodo, C.

    2007-06-01

    We investigate the dynamics of football matches. Our goal is to characterize statistically the temporal sequence of ball movements in this collective sport, searching for traits of complex behavior. Data were collected over a variety of matches in South American, European and World championships throughout 2005 and 2006. We show that the statistics of ball touches present power-law tails and can be described by q-gamma distributions. To explain such behavior we propose a model that provides information on the characteristics of football dynamics. Furthermore, we discuss the statistics of the duration of out-of-play intervals, which is not directly related to the previous scenario.

  14. Hockey sticks, principal components, and spurious significance

    NASA Astrophysics Data System (ADS)

    McIntyre, Stephen; McKitrick, Ross

    2005-02-01

    The ``hockey stick'' shaped temperature reconstruction of Mann et al. (1998, 1999) has been widely applied. However, it has not previously been noted in print that, prior to their principal components (PCs) analysis on tree ring networks, they carried out an unusual data transformation which strongly affects the resulting PCs. Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue. In the controversial 15th century period, the MBH98 method effectively selects only one species (bristlecone pine) into the critical North American PC1, making it implausible to describe it as the ``dominant pattern of variance''. Through Monte Carlo analysis, we show that MBH98 benchmarks for significance of the Reduction of Error (RE) statistic are substantially understated and, using a range of cross-validation statistics, we show that the MBH98 15th century reconstruction lacks statistical significance.

  15. Additive usage levels.

    PubMed

    Langlais, R

    1996-01-01

    With the adoption of the European Parliament and Council Directives on sweeteners, colours and miscellaneous additives, the Commission is now embarking on the project of coordinating the activities of the European Union Member States in the collection of the data that are to make up the report on food additive intake requested by the European Parliament. This presentation looks at the inventory of available sources on additive use levels and concludes that for the time being national legislation is still the best source of information, considering that the directives have yet to be transposed into national legislation. Furthermore, this presentation covers the correlation of the food categories as found in the additives directives with those used by national consumption surveys and finds that in a number of instances this correlation still leaves a lot to be desired. The intake of additives via food ingestion, plus the intake of substances which are chemically identical to additives but which occur naturally in fruits and vegetables, is found in a number of cases to be higher than the intake of additives added during the manufacture of foodstuffs. While the difficulties in contributing to the compilation of food additive intake data are recognized, industry as a whole, i.e. the food manufacturing and food additive manufacturing industries, is confident that in a concerted effort, use data on food additives by industry can be made available. Lastly, the paper points out that several years will still go by before the additives directives are transposed into national legislation and the food industry is able to make use of the new food legislative environment; food additive use data from the food industry will thus have to be reviewed at the beginning of the next century. PMID:8792135

  16. Petroleum statistics in France

    SciTech Connect

    De Saint Germain, H.; Lamiraux, C.

    1995-08-01

    Thirty-three oil companies, including Elf, Exxon, Agip and Conoco, as well as Coparex, Enron, Hadson, Midland, Hunt, Canyon and Union Texas, are active in oil and gas exploration and production in France. The production of oil and gas in France amounts to some 60,000 bopd of oil and 350 MMcfpd of marketed natural gas, which still accounts for 3.5% and 10% of French domestic needs, respectively. To date, 166 fields have been discovered, representing total reserves of 3 billion bbl of crude oil and 13 trillion cf of raw gas. These fields are concentrated in two major onshore sedimentary basins of Mesozoic age, the Aquitaine basin and the Paris basin. The Aquitaine basin can be subdivided into two distinct domains: the Parentis basin, where the largest field, Parentis, was discovered in 1954 and still produces about 3,700 bopd of oil, and where the Les Arbouslers field, discovered at the end of 1991, is currently producing about 10,000 bopd of oil; and the northern Pyrenees and their foreland, where the Lacq field, discovered in 1951, has produced about 7.7 tcf of gas since 1957 and is still producing 138 MMcfpd. In the Paris basin, the two large oil fields are Villeperclue, discovered in 1982 by Triton and Total, and Chaunoy, discovered in 1983 by Essorep, which are still producing about 10,000 and 15,000 bopd, respectively. The last significantly sized discovery was Itteville in 1990, by Elf Aquitaine, which is currently producing 4,200 bopd. The poster shows statistical data related to the past 20 years of oil and gas exploration and production in France.

  17. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
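
    Several of these spreadsheet calculations are easy to reproduce outside Excel. A sketch in Python, assuming the "Normal Distribution Estimates" program returns the value corresponding to a given cumulative probability for a normal with the sample mean and standard deviation; the data below are made up:

```python
from statistics import mean, stdev, NormalDist

data = [4.1, 4.7, 5.0, 5.3, 5.9, 6.2]   # sample entered by the user

# Descriptive statistics for the sample.
mu, sigma = mean(data), stdev(data)

# "Normal Distribution Estimates": the value at cumulative probability 0.95,
# for a normal distribution with the sample mean and standard deviation.
p95 = NormalDist(mu, sigma).inv_cdf(0.95)
```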

  18. An additional middle cuneiform?

    PubMed Central

    Brookes-Fazakerley, S.D.; Jackson, G.E.; Platt, S.R.

    2015-01-01

    Additional cuneiform bones of the foot have been described in reference to the medial bipartite cuneiform or as small accessory ossicles. An additional middle cuneiform has not been previously documented. We present the case of a patient with an additional ossicle that has the appearance and location of an additional middle cuneiform. Recognizing such an anatomical anomaly is essential for ruling out second metatarsal base or middle cuneiform fractures and for the preoperative planning of arthrodesis or open reduction and internal fixation procedures in this anatomical location. PMID:26224890

  19. Statistical properties of Chinese phonemic networks

    NASA Astrophysics Data System (ADS)

    Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan

    2011-04-01

    The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey normal distribution and the weighted degrees obey power law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.

  20. Playing at Statistical Mechanics

    ERIC Educational Resources Information Center

    Clark, Paul M.; And Others

    1974-01-01

    Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)

  1. Cooperative Learning in Statistics.

    ERIC Educational Resources Information Center

    Keeler, Carolyn M.; And Others

    1994-01-01

    Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)

  2. Understanding Solar Flare Statistics

    NASA Astrophysics Data System (ADS)

    Wheatland, M. S.

    2005-12-01

    A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well-known flare power-law size distribution. Although avalanche models are perhaps the favoured models for describing flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
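
    One way to build intuition for waiting-time distributions is a toy piecewise-constant Poisson model: a constant flaring rate gives exponential waiting times, while a time-varying rate mixes exponentials and fattens the tail. This is only an illustration of that general idea, not Wheatland's actual model or prediction method:

```python
import random

random.seed(1)

def waiting_times(rates, n_per_segment=5000):
    """Waiting times from a piecewise-constant Poisson process.

    For each rate, waiting times within that segment are exponential with
    mean 1/rate; concatenating segments mixes the exponentials.
    """
    wts = []
    for lam in rates:
        wts += [random.expovariate(lam) for _ in range(n_per_segment)]
    return wts

constant = waiting_times([1.0])             # single rate: exponential
variable = waiting_times([0.2, 1.0, 5.0])   # varying rate: heavier tail
```

The tail fraction beyond several mean waiting times is visibly larger for the variable-rate sample, which is the qualitative signature discussed for observed flare waiting-time distributions.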

  3. Plague Maps and Statistics

    MedlinePlus

    ... and Statistics Plague in the United States Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide Plague epidemics have occurred in Africa, Asia, ...

  4. Elements of Statistics

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2016-05-01

    This chapter is devoted to two objectives. The first one is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the given courses. In this spirit we recall very basic notions, that is definitions and properties that we think sufficient to benefit from courses given in the Astrostatistical School. Thus we give briefly definitions and elementary properties on random variables and vectors, distributions, estimation and tests, maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation, and due to the place devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.

  5. Tuberculosis Data and Statistics

    MedlinePlus

    ... Data and Statistics ... United States publication. PDF [6 MB] Interactive TB Data Tool: Online Tuberculosis Information System (OTIS). OTIS is ...

  6. Statistics of the sagas

    NASA Astrophysics Data System (ADS)

    Richfield, Jon; bookfeller

    2016-07-01

    In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.

  7. Brain Tumor Statistics

    MedlinePlus

    ... facts and statistics here include brain and central nervous system tumors (including spinal cord, pituitary and pineal gland ... U.S. living with a primary brain and central nervous system tumor. This year, nearly 17,000 people will ...

  8. Purposeful Statistical Investigations

    ERIC Educational Resources Information Center

    Day, Lorraine

    2014-01-01

    Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations links mathematics to students' lives and provides engaging and meaningful contexts for mathematical inquiry.

  9. Statistical process control

    SciTech Connect

    Oakland, J.S.

    1986-01-01

    Addressing the increasing importance for firms to have a thorough knowledge of statistically based quality control procedures, this book presents the fundamentals of statistical process control (SPC) in a non-mathematical, practical way. It provides real-life examples and data drawn from a wide variety of industries. The foundations of good quality management and process control, and control of conformance and consistency during production are given. Offers clear guidance to those who wish to understand and implement modern SPC techniques.

  10. Statistical Physics of Particles

    NASA Astrophysics Data System (ADS)

    Kardar, Mehran

    2006-06-01

    Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on statistical mechanics taught by the author at MIT; contains 89 exercises, with solutions to selected problems; contains chapters on probability and interacting particles; ideal for graduate courses in statistical mechanics.

  11. Statistical Physics of Fields

    NASA Astrophysics Data System (ADS)

    Kardar, Mehran

    2006-06-01

    While many scientists are familiar with fractals, fewer are familiar with the concepts of scale-invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains an integrated set of problems, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873413. Based on lecture notes from a course on statistical mechanics taught by the author at MIT; contains 65 exercises, with solutions to selected problems; features a thorough introduction to the methods of statistical field theory; ideal for graduate courses in statistical physics.

  12. Primarily Statistics: Developing an Introductory Statistics Course for Pre-Service Elementary Teachers

    ERIC Educational Resources Information Center

    Green, Jennifer L.; Blankenship, Erin E.

    2013-01-01

    We developed an introductory statistics course for pre-service elementary teachers. In this paper, we describe the goals and structure of the course, as well as the assessments we implemented. Additionally, we use example course work to demonstrate pre-service teachers' progress both in learning statistics and as novice teachers. Overall, the…

  13. Anthropological significance of phenylketonuria.

    PubMed

    Saugstad, L F

    1975-01-01

    The highest incidence rates of phenylketonuria (PKU) have been observed in Ireland and Scotland. Parents heterozygous for PKU in Norway differ significantly from the general population in the Rhesus, Kell and PGM systems. The parents investigated showed an excess of Rh negative, Kell plus and PGM type 1 individuals, which makes them similar to the present populations in Ireland and Scotland. It is postulated that the heterozygotes for PKU in Norway are descended from a completely assimilated sub-population of Celtic origin, who came or were brought here, 1000 years ago. Bronze objects of Western European (Scottish, Irish) origin, found in Viking graves widely distributed in Norway, have been taken as evidence of Vikings returning with loot (including a number of Celts) from Western Viking settlements. The continuity of residence since the Viking age in most habitable parts of Norway, and what seems to be a nearly complete regional relationship between the sites where Viking graves contain western imported objects and the birthplaces of grandparents of PKUs identified in Norway, lend further support to the hypothesis that the heterozygotes for PKU in Norway are descended from a completely assimilated subpopulation. The remarkable resemblance between Iceland and Ireland, in respect of several genetic markers (including the Rhesus, PGM and Kell systems), is considered to be an expression of a similar proportion of people of Celtic origin in each of the two countries. Their identical, high incidence rates of PKU are regarded as further evidence of this. The significant decline in the incidence of PKU when one passes from Ireland, Scotland and Iceland, to Denmark and on to Norway and Sweden, is therefore explained as being related to a reduction in the proportion of inhabitants of Celtic extraction in the respective populations. PMID:803884

  14. Carbamate deposit control additives

    SciTech Connect

    Honnen, L.R.; Lewis, R.A.

    1980-11-25

    Deposit control additives for internal combustion engines are provided which maintain cleanliness of intake systems without contributing to combustion chamber deposits. The additives are poly(oxyalkylene) carbamates comprising a hydrocarbyloxy-terminated poly(oxyalkylene) chain of 2-5 carbon oxyalkylene units bonded through an oxycarbonyl group to a nitrogen atom of ethylenediamine.

  15. NASA Pocket Statistics: 1997 Edition

    NASA Technical Reports Server (NTRS)

    1997-01-01

    POCKET STATISTICS is published by the NATIONAL AERONAUTICS AND SPACE ADMINISTRATION (NASA). Included in each edition is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, Aeronautics and Space Transportation, and NASA Procurement, Financial and Workforce data. The NASA Major Launch Record includes all launches of Scout-class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free-flying payloads are not involved. All satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  16. Nonstationary statistical theory for multipactor

    SciTech Connect

    Anza, S.; Vicente, C.; Gil, J.

    2010-06-15

    This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.

  17. Statistical properties of convex clustering

    PubMed Central

    Tan, Kean Ming; Witten, Daniela

    2016-01-01

    In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single linkage hierarchical clustering and k-means clustering. In addition, we derive the range of the tuning parameter for convex clustering that yields a non-trivial solution. We also provide an unbiased estimator of the degrees of freedom, and provide a finite sample bound for the prediction error for convex clustering. We compare convex clustering to some traditional clustering methods in simulation studies.

  18. BETTER STATISTICS FOR BETTER DECISIONS: REJECTING NULL HYPOTHESIS SIGNIFICANCE TESTS IN FAVOR OF REPLICATION STATISTICS

    PubMed Central

    SANABRIA, FEDERICO; KILLEEN, PETER R.

    2008-01-01

    Despite being under challenge for the past 50 years, null hypothesis significance testing (NHST) remains dominant in the scientific field for want of viable alternatives. NHST, along with its significance level p, is inadequate for most of the uses to which it is put, a flaw that is of particular interest to educational practitioners who too often must use it to sanctify their research. In this article, we review the failure of NHST and propose prep, the probability of replicating an effect, as a more useful statistic for evaluating research and aiding practical decision making. PMID:19122766
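
    For a concrete sense of prep, one commonly cited approximation computes it directly from a one-tailed p value as prep = [1 + (p/(1-p))^(2/3)]^(-1). We take this exact form as an assumption of the sketch below, since the abstract itself does not state a formula:

```python
def p_rep(p):
    """Estimated probability of replicating an effect's direction, via the
    approximation p_rep = [1 + (p / (1 - p))**(2/3)]**(-1), for a
    one-tailed p value in (0, 1)."""
    return 1.0 / (1.0 + (p / (1.0 - p)) ** (2.0 / 3.0))
```

Under this approximation, smaller p values map to higher replication probabilities, and p = 0.5 (a coin-flip result) maps to prep = 0.5.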

  19. Effects of wall curvature on turbulence statistics

    NASA Technical Reports Server (NTRS)

    Moser, R. D.; Moin, P.

    1985-01-01

    A three-dimensional, time-dependent, direct numerical simulation of low-Reynolds-number turbulent flow in a mildly curved channel was performed, and the results examined to determine the mechanism by which curvature affects wall-bounded turbulent shear flows. A spectral numerical method with about one million modes was employed, and no explicit subgrid scale model was used. The effects of curvature on this flow were determined by comparing the concave and convex sides of the channel. The observed effects are consistent with experimental observations for mild curvature. The most significant difference in the turbulence statistics between the concave and convex sides is in the Reynolds shear stress. This is accompanied by significant differences in the terms of the Reynolds shear stress balance equations. In addition, it was found that stationary Taylor-Goertler vortices were present and that they had a significant effect on the flow by contributing to the mean Reynolds shear stress, and by enhancing the difference between the wall shear stresses.

  20. Statistical learning and selective inference

    PubMed Central

    Taylor, Jonathan; Tibshirani, Robert J.

    2015-01-01

    We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis. PMID:26100887
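
    The cherry-picking problem is easy to demonstrate by simulation: draw many null test statistics (no real effects anywhere), select the most extreme one, and judge it against the usual single-test cutoff. This toy sketch is only an illustration of why the bar must be raised, not the authors' selective-inference machinery:

```python
import random

random.seed(0)

# 200 null z-statistics: every underlying effect is exactly zero.
z = [random.gauss(0.0, 1.0) for _ in range(200)]

# "Cherry-pick" the strongest association, then judge it against the
# single-test 5% cutoff |z| > 1.96. The selected statistic almost always
# clears the cutoff even though nothing real is present, so a higher,
# selection-adjusted bar is needed.
z_selected = max(z, key=abs)
```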

  1. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  2. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  3. Significant Radionuclides Determination

    SciTech Connect

    Jo A. Ziegler

    2001-07-31

    The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, ''DOE SNF DBE Offsite Dose Calculations'' (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in ''DOE SNF DBE Offsite Dose Calculations'' and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, ''Calculations'', and is subject to the requirements of DOE/RW-0333P, ''Quality Assurance Requirements and Description'' (DOE 2000) as determined by the activity evaluation contained in ''Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010'' (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities''.

  4. Fungi producing significant mycotoxins.

    PubMed

    2012-01-01

    Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. 
    The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors

  5. Statistical mechanics of economics I

    NASA Astrophysics Data System (ADS)

    Kusmartsev, F. V.

    2011-02-01

    We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct ensembles of them and study their statistical inference. This results in a probability description of the market and gives capital, money, income, wealth and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws which govern the correlations between the amount of money, debt, product, prices and number of retailers. We applied the relations found to a study of the evolution of the economy of the USA between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by the Bose-Einstein distribution, whose parameters are different for each year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates the past crises and predicts the future one.
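
    The Bose-Einstein form referred to here is the standard occupation function n(ε) = 1 / (exp((ε - μ)/T) - 1). A sketch that evaluates it follows; the economic reading of ε, μ and T (income level, chemical-potential-like offset, and "temperature") follows the paper's analogy and is an assumption of this illustration:

```python
from math import exp

def bose_einstein(eps, mu, T):
    """Bose-Einstein occupation number n(eps) = 1 / (exp((eps - mu)/T) - 1),
    valid for eps > mu, in units where Boltzmann's constant is 1."""
    return 1.0 / (exp((eps - mu) / T) - 1.0)
```

Two sanity checks: the function decreases monotonically in ε, and for (ε - μ)/T large it approaches the classical Boltzmann tail exp(-(ε - μ)/T).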

  6. Statistical Physics of Fracture

    SciTech Connect

    Alava, Mikko; Nukala, Phani K; Zapperi, Stefano

    2006-05-01

Disorder and long-range interactions are two of the key components that make material failure an interesting playing field for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture, in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe, on a qualitative level, the failure processes of real brittle or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
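To give a flavour of such threshold models, here is a minimal sketch of the closely related equal-load-sharing fiber bundle model (not the lattice fuse model itself): N fibers with random failure thresholds share an increasing load equally, and the bundle's strength is set by the most stable partially broken configuration. All parameters are illustrative.

```python
import numpy as np

def fiber_bundle_strength(n, rng):
    """Critical stress per fiber of an equal-load-sharing fiber bundle
    with i.i.d. uniform(0, 1) failure thresholds."""
    thresholds = np.sort(rng.uniform(0.0, 1.0, n))
    # After the k weakest fibers have failed, the bundle can carry a
    # total force of thresholds[k] * (n - k) before the next failure.
    forces = thresholds * (n - np.arange(n))
    return forces.max() / n

rng = np.random.default_rng(42)
sigma_c = fiber_bundle_strength(100_000, rng)
# For uniform thresholds the critical stress tends to 1/4 as n grows,
# i.e. the maximum of x * (1 - x) on [0, 1].
```

The sample-to-sample fluctuation of `sigma_c` across random seeds is exactly the kind of strength statistic the lattice models above are used to study.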

  7. Statistical Downscaling: Lessons Learned

    NASA Astrophysics Data System (ADS)

    Walton, D.; Hall, A. D.; Sun, F.

    2013-12-01

    In this study, we examine ways to improve statistical downscaling of general circulation model (GCM) output. Why do we downscale GCM output? GCMs have low resolution, so they cannot represent local dynamics and topographic effects that cause spatial heterogeneity in the regional climate change signal. Statistical downscaling recovers fine-scale information by utilizing relationships between the large-scale and fine-scale signals to bridge this gap. In theory, the downscaled climate change signal is more credible and accurate than its GCM counterpart, but in practice, there may be little improvement. Here, we tackle the practical problems that arise in statistical downscaling, using temperature change over the Los Angeles region as a test case. This region is an ideal place to apply downscaling since its complex topography and shoreline are poorly simulated by GCMs. By comparing two popular statistical downscaling methods and one dynamical downscaling method, we identify issues with statistically downscaled climate change signals and develop ways to fix them. We focus on scale mismatch, domain of influence, and other problems - many of which users may be unaware of - and discuss practical solutions.
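As a toy illustration of the statistical-downscaling idea (not the specific methods compared in the study), one can regress each fine-scale site on a coarse regional predictor and apply the fitted relation to a new coarse value; the per-site amplification factors below are invented stand-ins for topographic effects.

```python
import numpy as np

rng = np.random.default_rng(1)
coarse = rng.normal(15.0, 3.0, 200)        # coarse regional-mean temperature
scale = np.array([0.8, 1.0, 1.3])          # invented per-site amplification
fine = np.outer(coarse, scale) + rng.normal(0.0, 0.2, (200, 3))

# Fit a linear transfer function per fine-scale site, then apply it
# to a new coarse value (e.g. a future GCM projection).
coefs = [np.polyfit(coarse, fine[:, j], 1) for j in range(3)]
future_coarse = 18.0
downscaled = [float(np.polyval(c, future_coarse)) for c in coefs]
```

The practical problems the abstract raises (scale mismatch, domain of influence) amount to asking whether such fitted relations remain valid for the new coarse input.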

  8. Smog control fuel additives

    SciTech Connect

    Lundby, W.

    1993-06-29

A method is described of controlling, reducing, or eliminating ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases, comprising the addition of iodine or compounds of iodine to hydrocarbon-base fuels prior to or during combustion, in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.

  9. Food Additives and Hyperkinesis

    ERIC Educational Resources Information Center

    Wender, Ester H.

    1977-01-01

    The hypothesis that food additives are causally associated with hyperkinesis and learning disabilities in children is reviewed, and available data are summarized. Available from: American Medical Association 535 North Dearborn Street Chicago, Illinois 60610. (JG)

  10. Additional Types of Neuropathy

    MedlinePlus

... Charcot's Joint, also called neuropathic arthropathy, ... can stop bone destruction and aid healing. Cranial neuropathy affects the 12 pairs of nerves ...

  11. XMM-Newton publication statistics

    NASA Astrophysics Data System (ADS)

    Ness, J.-U.; Parmar, A. N.; Valencic, L. A.; Smith, R.; Loiseau, N.; Salama, A.; Ehle, M.; Schartel, N.

    2014-02-01

We assessed the scientific productivity of XMM-Newton by examining XMM-Newton publications and data usage statistics. We analyse 3272 refereed papers, published until the end of 2012, that directly use XMM-Newton data. The SAO/NASA Astrophysics Data System (ADS) was used to provide additional information on each paper, including the number of citations. For each paper, the XMM-Newton observation identifiers and the instruments used to provide the scientific results were determined. The identifiers were used to access the XMM-Newton Science Archive (XSA) to provide detailed information on the observations themselves and on the original proposals. The information obtained from these sources was then combined to allow the scientific productivity of the mission to be assessed. From around three years after the launch of XMM-Newton there have been around 300 refereed papers per year that directly use XMM-Newton data. After more than 13 years in operation, this rate shows no evidence of decreasing. Since 2002, around 100 scientists per year become lead authors for the first time on a refereed paper which directly uses XMM-Newton data. Each refereed XMM-Newton paper receives around four citations per year in the first few years, with a long-term citation rate of three citations per year more than five years after publication. About half of the articles citing XMM-Newton articles are not primarily X-ray observational papers. The distribution of elapsed time between observations taken under the Guest Observer programme and the first article peaks at 2 years, with a possible second peak at 3.25 years. Observations taken under the Target of Opportunity programme are published significantly faster, after one year on average. The fraction of science time taken until the end of 2009 that has been used in at least one article is ~90%. Most observations were used more than once, yielding on average a factor of two in usage on available observing time per year.
About 20 % of

  12. Statistical properties of a quantum cellular automaton

    NASA Astrophysics Data System (ADS)

    Inui, Norio; Inokuchi, Shuichi; Mizoguchi, Yoshihiro; Konno, Norio

    2005-09-01

    We study a quantum cellular automaton (QCA) whose time evolution is defined using the global transition function of a classical cellular automaton (CA). In order to investigate natural transformations from CAs to QCAs, the present QCA includes the CA with Wolfram’s rules 150 and 105 as special cases. We first compute the time evolution of the QCA and examine its statistical properties. As a basic statistical value, the probability of finding an active cell averaged over spatial-temporal space is introduced, and the difference between the CA and QCA is considered. In addition, it is shown that statistical properties in QCAs are related to the classical trajectory in configuration space.
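The classical rule underlying the special cases above is easy to state: Wolfram's rule 150 updates each cell to the XOR of itself and its two neighbours. A sketch with periodic boundaries (an assumption; the paper's boundary conditions may differ):

```python
def step_rule150(cells):
    # Wolfram rule 150: next state = left XOR centre XOR right
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[i] ^ cells[(i + 1) % n]
            for i in range(n)]

# A single active cell spreads to itself and both neighbours in one step
row = step_rule150([0, 0, 1, 0, 0])
```

The probability of finding an active cell, averaged over space and time, can then be estimated by iterating `step_rule150` and counting ones; the QCA analogue replaces this deterministic update with a unitary evolution.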

  13. Innovative trend significance test and applications

    NASA Astrophysics Data System (ADS)

    Şen, Zekai

    2015-11-01

Hydro-climatological time series may embed characteristics of past changes concerning climate variability in the form of shifts, cyclic fluctuations, and, more significantly, trends. Identification of such features from the available records is one of the prime tasks of hydrologists, climatologists, applied statisticians, and experts in related fields. Although there are different trend identification and significance tests in the literature, they require restrictive assumptions that may not hold for hydro-climatological time series. In this paper, an innovative method with a statistical significance test for trend identification is suggested. The method has a non-parametric basis without any restrictive assumption, and its application is rather simple, based on the comparison of sub-series extracted from the main time series. The method allows selection of the sub-temporal half periods for the comparison and, finally, identifies trends in an objective and quantitative manner. The necessary statistical equations are derived for innovative trend identification and for application of the statistical significance test. The proposed methodology is applied to three time series from different parts of the world: Southern New Jersey annual temperature, Danube River annual discharge, and Tigris River Diyarbakir meteorology station annual total rainfall records. The New Jersey record has a significant increasing trend, whereas the other two records have decreasing trends.
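The sub-series comparison at the heart of the method can be sketched as follows: split the record into two halves, sort each, and compare them against the 1:1 (no-trend) line. The slope indicator below follows the commonly cited form 2(ȳ₂ − ȳ₁)/n; the full significance test derived in the paper is not reproduced here.

```python
import numpy as np

def innovative_trend(series):
    # Split the record into two halves and sort each one.
    series = np.asarray(series, dtype=float)
    n = series.size // 2
    first, second = np.sort(series[:n]), np.sort(series[n:2 * n])
    # Points above the 1:1 line indicate an increasing trend.
    frac_above = float(np.mean(second > first))
    # Trend slope indicator: 2 * (mean of 2nd half - mean of 1st half) / n
    slope = 2.0 * (second.mean() - first.mean()) / series.size
    return slope, frac_above

# A monotonically increasing record: every sorted point of the second
# half lies above the 1:1 line, and the slope indicator is positive.
slope, frac = innovative_trend(np.arange(100.0))
```

A record with no trend would scatter its points around the 1:1 line, giving `frac` near 0.5 and `slope` near zero.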

  14. Tougher Addition Polyimides Containing Siloxane

    NASA Technical Reports Server (NTRS)

    St. Clair, T. L.; Maudgal, S.

    1986-01-01

    Laminates show increased impact resistances and other desirable mechanical properties. Bismaleamic acid extended by reaction of diaminosiloxane with maleic anhydride in 1:1 molar ratio, followed by reaction with half this molar ratio of aromatic dianhydride. Bismaleamic acid also extended by reaction of diaminosiloxane with maleic anhydride in 1:2 molar ratio, followed by reaction with half this molar ratio of aromatic diamine (Michael-addition reaction). Impact resistances improved over those of unmodified bismaleimide, showing significant increase in toughness. Aromatic addition polyimides developed as both matrix and adhesive resins for applications on future aircraft and spacecraft.

  15. The functional significance of stereopsis.

    PubMed

    O'Connor, Anna R; Birch, Eileen E; Anderson, Susan; Draper, Hayley

    2010-04-01

Purpose. Development or restoration of binocular vision is one of the key goals of strabismus management; however, the functional impact of stereoacuity has largely been neglected. Methods. Subjects aged 10 to 30 years with normal, reduced, or nil stereoacuity performed three tasks: Purdue pegboard (measured how many pegs were placed in 30 seconds), bead threading (with two sizes of bead to increase the difficulty; measured time taken to thread a number of beads), and water pouring (measured both accuracy and time). All tests were undertaken both with and without occlusion of one eye. Results. One hundred forty-three subjects were recruited, 32.9% (n = 47) with a manifest deviation. Performance on the pegboard and bead tasks was significantly worse in the nil stereoacuity group than in the normal stereoacuity group. On the large and small bead tasks, those with reduced stereoacuity were better than those with nil stereoacuity (when the Preschool Randot Stereoacuity Test [Stereo Optical Co., Inc., Chicago, IL] results were used to determine stereoacuity levels). Comparison of the short-term monocular conditions (those with normal stereoacuity but occluded) with nil stereoacuity showed that, on all measures, performance was best in the nil stereoacuity group; this was statistically significant for the large and small beads tasks, irrespective of which test result was used to define the stereoacuity levels. Conclusions. Performance on motor skills tasks was related to stereoacuity, with subjects with normal stereoacuity performing best on all tests. This quantifiable degradation in performance on some motor skill tasks supports the need to implement management strategies that maximize development of high-grade stereoacuity. PMID:19933184

  16. Statistical aspects of solar flares

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    1987-01-01

A survey of the statistical properties of 850 H alpha solar flares during 1975 is presented, and the results are compared with those reported elsewhere for different epochs. Distributions of rise time, decay time, and duration are given, as are the mean, mode, median, and 90th percentile values. Proportions by selected groupings are also determined. For flares in general, mean values for rise time, decay time, and duration are 5.2 ± 0.4 min, and 18.1 ± 1.1 min, respectively. Subflares, accounting for nearly 90 percent of the flares, had mean values lower than those found for flares of H alpha importance greater than 1, and the differences are statistically significant. Likewise, flares of bright and normal relative brightness have mean values of decay time and duration that are significantly longer than those computed for faint flares, and mass-motion related flares are significantly longer than non-mass-motion related flares. Seventy-three percent of the mass-motion related flares are categorized as being two-ribbon flares and/or being accompanied by a high-speed dark filament. Slow rise time flares (rise time greater than 5 min) have a mean duration that is significantly longer than that computed for fast rise time flares, and long-lived flares (duration greater than 18 min) have a mean rise time that is significantly longer than that computed for short-lived flares, suggesting a positive linear relationship between rise time and duration. Monthly occurrence rates for flares in general and by group are found to be positively and linearly related to monthly sunspot number. Statistical testing reveals the association between sunspot number and number of flares to be significant at the 95 percent level of confidence, and the t statistic for slope is significant at greater than the 99 percent level of confidence. Dependent upon the specific fit, between 58 percent and 94 percent of
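The slope significance test mentioned for the sunspot-flare relation is a standard linear-regression t-test. A sketch with synthetic monthly data (the numbers below are invented, not Wilson's):

```python
import numpy as np
from scipy import stats

# Hypothetical monthly sunspot numbers and flare counts with a
# positive linear relationship plus noise.
rng = np.random.default_rng(0)
sunspots = rng.uniform(10.0, 150.0, 12)            # 12 months
flares = 0.5 * sunspots + rng.normal(0.0, 5.0, 12)

# linregress reports the slope and the two-sided p-value of the
# t-test for a nonzero slope.
res = stats.linregress(sunspots, flares)
significant_at_99 = res.pvalue < 0.01
```

A `pvalue` below 0.01 corresponds to the "greater than the 99 percent level of confidence" phrasing in the abstract.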

  17. Phenylethynyl Containing Reactive Additives

    NASA Technical Reports Server (NTRS)

    Connell, John W. (Inventor); Smith, Joseph G., Jr. (Inventor); Hergenrother, Paul M. (Inventor)

    2002-01-01

Phenylethynyl containing reactive additives were prepared from aromatic diamine containing phenylethynyl groups and various ratios of phthalic anhydride and 4-phenylethynylphthalic anhydride, in glacial acetic acid to form the imide in one step or in N-methyl-2-pyrrolidinone to form the amide acid intermediate. The reactive additives were mixed in various amounts (10% to 90%) with oligomers containing either terminal or pendent phenylethynyl groups (or both) to reduce the melt viscosity and thereby enhance processability. Upon thermal cure, the additives react and become chemically incorporated into the matrix and effect an increase in crosslink density relative to that of the host resin. This resultant increase in crosslink density has advantageous consequences on the cured resin properties, such as higher glass transition temperature and higher modulus as compared to those of the host resin.

  18. Additives in plastics.

    PubMed Central

    Deanin, R D

    1975-01-01

The polymers used in plastics are generally harmless. However, they are rarely used in pure form. In almost all commercial plastics, they are "compounded" with monomeric ingredients to improve their processing and end-use performance. In order of total volume used, these monomeric additives may be classified as follows: reinforcing fibers, fillers, and coupling agents; plasticizers; colorants; stabilizers (halogen stabilizers, antioxidants, ultraviolet absorbers, and biological preservatives); processing aids (lubricants, others, and flow controls); flame retardants; peroxides; and antistats. Some information is already available, and much more is needed, on the potential toxicity and safe handling of these additives during processing and manufacture of plastics products. PMID:1175566

  19. Censored data treatment using additional information in intelligent medical systems

    NASA Astrophysics Data System (ADS)

    Zenkova, Z. N.

    2015-11-01

Statistical procedures are a very important and significant part of modern intelligent medical systems. They are used for processing, mining, and analysis of different types of data about patients and their diseases, and they help in making various decisions regarding diagnosis, treatment, medication, surgery, etc. In many cases the data can be censored or incomplete. It is a well-known fact that censoring considerably reduces the efficiency of statistical procedures. In this paper the author gives a brief review of approaches that improve such procedures using additional information, and describes a modified estimator of an unknown cumulative distribution function that incorporates additional information in the form of an exactly known quantile. The additional information is used by projecting a classical estimator onto a set of estimators with certain properties. The Kaplan-Meier estimator is considered as the estimator of the unknown cumulative distribution function, and the properties of the modified estimator are investigated for the case of single right censoring by means of simulations.
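For reference, the Kaplan-Meier estimator that the modification starts from can be computed in a few lines. This sketch ignores tied event times and is the plain estimator, not the author's quantile-projected version:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates; events: 1 = event, 0 = right-censored."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    n = times.size
    surv, curve = 1.0, []
    for i, (t, d) in enumerate(zip(times, events)):
        if d:                        # an event: multiply by (1 - 1/at_risk)
            surv *= 1.0 - 1.0 / (n - i)
        curve.append((t, surv))      # a censored time leaves the estimate unchanged
    return curve

# Four subjects; the one censored at t=2 still counts in the risk set at t=1
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

The modification described in the abstract would then adjust such a curve so that it passes exactly through the known quantile.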

  20. Analogies for Understanding Statistics

    ERIC Educational Resources Information Center

    Hocquette, Jean-Francois

    2004-01-01

    This article describes a simple way to explain the limitations of statistics to scientists and students to avoid the publication of misleading conclusions. Biologists examine their results extremely critically and carefully choose the appropriate analytic methods depending on their scientific objectives. However, no such close attention is usually…

  1. Statistical methods in microbiology.

    PubMed Central

    Ilstrup, D M

    1990-01-01

    Statistical methodology is viewed by the average laboratory scientist, or physician, sometimes with fear and trepidation, occasionally with loathing, and seldom with fondness. Statistics may never be loved by the medical community, but it does not have to be hated by them. It is true that statistical science is sometimes highly mathematical, always philosophical, and occasionally obtuse, but for the majority of medical studies it can be made palatable. The goal of this article has been to outline a finite set of methods of analysis that investigators should choose based on the nature of the variable being studied and the design of the experiment. The reader is encouraged to seek the advice of a professional statistician when there is any doubt about the appropriate method of analysis. A statistician can also help the investigator with problems that have nothing to do with statistical tests, such as quality control, choice of response variable and comparison groups, randomization, and blinding of assessment of response variables. PMID:2200604

  2. Statistical Energy Analysis Program

    NASA Technical Reports Server (NTRS)

    Ferebee, R. C.; Trudell, R. W.; Yano, L. I.; Nygaard, S. I.

    1985-01-01

Statistical Energy Analysis (SEA) is a powerful tool for estimating high-frequency vibration spectra of complex structural systems and has been incorporated into a computer program. The basic SEA analysis procedure is divided into three steps: idealization, parameter generation, and problem solution. The SEA computer program is written in FORTRAN V for batch execution.

  3. Education Statistics Quarterly, 2003.

    ERIC Educational Resources Information Center

    Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  4. Spitball Scatterplots in Statistics

    ERIC Educational Resources Information Center

    Wagaman, John C.

    2012-01-01

    This paper describes an active learning idea that I have used in my applied statistics class as a first lesson in correlation and regression. Students propel spitballs from various standing distances from the target and use the recorded data to determine if the spitball accuracy is associated with standing distance and review the algebra of lines…

  5. Juvenile Court Statistics - 1972.

    ERIC Educational Resources Information Center

    Office of Youth Development (DHEW), Washington, DC.

    This report is a statistical study of juvenile court cases in 1972. The data demonstrates how the court is frequently utilized in dealing with juvenile delinquency by the police as well as by other community agencies and parents. Excluded from this report are the ordinary traffic cases handled by juvenile court. The data indicate that: (1) in…

  6. Library Research and Statistics.

    ERIC Educational Resources Information Center

    Lynch, Mary Jo; St. Lifer, Evan; Halstead, Kent; Fox, Bette-Lee; Miller, Marilyn L.; Shontz, Marilyn L.

    2001-01-01

    These nine articles discuss research and statistics on libraries and librarianship, including libraries in the United States, Canada, and Mexico; acquisition expenditures in public, academic, special, and government libraries; price indexes; state rankings of public library data; library buildings; expenditures in school library media centers; and…

  7. Foundations of Statistical Seismology

    NASA Astrophysics Data System (ADS)

    Vere-Jones, David

    2010-06-01

    A brief account is given of the principles of stochastic modelling in seismology, with special regard to the role and development of stochastic models for seismicity. Stochastic models are seen as arising in a hierarchy of roles in seismology, as in other scientific disciplines. At their simplest, they provide a convenient descriptive tool for summarizing data patterns; in engineering and other applications, they provide a practical way of bridging the gap between the detailed modelling of a complex system, and the need to fit models to limited data; at the most fundamental level they arise as a basic component in the modelling of earthquake phenomena, analogous to that of stochastic models in statistical mechanics or turbulence theory. As an emerging subdiscipline, statistical seismology includes elements of all of these. The scope for the development of stochastic models depends crucially on the quantity and quality of the available data. The availability of extensive, high-quality catalogues and other relevant data lies behind the recent explosion of interest in statistical seismology. At just such a stage, it seems important to review the underlying principles on which statistical modelling is based, and that is the main purpose of the present paper.

  8. Graduate Statistics: Student Attitudes

    ERIC Educational Resources Information Center

    Kennedy, Robert L.; Broadston, Pamela M.

    2004-01-01

    This study investigated the attitudes toward statistics of graduate students who used a computer program as part of the instruction, which allowed for an individualized, self-paced, student-centered, activity-based course. The twelve sections involved in this study were offered in the spring and fall 2001, spring and fall 2002, spring and fall…

  9. Geopositional Statistical Methods

    NASA Technical Reports Server (NTRS)

    Ross, Kenton

    2006-01-01

RMSE-based methods distort circular error estimates (up to 50% overestimation). The empirical approach is the only statistically unbiased estimator offered. The Ager modification to the Shultz approach is nearly unbiased, but cumbersome. All methods hover around 20% uncertainty (at 95% confidence) for low geopositional bias error estimates. This requires careful consideration in the assessment of higher-accuracy products.

  10. Statistical Reasoning over Lunch

    ERIC Educational Resources Information Center

    Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.

    2011-01-01

    Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…

  11. Fractional statistics and confinement

    NASA Astrophysics Data System (ADS)

    Gaete, P.; Wotzasek, C.

    2005-02-01

    It is shown that a pointlike composite having charge and magnetic moment displays a confining potential for the static interaction while simultaneously obeying fractional statistics in a pure gauge theory in three dimensions, without a Chern-Simons term. This result is distinct from the Maxwell-Chern-Simons theory that shows a screening nature for the potential.

  12. Statistics for Learning Genetics

    ERIC Educational Resources Information Center

    Charles, Abigail Sheena

    2012-01-01

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…

  13. Learning Statistical Concepts

    ERIC Educational Resources Information Center

    Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah

    2004-01-01

    In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…

  14. Knot theory and statistical mechanics

    SciTech Connect

Jones, V.F.R.

    1990-11-01

Certain algebraic relations used to solve models in statistical mechanics were key to describing a mathematical property of knots known as a polynomial invariant. This connection, tenuous at first, has since developed into a significant flow of ideas. The appearance of such common ground is not atypical of recent developments in mathematics and physics--ideas from different fields interact and produce unexpected results. Indeed, the discovery of the connection between knots and statistical mechanics passed through a theory intimately related to the mathematical structure of quantum physics. This theory, called von Neumann algebras, is distinguished by the idea of continuous dimensionality. Spaces typically have dimensions that are natural numbers, such as 2, 3 or 11, but in von Neumann algebras dimensions such as √2 or π are equally possible. This possibility for continuous dimension played a key role in joining knot theory and statistical mechanics. In another direction, the knot invariants were soon found to occur in quantum field theory. Indeed, Edward Witten of the Institute for Advanced Study in Princeton, N.J., has shown that topological quantum field theory provides a natural way of expressing the new ideas about knots. This advance, in turn, has allowed a beautiful generalization about the invariants of knots in more complicated three-dimensional spaces known as three-manifolds, in which space itself may contain holes and loops.

  15. Statistics, Uncertainty, and Transmitted Variation

    SciTech Connect

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
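The transmission of input variation through a response function is often quantified with the first-order delta method, Var(g(X)) ≈ g′(μ)² Var(X). A quick numerical check of that approximation (the cubic response below is an arbitrary example, not from the presentation):

```python
import numpy as np

# Delta-method sketch: variation in an input X is transmitted through a
# response function g as Var(g(X)) ≈ g'(mu)^2 * Var(X).
rng = np.random.default_rng(2)
mu, sigma = 2.0, 0.05
x = rng.normal(mu, sigma, 100_000)

y = x ** 3                                      # response function g(x) = x^3
approx_var = (3.0 * mu ** 2) ** 2 * sigma ** 2  # g'(mu)^2 * Var(X)
empirical_var = y.var()                         # transmitted variation, observed
```

For small input variation the two agree closely; the gap grows with curvature of the response, which is exactly when transmitted variation starts to matter in designed experiments.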

  16. Statistics for Learning Genetics

    NASA Astrophysics Data System (ADS)

    Charles, Abigail Sheena

This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and, more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To this end, two college-level classes at two universities were surveyed, one located in the northeastern US and the other in the West Indies. There was a sample size of 42 students, and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most-used textbooks in genetics teaching, as well as in the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks oftentimes either did not give effective explanations for students or completely left out certain topics. The omission of certain statistically/mathematically oriented topics was seen to be also true of the genetics syllabi reviewed for this study. Nonetheless

  17. Biobased lubricant additives

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...

  18. Multifunctional fuel additives

    SciTech Connect

    Baillargeon, D.J.; Cardis, A.B.; Heck, D.B.

    1991-03-26

This paper discusses a composition comprising a major amount of a liquid hydrocarbyl fuel and a minor, low-temperature flow properties improving amount of an additive: the reaction product of a suitable diol and the product of a benzophenone tetracarboxylic dianhydride and a long-chain hydrocarbyl aminoalcohol.

  19. Latest statistics on cardiovascular disease in Australia.

    PubMed

    Waters, Anne-Marie; Trinh, Lany; Chau, Theresa; Bourchier, Michael; Moon, Lynelle

    2013-06-01

The results presented herein summarize the most up-to-date cardiovascular statistics available at this time in Australia. The analysis presented here is based on and extends results published in two Australian Institute of Health and Welfare (AIHW) reports, namely Cardiovascular disease: Australian facts 2011 and the cardiovascular disease (CVD) section of Australia's Health 2012. Despite significant improvements in the cardiovascular health of Australians in recent decades, CVD continues to impose a heavy burden on Australians in terms of illness, disability and premature death. Direct health care expenditure for CVD exceeds that for any other disease group. The most recent national data have been analysed to describe patterns and trends in CVD hospitalization and death rates, with additional analysis by Indigenous status, remoteness and socioeconomic group. The incidence of and case-fatality from major coronary events has also been examined. Although CVD death rates have declined steadily in Australia since the late 1960s, CVD still accounts for a larger proportion of deaths (33% in 2009) than any other disease group. Worryingly, the rate at which the coronary heart disease death rate has been falling in recent years has slowed in younger (35-54 years) age groups. Between 1998-99 and 2009-10, the overall rate of hospitalizations for CVD fell by 13%, with declines observed for most major CVDs. In conclusion, CVD remains a significant health problem in Australia despite decreasing death and hospitalization rates. PMID:23517328

  20. Factors related to student performance in statistics courses in Lebanon

    NASA Astrophysics Data System (ADS)

    Naccache, Hiba Salim

    The purpose of the present study was to identify factors that may contribute to business students in Lebanese universities having difficulty in introductory and advanced statistics courses. Two statistics courses are required for business majors at Lebanese universities. Students are not obliged to be enrolled in any math courses prior to taking statistics courses. Drawing on recent educational research, this dissertation attempted to identify the relationship between (1) students’ scores on Lebanese university math admissions tests; (2) students’ scores on a test of very basic mathematical concepts; (3) students’ scores on the survey of attitude toward statistics (SATS); (4) course performance as measured by students’ final scores in the course; and (5) their scores on the final exam. Data were collected from 561 students enrolled in multiple sections of two courses: 307 students in the introductory statistics course and 260 in the advanced statistics course in seven campuses across Lebanon over one semester. The multiple regression results revealed four significant relationships at the introductory level: between students’ scores on the math quiz with their (1) final exam scores; (2) their final averages; (3) the Cognitive subscale of the SATS with their final exam scores; and (4) their final averages. These four significant relationships were also found at the advanced level. In addition, two more significant relationships were found between students’ final average and the two subscales of Effort (5) and Affect (6). No relationship was found between students’ scores on the admission math tests and either their final exam scores or their final averages in both the introductory and advanced level courses. Although these results were consistent across course formats and instructors, they may encourage Lebanese universities
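The kind of analysis described in this abstract can be sketched with an ordinary least-squares multiple regression. The data below are synthetic stand-ins generated from made-up coefficients, not the study's data; the variable names (math quiz, SATS Cognitive and Effort subscales) merely mirror the predictors discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # roughly the size of the introductory-course sample

# Hypothetical predictors: basic-math quiz score and two SATS subscales.
quiz = rng.normal(60, 12, n)
cognitive = rng.normal(5.0, 1.0, n)
effort = rng.normal(5.5, 0.8, n)

# Synthetic outcome with known coefficients, for illustration only:
# quiz and Cognitive matter, Effort does not.
final_exam = 10 + 0.6 * quiz + 3.0 * cognitive + 0.0 * effort + rng.normal(0, 5, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), quiz, cognitive, effort])
beta, *_ = np.linalg.lstsq(X, final_exam, rcond=None)
print(np.round(beta, 2))  # intercept and slopes for quiz, cognitive, effort
```

With this setup the fitted slopes recover the generating coefficients up to sampling noise, which is the same logic the study uses to judge which predictors relate to final-exam performance.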

  1. A Technology-Based Statistical Reasoning Assessment Tool in Descriptive Statistics for Secondary School Students

    ERIC Educational Resources Information Center

    Chan, Shiau Wei; Ismail, Zaleha

    2014-01-01

    The focus of assessment in statistics has gradually shifted from traditional assessment towards alternative assessment, where more attention is paid to core statistical concepts such as center, variability, and distribution. In spite of this, there are comparatively few assessments that combine the three significant types of statistical…

  2. Creating Statistically Literate Global Citizens: The Use of IPUMS-International Integrated Census Microdata in Teaching

    PubMed Central

    Meier, Ann; Lam, David

    2012-01-01

    Census microdata are ideal for developing statistical literacy of university students. Access, particularly to internationally comparable microdata, has been a significant obstacle. The IPUMS-International project offers a uniform solution to providing access for policy analysts, researchers, and students to integrated microdata and metadata, while protecting statistical confidentiality. Eighty-five official statistical agencies have endorsed IPUMS-I dissemination principles and entrusted microdata for 249 censuses to the project. From June 2010, 159 integrated samples, representing 55 countries and totaling over 325 million person records, are available at no cost to researchers and their students. The database is being expanded with the addition of samples for 5–10 countries per year as well as samples for the 2010 round of censuses. This paper illustrates two approaches to using IPUMS-I census microdata in the university curriculum to promote statistical literacy among undergraduates. PMID:25279022

  3. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
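The lognormal claim is easy to probe numerically: multiply several independent, arbitrarily distributed positive random variables and, by the CLT applied to the logs, the product is approximately lognormal. The seven distributions below are arbitrary placeholders, not estimates of the actual Drake factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Seven positive random factors, deliberately drawn from different
# ("arbitrary") distributions, standing in for the seven Drake factors.
factors = [
    rng.uniform(0.5, 2.0, n),
    rng.gamma(5.0, 1.0, n),
    rng.lognormal(0.0, 0.3, n),
    rng.uniform(0.2, 1.0, n),
    rng.gamma(8.0, 0.5, n),
    rng.uniform(1.0, 3.0, n),
    rng.gamma(10.0, 2.0, n),
]

N = np.prod(factors, axis=0)  # the statistical Drake equation
logN = np.log(N)              # the CLT applies to this sum of logs

# If N is approximately lognormal, log N is close to Gaussian (small skewness)...
z = (logN - logN.mean()) / logN.std()
skew = np.mean(z ** 3)

# ...and, by independence, E[N] equals the product of the factor means,
# matching the claim that the lognormal mean is the "ordinary" Drake N.
product_of_means = np.prod([f.mean() for f in factors])
print(round(skew, 2), round(N.mean() / product_of_means, 2))
```

The second printed ratio checks the abstract's remark that the mean of the lognormal N is the ordinary Drake product: for independent factors, E[N] is exactly the product of the factor means.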

  4. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of a large number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in
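The inverse-cube-root dependence of the neighbour distance on NHab can be illustrated with a toy Monte Carlo under the simplifying assumption that habitables are scattered uniformly in a fixed volume; this is a sketch of the scaling only, not the SEH derivation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_nn_distance(n_points: int) -> float:
    """Mean nearest-neighbour distance of n_points uniform in a unit cube."""
    pts = rng.uniform(0.0, 1.0, size=(n_points, 3))
    # Full pairwise distance matrix; mask self-distances on the diagonal.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

d_small = mean_nn_distance(200)
d_large = mean_nn_distance(1600)

# Expect roughly (1600 / 200) ** (1/3) = 2: eight times as many points
# should about halve the mean nearest-neighbour distance.
print(round(d_small / d_large, 1))
```

Increasing the point count by a factor of 8 roughly halves the mean nearest-neighbour distance, as the cubic-root scaling predicts.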

  5. Statistical considerations in design of spacelab experiments

    NASA Technical Reports Server (NTRS)

    Robinson, J.

    1978-01-01

    After making an analysis of experimental error sources, statistical models were developed for the design and analysis of potential Space Shuttle experiments. Guidelines for statistical significance and/or confidence limits of expected results were also included. The models were then tested out on the following proposed Space Shuttle biomedical experiments: (1) bone density by computer tomography; (2) basal metabolism; and (3) total body water. Analysis of those results and therefore of the models proved inconclusive due to the lack of previous research data and statistical values. However, the models were seen as possible guides to making some predictions and decisions.

  6. Environmental restoration and statistics: Issues and needs

    SciTech Connect

    Gilbert, R.O.

    1991-10-01

    Statisticians have a vital role to play in environmental restoration (ER) activities. One facet of that role is to point out where additional work is needed to develop statistical sampling plans and data analyses that meet the needs of ER. This paper is an attempt to show where statistics fits into the ER process. The statistician, as member of the ER planning team, works collaboratively with the team to develop the site characterization sampling design, so that data of the quality and quantity required by the specified data quality objectives (DQOs) are obtained. At the same time, the statistician works with the rest of the planning team to design and implement, when appropriate, the observational approach to streamline the ER process and reduce costs. The statistician will also provide the expertise needed to select or develop appropriate tools for statistical analysis that are suited for problems that are common to waste-site data. These data problems include highly heterogeneous waste forms, large variability in concentrations over space, correlated data, data that do not have a normal (Gaussian) distribution, and measurements below detection limits. Other problems include environmental transport and risk models that yield highly uncertain predictions, and the need to effectively communicate to the public highly technical information, such as sampling plans, site characterization data, statistical analysis results, and risk estimates. Even though some statistical analysis methods are available "off the shelf" for use in ER, these problems require the development of additional statistical tools, as discussed in this paper. 29 refs.

  7. Boron addition to alloys

    SciTech Connect

    Coad, B. C.

    1985-08-20

    A process for addition of boron to an alloy which involves forming a melt of the alloy and a reactive metal selected from the group consisting of aluminum, titanium, zirconium and mixtures thereof, adding boric oxide to the melt, maintaining the resulting reactive mixture in the molten state and reacting the boric oxide with the reactive metal to convert at least a portion of the boric oxide to boron which dissolves in the resulting melt, and to convert at least a portion of the reactive metal to the reactive metal oxide, which oxide remains with the resulting melt, and pouring the resulting melt into a gas stream to form a first atomized powder which is subsequently remelted with further addition of boric oxide, re-atomized, and thus reprocessed to convert essentially all the reactive metal to metal oxide to produce a powdered alloy containing specified amounts of boron.

  8. Tackifier for addition polyimides

    NASA Technical Reports Server (NTRS)

    Butler, J. M.; St.clair, T. L.

    1980-01-01

    A modification to the addition polyimide, LaRC-160, was prepared to improve tack and drape and increase prepreg out-time. The essentially solventless, high viscosity laminating resin is synthesized from low cost liquid monomers. The modified version takes advantage of a reactive, liquid plasticizer which is used in place of solvent and helps solve a major problem of maintaining good prepreg tack and drape, or the ability of the prepreg to adhere to adjacent plies and conform to a desired shape during the lay up process. This alternate solventless approach allows both longer life of the polymer prepreg and the processing of low void laminates. This approach appears to be applicable to all addition polyimide systems.

  9. Vinyl capped addition polyimides

    NASA Technical Reports Server (NTRS)

    Vannucci, Raymond D. (Inventor); Malarik, Diane C. (Inventor); Delvigs, Peter (Inventor)

    1991-01-01

    Polyimide resins (PMR) are generally useful where high strength and temperature capabilities are required (at temperatures up to about 700 F). Polyimide resins are particularly useful in applications such as jet engine compressor components, for example, blades, vanes, air seals, air splitters, and engine casing parts. Aromatic vinyl capped addition polyimides are obtained by reacting a diamine, an ester of tetracarboxylic acid, and an aromatic vinyl compound. Low void materials with improved oxidative stability when exposed to 700 F air may be fabricated as fiber reinforced high molecular weight capped polyimide composites. The aromatic vinyl capped polyimides are provided with a more aromatic nature and are more thermally stable than highly aliphatic, norbornenyl-type end-capped polyimides employed in PMR resins. The substitution of aromatic vinyl end-caps for norbornenyl end-caps in addition polyimides results in polymers with improved oxidative stability.

  10. [Biologically active food additives].

    PubMed

    Velichko, M A; Shevchenko, V P

    1998-07-01

    More than half of the 40 projects for the development of medical science by the year 2000 have been connected with bioactive food additives, which are called "the food of the XXI century" and serve as non-pharmacological means against many diseases. Most of these additives, nutraceuticals and parapharmaceuticals, are intended to enrich the food rations of sick or healthy people. The ecologically safest and most effective are combined domestic adaptogens with immunomodulating and antioxidant action that give an anabolic and stimulating effect: "leveton", "phytoton" and "adapton". The MKTs-229 tablets are a residue-discharge means. For atherosclerosis and general adiposis, "tsar tablets" and "aiconol (ikhtien)", based on cod-liver oil, or "splat", made from seaweed (algae), are recommended. All these preparations have been clinically tested and have received hygiene certificates from the Institute of Dietology of the Russian Academy of Medical Science. PMID:9752776

  11. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses

    PubMed Central

    Dinov, Ivo D.; Sanchez, Juana; Christou, Nicolas

    2009-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools for communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques.
Our findings include marginal effects of the SOCR treatment per individual

  12. Electrophilic addition of astatine

    SciTech Connect

    Norseev, Yu.V.; Vasaros, L.; Nhan, D.D.; Huan, N.K.

    1988-03-01

    It has been shown for the first time that astatine is capable of undergoing addition reactions to unsaturated hydrocarbons. A new compound of astatine, viz., ethylene astatohydrin, has been obtained, and its retention numbers on squalane, Apiezon, and tricresyl phosphate have been found. The influence of various factors on the formation of ethylene astatohydrin has been studied. It has been concluded on the basis of the results obtained that the univalent cation of astatine in an acidic medium is protonated hypoastatous acid.

  13. Hydrocarbon fuel additive

    SciTech Connect

    Ambrogio, S.

    1989-02-28

    This patent describes, in a method of fuel storage or combustion wherein the fuel supply contains small amounts of water, the step of adding to the fuel supply an additive comprising a blend of: a hydrophilic agent chosen from the group of ethylene glycol, n-butyl alcohol, and cellosolve, in the range of 22-37% by weight; ethoxylated nonylphenol in the range of 26-35% by weight; and nonylphenol polyethylene glycol ether in the range of 32-43% by weight.

  14. Statistical Inference at Work: Statistical Process Control as an Example

    ERIC Educational Resources Information Center

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  15. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671

  16. Fermions from classical statistics

    SciTech Connect

    Wetterich, C.

    2010-12-15

    We describe fermions in terms of a classical statistical ensemble. The states τ of this ensemble are characterized by a sequence of values one or zero or a corresponding set of two-level observables. Every classical probability distribution can be associated to a quantum state for fermions. If the time evolution of the classical probabilities p_τ amounts to a rotation of the wave function q_τ(t) = ±√(p_τ(t)), we infer the unitary time evolution of a quantum system of fermions according to a Schroedinger equation. We establish how such classical statistical ensembles can be mapped to Grassmann functional integrals. Quantum field theories for fermions arise for a suitable time evolution of classical probabilities for generalized Ising models.
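The correspondence between classical probabilities and a rotating wave function can be illustrated for a single two-level example: set q_τ = √(p_τ), rotate the vector q, and the squared components are again a normalized probability distribution. This is only a toy illustration of the p ↔ q map, not the paper's Grassmann-integral construction.

```python
import numpy as np

p = np.array([0.2, 0.8])         # classical probabilities p_tau
q = np.sqrt(p)                   # "wave function" q_tau = sqrt(p_tau)

theta = 0.3                      # rotation angle standing in for time evolution
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

q_t = R @ q                      # rotate the wave function
p_t = q_t ** 2                   # evolved probabilities

# A rotation preserves the norm, so p_t is again a probability distribution.
print(p_t, p_t.sum())
```

Because the rotation is orthogonal, the sum of the squared components stays at 1, which is why a rotation of q induces a legitimate evolution of the classical probabilities.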

  17. Statistics in disease ecology

    PubMed Central

    Waller, Lance A.

    2008-01-01

    The three papers included in this special issue represent a set of presentations in an invited session on disease ecology at the 2005 Spring Meeting of the Eastern North American Region of the International Biometric Society. The papers each address statistical estimation and inference for particular components of different disease processes and, taken together, illustrate the breadth of statistical issues arising in the study of the ecology and public health impact of disease. As an introduction, we provide a very brief overview of the area of “disease ecology”, a variety of synonyms addressing different aspects of disease ecology, and present a schematic structure illustrating general components of the underlying disease process, data collection issues, and different disciplinary perspectives ranging from microbiology to public health surveillance. PMID:19081740

  18. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
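The random-predictor baseline described above is straightforward to simulate: because its alarms are independent of the events, its sensitivity converges to the overall alarm rate and its specificity to one minus that rate. The sketch below uses a memoryless alarm and ignores the occurrence/absence periods the paper accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

events = rng.random(n) < 0.01        # rare extreme events
alarm_rate = 0.05
alarms = rng.random(n) < alarm_rate  # random predictor: independent of events

# Sensitivity = P(alarm | event); for an independent predictor this
# converges to the alarm rate itself. Specificity = P(no alarm | no event).
sensitivity = alarms[events].mean()
specificity = (~alarms[~events]).mean()
print(round(sensitivity, 2), round(specificity, 2))
```

Any forecasting method worth deploying must beat this baseline at a matched alarm rate, which is the comparison the proposed framework formalizes.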

  19. Statistical tests for power-law cross-correlated processes

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
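A simplified version of ρDCCA can be written down directly from its definition: detrend the integrated profiles of both series in windows of size n, then take the ratio of the detrended covariance to the product of the detrended standard deviations. The sketch below uses non-overlapping windows and linear detrending, and is an illustration of the quantity rather than the authors' exact estimator.

```python
import numpy as np

def rho_dcca(x: np.ndarray, y: np.ndarray, n: int) -> float:
    """Simplified detrended cross-correlation coefficient.

    Non-overlapping windows of size n; linear detrending of the
    integrated (cumulative-sum) profiles in each window.
    """
    X = np.cumsum(x - x.mean())
    Y = np.cumsum(y - y.mean())

    f2x = f2y = f2xy = 0.0
    t = np.arange(n)
    for k in range(len(X) // n):
        xs = X[k * n:(k + 1) * n]
        ys = Y[k * n:(k + 1) * n]
        # Residuals after removing a linear trend in each window.
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2x += (rx ** 2).mean()
        f2y += (ry ** 2).mean()
        f2xy += (rx * ry).mean()
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(0)
x = rng.normal(size=4000)
print(round(rho_dcca(x, x, 50), 3))   # identical series: rho = 1
print(round(rho_dcca(x, -x, 50), 3))  # sign-flipped series: rho = -1
```

By the Cauchy-Schwarz inequality applied to the pooled detrended residuals, this ratio always lies in [-1, 1], which is the bound discussed in the abstract.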

  20. Statistical tests for power-law cross-correlated processes.

    PubMed

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρ(DCCA)(T,n), where T is the total length of the time series and n the window size. For ρ(DCCA)(T,n), we numerically calculated the Cauchy inequality -1 ≤ ρ(DCCA)(T,n) ≤ 1. Here we derive -1 ≤ ρ(DCCA)(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρ(DCCA) within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρ(DCCA)(T,n) tends with increasing T to 1/T. Using ρ(DCCA)(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series. PMID:22304166

  1. 1979 DOE statistical symposium

    SciTech Connect

    Gardiner, D.A.; Truett, T.

    1980-09-01

    The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.

  2. Statistical Methods in Cosmology

    NASA Astrophysics Data System (ADS)

    Verde, L.

    2010-03-01

    The advent of large data-sets in cosmology has meant that in the past 10 or 20 years our knowledge and understanding of the Universe has changed not only quantitatively but also, and most importantly, qualitatively. Cosmologists rely on data where a host of useful information is enclosed, but is encoded in a non-trivial way. The challenges in extracting this information must be overcome to make the most of a large experimental effort. Even after having converged to a standard cosmological model (the LCDM model) we should keep in mind that this model is described by 10 or more physical parameters and if we want to study deviations from it, the number of parameters is even larger. Dealing with such a high dimensional parameter space and finding parameter constraints is a challenge in itself. Cosmologists want to be able to compare and combine different data sets both for testing for possible disagreements (which could indicate new physics) and for improving parameter determinations. Finally, cosmologists in many cases want to find out, before actually doing the experiment, how much one would be able to learn from it. For all these reasons, sophisticated statistical techniques are being employed in cosmology, and it has become crucial to know some statistical background to understand recent literature in the field. I will introduce some statistical tools that any cosmologist should know about in order to be able to understand recently published results from the analysis of cosmological data sets. I will not present a complete and rigorous introduction to statistics as there are several good books which are reported in the references. The reader should refer to those.

  3. Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial

    PubMed Central

    Cobo, Erik; Selva-O'Callagham, Albert; Ribera, Josep-Maria; Cardellach, Francesc; Dominguez, Ruth; Vilardell, Miquel

    2007-01-01

    Background Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of either adding a statistical peer reviewer or suggesting the use of checklists such as CONSORT or STARD to clinical reviewers or both. Methodology and Principal Findings Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with “no statistical expert” and “no checklist” as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers provide an 80% power to test a 55% standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and final post peer review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 that were lost to follow-up showed no differences in initial quality to the followed-up papers. Hence, 115 were included in the main analysis, with 16 rejected for publication after peer review. 21 (18.3%) of the 115 included papers were interventions, 46 (40.0%) were longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6–24

  4. A Proposed Fourth Measure of Significance: The Role of Economic Significance in Educational Research

    ERIC Educational Resources Information Center

    Leech, Nancy L.; Onwuegbuzie, Anthony J.

    2004-01-01

    The purpose of this paper is to examine economic significance as a fourth measure of significance. In addition to describing and operationalising the concept of economic significance, a typology of economic significance indices is presented, including an example of how to compute these measures, as well as how to utilise them in applied research.…

  5. Quantum U-statistics

    SciTech Connect

    Guta, Madalin; Butucea, Cristina

    2010-10-15

    The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C^d)^{⊗r} with r < n, the corresponding U-statistic is shown to converge in moments to a linear combination of Hermite polynomials in canonical variables of a canonical commutation relation algebra defined through the quantum central limit theorem. In the special cases of nondegenerate kernels and kernels of order 2, it is shown that the convergence holds in the stronger distribution sense. Two types of applications in quantum statistics are described: testing beyond the two simple hypotheses scenario and quantum metrology with interacting Hamiltonians.

  6. Statistics in fusion experiments

    NASA Astrophysics Data System (ADS)

    McNeill, D. H.

    1997-11-01

    Since the reasons for the variability in data from plasma experiments are often unknown or uncontrollable, statistical methods must be applied. Reliable interpretation and public accountability require full data sets. Two examples of data misrepresentation at PPPL are analyzed: (1) Te > 100 eV on the S-1 spheromak (M. Yamada, Nucl. Fusion 25, 1327 (1985); reports to DoE; etc.). The reported high values (statistical artifacts of Thomson scattering measurements) were selected from a mass of data with an average of 40 eV or less. "Correlated" spectroscopic data were meaningless. (2) Extrapolation to Q >= 0.5 for DT in TFTR (D. Meade et al., IAEA Baltimore (1990), V. 1, p. 9; H. P. Furth, Statements to U. S. Congress (1989)). The DD yield used there was the highest through 1990 (>= 50% above average) and the DT to DD power ratio used was about twice any published value. Average DD yields and published yield ratios scale to Q < 0.15 for DT, in accord with the observed performance over the last 3 1/2 years. Press reports of outlier data from TFTR have obscured the fact that the DT behavior follows from trivial scaling of the DD data. Good practice in future fusion research would have confidence intervals and other descriptive statistics accompanying reported numerical values (cf. JAMA).

  7. Fast statistical alignment.

    PubMed

    Bradley, Robert K; Roberts, Adam; Smoot, Michael; Juvekar, Sudeep; Do, Jaeyoung; Dewey, Colin; Holmes, Ian; Pachter, Lior

    2009-05-01

    We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment--previously available only with alignment programs which use computationally-expensive Markov Chain Monte Carlo approaches--yet can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/. PMID:19478997
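    FSA's pair-HMM and sequence-annealing machinery is beyond a snippet, but the dynamic-programming recursion that all such aligners build on can be illustrated with classical Needleman-Wunsch global alignment. This is a non-statistical simplification, not FSA's method, with hypothetical match/mismatch/gap scores:

```python
def nw_score(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    """Needleman-Wunsch global alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # aligning a prefix of a against nothing
        dp[i][0] = i * gap
    for j in range(1, cols):            # aligning a prefix of b against nothing
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[rows - 1][cols - 1]

print(nw_score("GATTACA", "GATTACA"))  # 7: identical sequences score len * match
```

    Statistical aligners such as FSA replace these fixed scores with probabilities from a pair hidden Markov model, which is what yields per-column accuracy estimates.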

  8. Expression and prognostic significance of unique ULBPs in pancreatic cancer

    PubMed Central

    Chen, Jiong; Zhu, Xing-Xing; Xu, Hong; Fang, Heng-Zhong; Zhao, Jin-Qian

    2016-01-01

    Background Pancreatic cancer is one of the most lethal cancers worldwide, due to the lack of efficient therapy and difficulty in early diagnosis. ULBPs have been shown to behave as important protectors with prognostic significance in various cancers. Materials and methods Immunohistochemistry and enzyme-linked immunosorbent assays were used to explore the expression of ULBPs in cancer tissue and in serum, while survival analysis was used to evaluate the subsequent clinical value of ULBPs. Results Statistics showed that high expression of membrane ULBP1 was a good biomarker of overall survival (18 months vs 13 months), and a high level of soluble ULBP2 was deemed an independent poor indicator for both overall survival (P<0.001) and disease-free survival (P<0.001). Conclusion ULBP1 provides additional information for early diagnosis, and soluble ULBP2 can be used as a novel tumor marker to evaluate the risk of pancreatic cancer patients. PMID:27621649

  9. Determining significant material properties: A discovery approach

    NASA Technical Reports Server (NTRS)

    Karplus, Alan K.

    1992-01-01

    The following is a laboratory experiment designed to further understanding of materials science. The experiment itself can be informative for persons of any age past elementary school, and even for some in elementary school. The preparation of the plastic samples is readily accomplished by persons with reasonable dexterity in the cutting of paper designs. The completion of the statistical Design of Experiments, which uses Yates' Method, requires basic math (addition and subtraction). Interpretive work requires plotting of data and making observations. Knowledge of statistical methods would be helpful. The purpose of this experiment is to acquaint students with the seven classes of recyclable plastics, and provide hands-on learning about the response of these plastics to mechanical tensile loading.
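    Yates' Method, mentioned above, reduces a 2^k factorial experiment to k passes of pairwise sums and differences over the responses listed in standard order. A minimal sketch with illustrative responses (not data from the experiment):

```python
def yates(responses):
    """Yates' algorithm for a 2**k factorial design: one pass per factor,
    top half pairwise sums, bottom half pairwise differences.
    Input is the response vector in standard order, e.g. (1), a, b, ab."""
    col = list(responses)
    k = len(col).bit_length() - 1
    assert len(col) == 2 ** k, "length must be a power of two"
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    total, contrasts = col[0], col[1:]
    effects = [c / 2 ** (k - 1) for c in contrasts]  # A, B, AB, ... effects
    return total, effects

# 2x2 example in standard order (1), a, b, ab:
total, effects = yates([10, 12, 14, 20])
print(total, effects)  # 56 [4.0, 6.0, 2.0] -> main effects A, B and interaction AB
```

    The direct check agrees: the A effect is ((a + ab) - ((1) + b)) / 2 = (32 - 24) / 2 = 4, and similarly for B and AB.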

  10. Siloxane containing addition polyimides

    NASA Technical Reports Server (NTRS)

    Maudgal, S.; St. Clair, T. L.

    1984-01-01

    Addition polyimide oligomers have been synthesized from bis(gamma-aminopropyl) tetramethyldisiloxane and 3, 3', 4, 4'-benzophenonetetracarboxylic dianhydride using a variety of latent crosslinking groups as endcappers. The prepolymers were isolated and characterized for solubility (in amide, chlorinated and ether solvents), melt flow and cure properties. The most promising systems, maleimide and acetylene terminated prepolymers, were selected for detailed study. Graphite cloth reinforced composites were prepared and properties compared with those of graphite/Kerimid 601, a commercially available bismaleimide. Mixtures of the maleimide terminated system with Kerimid 601, in varying proportions, were also studied.

  11. Oil additive process

    SciTech Connect

    Bishop, H.

    1988-10-18

    This patent describes a method of making an additive comprising: (a) adding 2 parts by volume of 3% sodium hypochlorite to 45 parts by volume of diesel oil fuel to form a sulphur-free fuel, (b) removing all water and foreign matter formed by the sodium hypochlorite, (c) blending 30 parts by volume of 24% lead naphthenate with 15 parts by volume of the sulphur-free fuel and 15 parts by volume of light-weight mineral oil to form a blended mixture, and (d) heating the blended mixture slowly and uniformly to 152°F.

  12. Experimental Mathematics and Computational Statistics

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  13. Truth, Damn Truth, and Statistics

    ERIC Educational Resources Information Center

    Velleman, Paul F.

    2008-01-01

    Statisticians and Statistics teachers often have to push back against the popular impression that Statistics teaches how to lie with data. Those who believe incorrectly that Statistics is solely a branch of Mathematics (and thus algorithmic), often see the use of judgment in Statistics as evidence that we do indeed manipulate our results. In the…

  14. Should College Algebra be a Prerequisite for Taking Psychology Statistics?

    ERIC Educational Resources Information Center

    Sibulkin, Amy E.; Butler, J. S.

    2008-01-01

    In order to consider whether a course in college algebra should be a prerequisite for taking psychology statistics, we recorded students' grades in elementary psychology statistics and in college algebra at a 4-year university. Students who earned credit in algebra prior to enrolling in statistics for the first time had a significantly higher mean…

  15. A Statistics Curriculum for the Undergraduate Chemistry Major

    ERIC Educational Resources Information Center

    Schlotter, Nicholas E.

    2013-01-01

    Our ability to statistically analyze data has grown significantly with the maturing of computer hardware and software. However, the evolution of our statistics capabilities has taken place without a corresponding evolution in the curriculum for the undergraduate chemistry major. Most faculty understands the need for a statistical educational…

  16. A Tablet-PC Software Application for Statistics Classes

    ERIC Educational Resources Information Center

    Probst, Alexandre C.

    2014-01-01

    A significant deficiency in the area of introductory statistics education exists: Student performance on standardized assessments after a full semester statistics course is poor and students report a very low desire to learn statistics. Research on the current generation of students indicates an affinity for technology and for multitasking.…

  17. Florida Library Directory with Statistics, 1997.

    ERIC Educational Resources Information Center

    Florida Dept. of State, Tallahassee. Div. of Library and Information Services.

    This 48th annual edition includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries. The first section consists of listings…

  18. A statistical development of entropy for the introductory physics course

    NASA Astrophysics Data System (ADS)

    Schoepf, David C.

    2002-02-01

    Many introductory physics texts introduce the statistical basis for the definition of entropy in addition to the Clausius definition, ΔS=q/T. We use a model based on equally spaced energy levels to present a way that the statistical definition of entropy can be developed at the introductory level. In addition to motivating the statistical definition of entropy, we also develop statistical arguments to answer the following questions: (i) Why does a system approach a state of maximum number of microstates? (ii) What is the equilibrium distribution of particles? (iii) What is the statistical basis of temperature? (iv) What is the statistical basis for the direction of spontaneous energy transfer? Finally, a correspondence between the statistical and the classical Clausius definitions of entropy is made.
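    The equally-spaced-energy-level model described above can be made concrete. For N oscillators sharing q quanta, the number of microstates is the stars-and-bars count Ω(N, q) = C(q + N - 1, q), and the statistical entropy is S = k ln Ω. A minimal sketch (working in units of k):

```python
from math import comb, log

def multiplicity(n_oscillators: int, quanta: int) -> int:
    """Microstate count for q indistinguishable quanta among N oscillators."""
    return comb(quanta + n_oscillators - 1, quanta)

def entropy(n_oscillators: int, quanta: int) -> float:
    """Statistical entropy S/k = ln(number of microstates)."""
    return log(multiplicity(n_oscillators, quanta))

print(multiplicity(3, 2))  # 6 ways to share 2 quanta among 3 oscillators
print(entropy(50, 100) > entropy(50, 10))  # True: more energy, more microstates
```

    Comparing such entropies for two subsystems exchanging quanta is exactly the argument that answers questions (i) and (iv) above: energy flows in the direction that increases the total number of microstates.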

  19. Statistics of the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Lachièze-Rey, Marc

    1989-09-01

    The universe appears from recent observational results to be a highly structured but also highly disordered medium. This accounts for the difficulties with a conventional statistical approach. Since the statistics of disordered media is an increasingly well-studied field in physics, it is tempting to try to adapt its methods for the study of the universe (the use of correlation functions also resulted from the adaptation of techniques from a very different field to astrophysics). This is already the case for the fractal analysis, which, mainly developed in microscopic statistics, is increasingly used in astrophysics. I suggest a new approach, also derived from the study of disordered media, both from the study of percolation clusters and from the dynamics of so-called “cluster aggregation” gelification models. This approach is briefly presented. Its main interest lies in two points. First, it suggests an analysis able to characterize features of unconventional statistics (those that seem to be present in the galaxy distribution and which conventional indicators are unable to take into account). It also appears a priori very convenient for a synthetic approach, since it can be related to the other indicators used up to now: the link with the void probability function is very straightforward. The connexion with fractals can be said to be contained in the method, since the objects defined during this analysis are themselves fractal: different kinds of fractal dimensions are very easy to extract from the analysis. The link with the percolation studies is also very natural since the method is adapted from the study of percolation clusters. It is also expected that the information concerning the topology is contained in this approach; this seems natural since the method is very sensitive to the topology of the distribution and possesses some common characteristics with the topology analysis already developed by Gott et al. (1986). The quantitative relations remain

  20. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  1. Sewage sludge additive

    NASA Technical Reports Server (NTRS)

    Kalvinskas, J. J.; Mueller, W. A.; Ingham, J. D. (Inventor)

    1980-01-01

    The additive is for a raw sewage treatment process of the type where settling tanks are used for the purpose of permitting the suspended matter in the raw sewage to be settled as well as to permit adsorption of the dissolved contaminants in the water of the sewage. The sludge, which settles down to the bottom of the settling tank is extracted, pyrolyzed and activated to form activated carbon and ash which is mixed with the sewage prior to its introduction into the settling tank. The sludge does not provide all of the activated carbon and ash required for adequate treatment of the raw sewage. It is necessary to add carbon to the process and instead of expensive commercial carbon, coal is used to provide the carbon supplement.

  2. Perspectives on Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Bourell, David L.

    2016-07-01

    Additive manufacturing (AM) has skyrocketed in visibility commercially and in the public sector. This article describes the development of this field from early layered manufacturing approaches of photosculpture, topography, and material deposition. Certain precursors to modern AM processes are also briefly described. The growth of the field over the last 30 years is presented. Included is the standard delineation of AM technologies into seven broad categories. The economics of AM part generation is considered, and the impacts of the economics on application sectors are described. On the basis of current trends, the future outlook will include a convergence of AM fabricators, mass-produced AM fabricators, enabling of topology optimization designs, and specialization in the AM legal arena. Long-term developments with huge impact are organ printing and volume-based printing.

  3. New addition curing polyimides

    NASA Technical Reports Server (NTRS)

    Frimer, Aryeh A.; Cavano, Paul

    1991-01-01

    In an attempt to improve the thermal-oxidative stability (TOS) of PMR-type polymers, the use of 1,4-phenylenebis(phenylmaleic anhydride), PPMA, was evaluated. Two series of nadic end-capped addition curing polyimides were prepared by imidizing PPMA with either 4,4'-methylene dianiline or p-phenylenediamine. The first resulted in improved solubility and increased resin flow while the latter yielded a compression molded neat resin sample with a T(sub g) of 408 C, close to 70 C higher than PMR-15. The performance of these materials in long term weight loss studies was below that of PMR-15, independent of post-cure conditions. These results can be rationalized in terms of the thermal lability of the pendant phenyl groups and the incomplete imidization of the sterically congested PPMA. The preparation of model compounds as well as future research directions are discussed.

  4. Who Needs Statistics? | Poster

    Cancer.gov

    You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.

  5. International petroleum statistics report

    SciTech Connect

    1995-10-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.

  6. Overweight and Obesity Statistics

    MedlinePlus


  7. Relationship between Graduate Students' Statistics Self-Efficacy, Statistics Anxiety, Attitude toward Statistics, and Social Support

    ERIC Educational Resources Information Center

    Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael

    2011-01-01

    Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may affect successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…

  8. Individualized additional instruction for calculus

    NASA Astrophysics Data System (ADS)

    Takata, Ken

    2010-10-01

    College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe another pedagogical model of Individualized Additional Instruction (IAI) that assesses each student frequently and prescribes further instruction and homework based on the student's performance. Our study compares two calculus classes, one taught with mandatory remedial IAI and the other without. The class with mandatory remedial IAI did significantly better on comprehensive multiple-choice exams, participated more frequently in classroom discussion and showed greater interest in theorem-proving and other advanced topics.

  9. International petroleum statistics report

    SciTech Connect

    1997-05-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  10. Statistical clumped isotope signatures

    PubMed Central

    Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.

    2016-01-01

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168
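    The anti-clumping effect described above reduces to a two-line calculation. If two indistinguishable atoms drawn from pools with heavy-isotope fractions p1 and p2 combine, the doubly-substituted probability is p1·p2, while the stochastic reference built from the bulk mean is ((p1 + p2)/2)^2, so the apparent anomaly is p1·p2/p̄² − 1 = −((p1 − p2)/(p1 + p2))², never positive. A minimal sketch with hypothetical isotope fractions:

```python
def statistical_clumping_anomaly(p1: float, p2: float) -> float:
    """Apparent clumped-isotope anomaly when two atoms from pools with
    heavy-isotope fractions p1, p2 combine but are referenced to the
    stochastic distribution computed from their bulk (average) composition."""
    bulk = (p1 + p2) / 2
    return p1 * p2 / bulk ** 2 - 1  # algebraically -((p1 - p2) / (p1 + p2)) ** 2

print(statistical_clumping_anomaly(0.01, 0.02))    # negative: anti-clumping
print(statistical_clumping_anomaly(0.015, 0.015))  # 0.0 for identical pools
```

    As the closed form shows, the size of the anomaly depends only on the relative difference of the two pools' isotope ratios, matching the abstract's claim.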

  12. Fragile entanglement statistics

    NASA Astrophysics Data System (ADS)

    Brody, Dorje C.; Hughston, Lane P.; Meier, David M.

    2015-10-01

    If X and Y are independent, Y and Z are independent, and so are X and Z, one might be tempted to conclude that X, Y, and Z are independent. But it has long been known in classical probability theory that, intuitive as it may seem, this is not true in general. In quantum mechanics one can ask whether analogous statistics can emerge for configurations of particles in certain types of entangled states. The explicit construction of such states, along with the specification of suitable sets of observables that have the purported statistical properties, is not entirely straightforward. We show that an example of such a configuration arises in the case of an N-particle GHZ state, and we are able to identify a family of observables with the property that the associated measurement outcomes are independent for any choice of 2,3,\\ldots ,N-1 of the particles, even though the measurement outcomes for all N particles are not independent. Although such states are highly entangled, the entanglement turns out to be ‘fragile’, i.e. the associated density matrix has the property that if one traces out the freedom associated with even a single particle, the resulting reduced density matrix is separable.
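    The classical fact cited at the start, that pairwise independence does not imply mutual independence, has a standard minimal example: two fair bits X and Y, with Z = X XOR Y. A quick enumeration check:

```python
from itertools import product

# Uniform distribution over (X, Y); Z is determined as X XOR Y.
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

def prob(pred):
    """Probability of an event under the uniform distribution on outcomes."""
    return sum(pred(o) for o in outcomes) / len(outcomes)

# Every pair is independent: P(a=1, b=1) = 1/4 = P(a=1) * P(b=1).
for i, j in [(0, 1), (0, 2), (1, 2)]:
    joint = prob(lambda o: o[i] == 1 and o[j] == 1)
    assert joint == prob(lambda o: o[i] == 1) * prob(lambda o: o[j] == 1)

# ...but the triple is not: X = Y = Z = 1 is impossible.
print(prob(lambda o: o == (1, 1, 1)))  # 0.0, while full independence would give 0.125
```

    The GHZ construction in the paper is the quantum analogue: measurement outcomes independent for any proper subset of particles, yet dependent for the full set.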

  13. Statistically determined nickel cadmium performance relationships

    NASA Technical Reports Server (NTRS)

    Gross, Sidney

    1987-01-01

    A statistical analysis was performed on sealed nickel cadmium cell manufacturing data and cell matching data. The cells subjected to the analysis were 30 Ah sealed Ni/Cd cells, made by General Electric. A total of 213 data parameters was investigated, including such information as plate thickness, amount of electrolyte added, weight of active material, positive and negative capacity, and charge-discharge behavior. Statistical analyses were made to determine possible correlations between test events. The data show many departures from normal distribution. Product consistency from one lot to another is an important attribute for aerospace applications. It is clear from these examples that there are some significant differences between lots. Statistical analyses are seen to be an excellent way to spot those differences. Also, it is now proven beyond doubt that battery testing is one of the leading causes of statistics.

  14. MAGNETOMETRY, SELF-POTENTIAL, AND SEISMIC - ADDITIONAL GEOPHYSICAL METHODS HAVING POTENTIALLY SIGNIFICANT FUTURE UTILIZATION IN AGRICULTURE

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Geophysical methods can provide important information in agricultural settings, and the use of these techniques is becoming increasingly widespread. Magnetometry, self-potential, and seismic are three geophysical methods, all of which have the potential for substantial future use in agriculture, ...

  15. Additions and deletions to the known cerambycidae (Coleoptera) of Bolivia

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An additional 137 species and two tribes are added to the known cerambycid fauna of Bolivia while 12 species are deleted. Comments and statistics regarding the growth of knowledge on the Bolivian Cerambycid fauna and species endemicity are included....

  16. Statistical Literacy: Developing a Youth and Adult Education Statistical Project

    ERIC Educational Resources Information Center

    Conti, Keli Cristina; Lucchesi de Carvalho, Dione

    2014-01-01

    This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…

  17. Understanding Statistics and Statistics Education: A Chinese Perspective

    ERIC Educational Resources Information Center

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  18. Statistics Anxiety and Business Statistics: The International Student

    ERIC Educational Resources Information Center

    Bell, James A.

    2008-01-01

    Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…

  19. Wide Wide World of Statistics: International Statistics on the Internet.

    ERIC Educational Resources Information Center

    Foudy, Geraldine

    2000-01-01

    Explains how to find statistics on the Internet, especially international statistics. Discusses advantages over print sources, including convenience, currency of information, cost effectiveness, and value-added formatting; sources of international statistics; United Nations agencies; search engines and power searching; and evaluating sources. (LRW)

  20. Teaching Statistics in Biology: Using Inquiry-based Learning to Strengthen Understanding of Statistical Analysis in Biology Laboratory Courses

    PubMed Central

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9%, improvement p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study. PMID:18765754

  1. Prognostic Significance of POLE Proofreading Mutations in Endometrial Cancer

    PubMed Central

    Church, David N.; Stelloo, Ellen; Nout, Remi A.; Valtcheva, Nadejda; Depreeuw, Jeroen; ter Haar, Natalja; Noske, Aurelia; Amant, Frederic; Wild, Peter J.; Lambrechts, Diether; Jürgenliemk-Schulz, Ina M.; Jobsen, Jan J.; Smit, Vincent T. H. B. M.; Creutzberg, Carien L.; Bosse, Tjalling

    2015-01-01

    Background: Current risk stratification in endometrial cancer (EC) results in frequent over- and underuse of adjuvant therapy, and may be improved by novel biomarkers. We examined whether POLE proofreading mutations, recently reported in about 7% of ECs, predict prognosis. Methods: We performed targeted POLE sequencing in ECs from the PORTEC-1 and -2 trials (n = 788), and analyzed clinical outcome according to POLE status. We combined these results with those from three additional series (n = 628) by meta-analysis to generate multivariable-adjusted, pooled hazard ratios (HRs) for recurrence-free survival (RFS) and cancer-specific survival (CSS) of POLE-mutant ECs. All statistical tests were two-sided. Results: POLE mutations were detected in 48 of 788 (6.1%) ECs from PORTEC-1 and -2 and were associated with high tumor grade (P < .001). Women with POLE-mutant ECs had fewer recurrences (6.2% vs 14.1%) and EC deaths (2.3% vs 9.7%), though, in the total PORTEC cohort, differences in RFS and CSS were not statistically significant (multivariable-adjusted HR = 0.43, 95% CI = 0.13 to 1.37, P = .15; HR = 0.19, 95% CI = 0.03 to 1.44, P = .11, respectively). However, of 109 grade 3 tumors, 0 of 15 POLE-mutant ECs recurred, compared with 29 of 94 (30.9%) POLE wild-type cancers; this was reflected in statistically significantly greater RFS (multivariable-adjusted HR = 0.11, 95% CI = 0.001 to 0.84, P = .03). In the additional series, there were no EC-related events in any of 33 POLE-mutant ECs, resulting in a multivariable-adjusted, pooled HR of 0.33 for RFS (95% CI = 0.12 to 0.91, P = .03) and 0.26 for CSS (95% CI = 0.06 to 1.08, P = .06). Conclusion: POLE proofreading mutations predict favorable EC prognosis, independently of other clinicopathological variables, with the greatest effect seen in high-grade tumors. This novel biomarker may help to reduce overtreatment in EC. PMID:25505230

  2. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what the models allow us to say about the device under test (DUT). In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are concepts of Bayesian statistics, data fitting, and bounding rates.

  3. Improving extreme value statistics.

    PubMed

    Shekhawat, Ashivni

    2014-11-01

    The rate of convergence in extreme value statistics is nonuniversal and can be arbitrarily slow. Further, the relative error can be unbounded in the tail of the approximation, leading to difficulty in extrapolating the extreme value fit beyond the available data. We introduce the T method, and show that by using simple nonlinear transformations the extreme value approximation can be rendered rapidly convergent in the bulk, and asymptotic in the tail, thus fixing both issues. The transformations are often parametrized by just one parameter, which can be estimated numerically. The classical extreme value method is shown to be a special case of the proposed method. We demonstrate that vastly improved results can be obtained with almost no extra cost. PMID:25493780
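The record's "T method" transformations are not reproduced here, but the classical extreme value method it generalizes can be sketched: split the data into blocks and fit a generalized extreme value (GEV) distribution to the block maxima. The data below are simulated, purely for illustration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Classical block-maxima approach: exponential data, whose block maxima
# converge to a Gumbel distribution (GEV shape parameter near zero).
data = rng.exponential(scale=1.0, size=10_000)
block_maxima = data.reshape(100, 100).max(axis=1)

# Maximum-likelihood GEV fit to the 100 block maxima.
shape, loc, scale = genextreme.fit(block_maxima)
print(f"shape = {shape:.2f}, loc = {loc:.2f}, scale = {scale:.2f}")
```

For exponential blocks of size 100, the location parameter should sit near ln(100) ≈ 4.6 and the fitted shape near zero, illustrating the slow, nonuniversal convergence the abstract refers to.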

  4. Statistical test of anarchy

    NASA Astrophysics Data System (ADS)

    de Gouvêa, André; Murayama, Hitoshi

    2003-10-01

    “Anarchy” is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|2, the remaining unknown “angle” of the leptonic mixing matrix.
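The one-sample Kolmogorov-Smirnov test used in this record compares an observed sample against a reference distribution. A generic sketch (with simulated data standing in for the Haar-measure variables, not the actual neutrino data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Under a hypothesis like anarchy, the transformed mixing variables are
# uniform; here we simply test a simulated sample against Uniform(0, 1).
sample = rng.uniform(0.0, 1.0, size=50)

# One-sample KS test: distance between empirical and reference CDFs.
ks_stat, p_value = stats.kstest(sample, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.2f}")
```

A large p-value, as the authors found (64%), means the data are consistent with the hypothesized distribution.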

  5. Statistical physics "Beyond equilibrium"

    SciTech Connect

    Ecke, Robert E

    2009-01-01

    The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.

  6. Why Tsallis statistics?

    NASA Astrophysics Data System (ADS)

    Baranger, Michel

    2002-03-01

    It is a remarkable fact that the traditional teaching of thermodynamics, as reflected in the textbooks and including the long developments about ensembles and thermodynamic functions, is almost entirely about systems in equilibrium. The time variable does not enter. There is one exception, however. The single most important item, the flagship of the thermodynamic navy, the second law, is about the irreversibility of the time evolution of systems out of equilibrium. This is a bizarre situation, to say the least; a glaring case of the drunk man looking for his key under the lamp-post, when he knows that he lost it in the dark part of the street. The moment has come for us to go looking in the dark part, the behavior of systems as a function of time. We have been given a powerful new flashlight, chaos theory. We should use it. There, on the formerly dark pavement, we can find Tsallis statistics.

  7. Fast approximate motif statistics.

    PubMed

    Nicodème, P

    2001-01-01

    We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acids random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
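For a motif with negligible self-overlap, the expected number of matches in a Bernoulli random text is simply the number of starting positions times the per-position match probability. A minimal sketch for an exact-word motif (PROSITE patterns with character classes and gaps need the fuller machinery of the article):

```python
from math import prod

def expected_occurrences(motif, probs, text_len):
    """Expected matches of an exact motif in a Bernoulli random text:
    (text_len - len(motif) + 1) starting positions, each matching with
    probability prod(probs[c] for c in motif)."""
    p_match = prod(probs[c] for c in motif)
    return (text_len - len(motif) + 1) * p_match

# Hypothetical letter frequencies (nonuniform Bernoulli model).
probs = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}
e = expected_occurrences("ACG", probs, 1000)
print(e)  # (1000 - 3 + 1) * 0.3 * 0.2 * 0.2 = 11.976
```
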

  8. Statistical design controversy

    SciTech Connect

    Evans, L.S.; Hendrey, G.R.; Thompson, K.H.

    1985-02-01

    This article responds to criticisms received by Evans, Hendrey, and Thompson that their earlier article was biased because of omissions and misrepresentations. The authors contend that their conclusion that experimental designs having only one plot per treatment "were, from the outset, not capable of differentiating between treatment effects and field-position effects" remains valid and is supported by decades of agronomic research. Irving, Troiano, and McCune took the article to be a review of all studies of acidic rain effects on soybeans. It was not. The article was written out of concern about comparisons being made among studies that purport to evaluate effects of acid deposition on field-grown crops while implicitly assuming that all of the studies are of equal scientific value. They are not. Only experimental approaches that are well focused and designed with appropriate agronomic and statistical procedures should be used for credible regional and national assessments of crop inventories. 12 references.

  9. Statistical Thermodynamics of Biomembranes

    PubMed Central

    Devireddy, Ram V.

    2010-01-01

    An overview of the major issues involved in the statistical thermodynamic treatment of phospholipid membranes at the atomistic level is summarized: thermodynamic ensembles, initial configuration (or the physical system being modeled), force field representation as well as the representation of long-range interactions. This is followed by a description of the various ways that the simulated ensembles can be analyzed: area of the lipid, mass density profiles, radial distribution functions (RDFs), water orientation profile, deuterium order parameter, free energy profiles and void (pore) formation; with particular focus on the results obtained from our recent molecular dynamic (MD) simulations of phospholipids interacting with dimethylsulfoxide (Me2SO), a commonly used cryoprotective agent (CPA). PMID:19460363

  10. Statistical crack mechanics

    SciTech Connect

    Dienes, J.K.

    1983-01-01

    An alternative to the use of plasticity theory to characterize the inelastic behavior of solids is to represent the flaws by statistical methods. We have taken such an approach to study fragmentation because it offers a number of advantages. Foremost among these is that, by considering the effects of flaws, it becomes possible to address the underlying physics directly. For example, we have been able to explain why rocks exhibit large strain-rate effects (a consequence of the finite growth rate of cracks), why a spherical explosive imbedded in oil shale produces a cavity with a nearly square section (opening of bedding cracks) and why propellants may detonate following low-speed impact (a consequence of frictional hot spots).

  11. The Statistical Consulting Center for Astronomy (SCCA)

    NASA Technical Reports Server (NTRS)

    Akritas, Michael

    2001-01-01

    The process by which raw astronomical data acquisition is transformed into scientifically meaningful results and interpretation typically involves many statistical steps. Traditional astronomy limits itself to a narrow range of old and familiar statistical methods: means and standard deviations; least-squares methods like chi(sup 2) minimization; and simple nonparametric procedures such as the Kolmogorov-Smirnov tests. These tools are often inadequate for the complex problems and datasets under investigation, and recent years have witnessed an increased usage of maximum-likelihood, survival analysis, multivariate analysis, wavelet and advanced time-series methods. The Statistical Consulting Center for Astronomy (SCCA) assisted astronomers with the use of sophisticated tools, and matched these tools with specific problems. The SCCA operated with two professors of statistics and a professor of astronomy working together. Questions were received by e-mail, and were discussed in detail with the questioner. Summaries of those questions and answers leading to new approaches were posted on the Web (www.state.psu.edu/mga/SCCA). In addition to serving individual astronomers, the SCCA established a Web site for general use that provides hypertext links to selected on-line public-domain statistical software and services. The StatCodes site (www.astro.psu.edu/statcodes) provides over 200 links in the areas of: Bayesian statistics; censored and truncated data; correlation and regression, density estimation and smoothing, general statistics packages and information; image analysis; interactive Web tools; multivariate analysis; multivariate clustering and classification; nonparametric analysis; software written by astronomers; spatial statistics; statistical distributions; time series analysis; and visualization tools. StatCodes has received a remarkably high and constant hit rate of 250 hits/week (over 10,000/year) since its inception in mid-1997. It is of interest to

  12. O-Type and Other Hot Binaries: Current Statistics of the USNO Database

    NASA Astrophysics Data System (ADS)

    Mason, B. D.

    The first speckle survey of O stars (Mason et al. 1998) conducted on NOAO 4-m telescopes in 1994-6 had success far in excess of our expectations. In addition to the conclusions in the multiplicity analysis, many of the new systems which were first resolved in this paper have very significant astrophysical interest. This updates the statistics from 1998 based on new results from the double star catalogs maintained at the U.S. Naval Observatory.

  13. Simulating statistics of lightning-induced and man-made fires

    NASA Astrophysics Data System (ADS)

    Krenn, R.; Hergarten, S.

    2009-04-01

    The frequency-area distributions of forest fires show power-law behavior with scaling exponents α in a quite narrow range, relating wildfire research to the theoretical framework of self-organized criticality. Examples of self-organized critical behavior can be found in computer simulations of simple cellular automata. The established self-organized critical Drossel-Schwabl forest fire model (DS-FFM) is one of the most widespread models in this context. Despite its qualitative agreement with event-size statistics from nature, its applicability is still questioned. Apart from general concerns that the DS-FFM apparently oversimplifies the complex nature of forest dynamics, it significantly overestimates the frequency of large fires. We present a straightforward modification of the model rules that increases the scaling exponent α by approximately 1/3 and brings the simulated event-size statistics close to those observed in nature. In addition, combined simulations of both the original and the modified model predict a dependence of the overall distribution on the ratio of lightning-induced and man-made fires as well as a difference between their respective event-size statistics. The increase of the scaling exponent with decreasing lightning probability as well as the splitting of the partial distributions are confirmed by the analysis of the Canadian Large Fire Database. As a consequence, lightning-induced and man-made forest fires cannot be treated separately in wildfire modeling, hazard assessment and forest management.
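The DS-FFM mentioned above is a simple cellular automaton: trees grow on empty sites, and a lightning strike burns the whole connected cluster it hits. The following is a minimal sketch of that dynamic (grid size, growth and lightning probabilities are arbitrary toy values, and the update rule is simplified relative to the published model).

```python
import random

random.seed(7)

# Toy Drossel-Schwabl-style forest-fire automaton.
N, p, f = 32, 0.05, 0.04   # grid size, tree-growth prob, lightning prob
grid = [[False] * N for _ in range(N)]
fire_sizes = []

def burn(i, j):
    """Flood-fill the 4-connected tree cluster containing (i, j)."""
    stack, size = [(i, j)], 0
    while stack:
        x, y = stack.pop()
        if 0 <= x < N and 0 <= y < N and grid[x][y]:
            grid[x][y] = False
            size += 1
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return size

for _ in range(1500):
    # Growth sweep: each empty site sprouts a tree with probability p.
    for i in range(N):
        for j in range(N):
            if not grid[i][j] and random.random() < p:
                grid[i][j] = True
    # Lightning: with probability f, strike a random site.
    if random.random() < f:
        i, j = random.randrange(N), random.randrange(N)
        if grid[i][j]:
            fire_sizes.append(burn(i, j))

print(f"{len(fire_sizes)} fires, largest = {max(fire_sizes, default=0)}")
```

The event-size statistics discussed in the record come from histograms of `fire_sizes` over much longer runs and larger grids.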

  14. Statistical Analysis of Single-Trial Granger Causality Spectra

    PubMed Central

    Brovelli, Andrea

    2012-01-01

    Granger causality analysis is becoming central for the analysis of interactions between neural populations and oscillatory networks. However, it is currently unclear whether single-trial estimates of Granger causality spectra can be used reliably to assess directional influence. We addressed this issue by combining single-trial Granger causality spectra with statistical inference based on general linear models. The approach was assessed on synthetic and neurophysiological data. Synthetic bivariate data was generated using two autoregressive processes with unidirectional coupling. We simulated two hypothetical experimental conditions: the first mimicked a constant and unidirectional coupling, whereas the second modelled a linear increase in coupling across trials. The statistical analysis of single-trial Granger causality spectra, based on t-tests and linear regression, successfully recovered the underlying pattern of directional influence. In addition, we characterised the minimum number of trials and coupling strengths required for significant detection of directionality. Finally, we demonstrated the relevance for neurophysiology by analysing two local field potentials (LFPs) simultaneously recorded from the prefrontal and premotor cortices of a macaque monkey performing a conditional visuomotor task. Our results suggest that the combination of single-trial Granger causality spectra and statistical inference provides a valuable tool for the analysis of large-scale cortical networks and brain connectivity. PMID:22649482

  15. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...(b) the owner or operator must use the Student's t-test to determine statistically significant... (specific conductance, total organic carbon, and total organic halogen) a single-tailed Student's t-test must be used to test at the 0.01 level of significance for significant increases over background....

  16. Evaluation of removable statistical interaction for binary traits.

    PubMed

    Satagopan, Jaya M; Elston, Robert C

    2013-03-30

    This paper is concerned with evaluating whether an interaction between two sets of risk factors for a binary trait is removable and, when it is removable, fitting a parsimonious additive model using a suitable link function to estimate the disease odds (on the natural logarithm scale). Statisticians define the term 'interaction' as a departure from additivity in a linear model on a specific scale on which the data are measured. Certain interactions may be eliminated via a transformation of the outcome such that the relationship between the risk factors and the outcome is additive on the transformed scale. Such interactions are known as removable interactions. We develop a novel test statistic for detecting the presence of a removable interaction in case-control studies. We consider the Guerrero and Johnson family of transformations and show that this family constitutes an appropriate link function for fitting an additive model when an interaction is removable. We use simulation studies to examine the type I error and power of the proposed test and to show that, when an interaction is removable, an additive model based on the Guerrero and Johnson link function leads to more precise estimates of the disease odds parameters and a better fit. We illustrate the proposed test and use of the transformation by using case-control data from three published studies. Finally, we indicate how one can check that, after transformation, no further interaction is significant. PMID:23018341

  17. Evaluation of removable statistical interaction for binary traits

    PubMed Central

    Satagopan, Jaya M.; Elston, Robert C.

    2013-01-01

    This paper is concerned with evaluating whether an interaction between two sets of risk factors for a binary trait is removable and fitting a parsimonious additive model using a suitable link function to estimate the disease odds (on the natural logarithm scale) when an interaction is removable. Statisticians define the term “interaction” as a departure from additivity in a linear model on a specific scale on which the data are measured. Certain interactions may be eliminated via a transformation of the outcome such that the relationship between the risk factors and the outcome is additive on the transformed scale. Such interactions are known as removable interactions. We develop a novel test statistic for detecting the presence of a removable interaction in case-control studies. We consider the Guerrero and Johnson family of transformations and show that this family constitutes an appropriate link function for fitting an additive model when an interaction is removable. We use simulation studies to examine the type I error and power of the proposed test and to show that an additive model based on the Guerrero and Johnson link function leads to more precise estimates of the disease odds parameters and a better fit when an interaction is removable. The proposed test and use of the transformation are illustrated using case-control data from three published studies. Finally, we indicate how one can check that, after transformation, no further interaction is significant. PMID:23018341

  18. Effect of Operating Parameters and Chemical Additives on Crystal Habit and Specific Cake Resistance of Zinc Hydroxide Precipitates

    SciTech Connect

    Alwin, Jennifer Louise

    1999-08-01

    The effect of process parameters and chemical additives on the specific cake resistance of zinc hydroxide precipitates was investigated. The ability of a slurry to be filtered is dependent upon the particle habit of the solid, and the particle habit is influenced by certain process variables. The process variables studied include neutralization temperature, agitation type, and alkalinity source used for neutralization. Several commercially available chemical additives advertised to aid in solid/liquid separation were also examined in conjunction with hydroxide precipitation. A statistical analysis revealed that the neutralization temperature and the source of alkalinity were statistically significant in influencing the specific cake resistance of zinc hydroxide precipitates in this study. The type of agitation did not significantly affect the specific cake resistance of zinc hydroxide precipitates. The use of chemical additives in conjunction with hydroxide precipitation had a favorable effect on the filterability. The morphology of the hydroxide precipitates was analyzed using scanning electron microscopy.

  19. HPV-Associated Cancers Statistics

    MedlinePlus


  20. Key Statistics for Thyroid Cancer

    MedlinePlus


  1. Muscular Dystrophy: Data and Statistics

    MedlinePlus


  2. Heart Disease and Stroke Statistics

    MedlinePlus


  3. Statistical methods in physical mapping

    SciTech Connect

    Nelson, D.O.

    1995-05-01

    One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.

  4. Statistics Anxiety and Instructor Immediacy

    ERIC Educational Resources Information Center

    Williams, Amanda S.

    2010-01-01

    The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…

  5. Statistics: It's in the Numbers!

    ERIC Educational Resources Information Center

    Deal, Mary M.; Deal, Walter F., III

    2007-01-01

    Mathematics and statistics play important roles in peoples' lives today. A day hardly passes that they are not bombarded with many different kinds of statistics. As consumers they see statistical information as they surf the web, watch television, listen to their satellite radios, or even read the nutrition facts panel on a cereal box in the…

  6. Statistical log analysis made practical

    SciTech Connect

    Mitchell, W.K.; Nelson, R.J. )

    1991-06-01

    This paper discusses the advantages of a statistical approach to log analysis. Statistical techniques use inverse methods to calculate formation parameters. The use of statistical techniques has been limited, however, by the complexity of the mathematics and lengthy computer time required to minimize traditionally used nonlinear equations.

  7. Invention Activities Support Statistical Reasoning

    ERIC Educational Resources Information Center

    Smith, Carmen Petrick; Kenlan, Kris

    2016-01-01

    Students' experiences with statistics and data analysis in middle school are often limited to little more than making and interpreting graphs. Although students may develop fluency in statistical procedures and vocabulary, they frequently lack the skills necessary to apply statistical reasoning in situations other than clear-cut textbook examples.…

  8. Teaching Statistics Online Using "Excel"

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  9. Explorations in Statistics: the Bootstrap

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
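The bootstrap idea described in this record, resampling the data with replacement to estimate the variability of a sample statistic, can be sketched in a few lines (the sample values are hypothetical):

```python
import random
import statistics

random.seed(42)

# Hypothetical sample; we bootstrap the variability of its mean.
sample = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 5.8, 4.7]

# Draw many resamples with replacement and record each resample's mean.
boot_means = []
for _ in range(5000):
    resample = random.choices(sample, k=len(sample))
    boot_means.append(statistics.mean(resample))

# The spread of the bootstrap means approximates the standard error,
# which for the mean can be checked against the analytic s / sqrt(n).
boot_se = statistics.stdev(boot_means)
analytic_se = statistics.stdev(sample) / len(sample) ** 0.5
print(f"bootstrap SE = {boot_se:.3f}, analytic SE = {analytic_se:.3f}")
```

The appeal of the bootstrap is that the same resampling loop works for statistics (medians, correlations, ratios) with no simple analytic standard error.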

  10. Representative Ensembles in Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Yukalov, V. I.

    The notion of representative statistical ensembles, correctly representing statistical systems, is strictly formulated. This notion allows for a proper description of statistical systems, avoiding inconsistencies in theory. As an illustration, a Bose-condensed system is considered. It is shown that a self-consistent treatment of the latter, using a representative ensemble, always yields a conserving and gapless theory.

  11. Use of Statistics by Librarians.

    ERIC Educational Resources Information Center

    Christensen, John O.

    1988-01-01

    Description of common errors found in the statistical methodologies of research carried out by librarians, focuses on sampling and generalizability. The discussion covers the need to either adapt library research to the statistical abilities of librarians or to educate librarians in the proper use of statistics. (15 references) (CLB)

  12. Statistical Analysis Experiment for Freshman Chemistry Lab.

    ERIC Educational Resources Information Center

    Salzsieder, John C.

    1995-01-01

    Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…

  13. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
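The Bayesian-estimation step of such an approach can be sketched independently of any symbolic executor: each Monte Carlo sample either reaches the target event or not, and a Beta posterior summarizes the probability of reaching it. The sampler and the hit probability below are hypothetical stand-ins.

```python
import random

random.seed(1)

def target_hit():
    # Stand-in for sampling one program path and checking whether it
    # reaches the target event; the true hit probability of 0.2 is a
    # hypothetical value for this sketch.
    return random.random() < 0.2

n = 2000
hits = sum(target_hit() for _ in range(n))

# With a uniform Beta(1, 1) prior, the posterior over the hit probability
# is Beta(1 + hits, 1 + n - hits); its mean is the Laplace estimate.
posterior_mean = (1 + hits) / (2 + n)
print(f"hits = {hits}/{n}, posterior mean = {posterior_mean:.3f}")
```

Hypothesis tests (e.g. "is the violation probability below a threshold?") then reduce to integrals of this Beta posterior.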

  14. Statistical mechanics and the ontological interpretation

    NASA Astrophysics Data System (ADS)

    Bohm, D.; Hiley, B. J.

    1996-06-01

    To complete our ontological interpretation of quantum theory we have to include a treatment of quantum statistical mechanics. The basic concepts in the ontological approach are the particle and the wave function. The density matrix cannot play a fundamental role here. Therefore quantum statistical mechanics will require a further statistical distribution over wave functions in addition to the distribution of particles that have a specified wave function. Ultimately the wave function of the universe will be required, but we show that if the universe is not in thermodynamic equilibrium then it can be treated in terms of weakly interacting large-scale constituents that are very nearly independent of each other. In this way we obtain the same results as those of the usual approach within the framework of the ontological interpretation.

  15. Proof of the Spin-Statistics Theorem

    NASA Astrophysics Data System (ADS)

    Santamato, Enrico; De Martini, Francesco

    2015-07-01

    The traditional standard quantum mechanics theory is unable to solve the spin-statistics problem, i.e. to justify the utterly important "Pauli Exclusion Principle". A complete and straightforward solution of the spin-statistics problem is presented on the basis of the "conformal quantum geometrodynamics" theory. This theory provides a Weyl-gauge invariant formulation of the standard quantum mechanics and reproduces successfully all relevant quantum processes including the formulation of Dirac's or Schrödinger's equation, of Heisenberg's uncertainty relations and of the nonlocal EPR correlations. When the conformal quantum geometrodynamics is applied to a system made of many identical particles with spin, an additional constant property of all elementary particles enters naturally into play: the "intrinsic helicity". This property, not considered in the Standard Quantum Mechanics, determines the correct spin-statistics connection observed in Nature.

  16. Clastogenic effects of food additive citric acid in human peripheral lymphocytes

    PubMed Central

    Ünal, Fatma; Yüzbaşıoğlu, Deniz; Aksoy, Hüseyin

    2008-01-01

    Clastogenic properties of the food additive citric acid, commonly used as an antioxidant, were analysed in human peripheral blood lymphocytes. Citric acid induced a significant increase of chromosomal aberrations (CAs) at all the concentrations and treatment periods tested. Citric acid significantly decreased mitotic index (MI) at 100 and 200 μg ml−1 concentrations at 24 h, and in all concentrations at 48 h. However, it did not decrease the replication index (RI) significantly. Citric acid also significantly increased sister chromatid exchanges (SCEs) at 100 and 200 μg ml−1 concentrations at 24 h, and in all concentrations at 48 h. This chemical significantly increased the micronuclei frequency (MN) compared to the negative control. It also decreased the cytokinesis-block proliferation index (CBPI), but this result was not statistically significant. PMID:19002851

  17. Topics in statistical mechanics

    SciTech Connect

    Elser, V.

    1984-05-01

    This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.

  18. International petroleum statistics report

    SciTech Connect

    1996-05-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1984 through 1994.

  19. Statistical properties of exoplanets

    NASA Astrophysics Data System (ADS)

    Udry, Stéphane

    Since the detection a decade ago of the planetary companion of 51 Peg, more than 165 extrasolar planets have been unveiled by radial-velocity measurements. They present a wide variety of characteristics such as large masses with small orbital separations, high eccentricities, period resonances in multi-planet systems, etc. Meaningful features of the statistical distributions of the orbital parameters or parent stellar properties have emerged. We discuss them in the context of the constraints they provide for planet-formation models and in comparison to Neptune-mass planets in short-period orbits recently detected by radial-velocity surveys, thanks to new instrumental developments and adequate observing strategy. We expect continued improvement in velocity precision and anticipate the detection of Neptune-mass planets in longer-period orbits and even lower-mass planets in short-period orbits, giving us new information on the mass distribution function of exoplanets. Finally, the role of radial-velocity follow-up measurements of transit candidates is emphasized.

  20. International petroleum statistics report

    SciTech Connect

    1995-07-27

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  1. International petroleum statistics report

    SciTech Connect

    1997-07-01

    The International Petroleum Statistics Report is a monthly publication that provides current international data. The report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent 12 months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1996; OECD stocks from 1973 through 1996; and OECD trade from 1986 through 1996.

  2. International petroleum statistics report

    SciTech Connect

    1995-11-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  3. International petroleum statistics report

    SciTech Connect

    1996-10-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  4. Statistical mechanics of nucleosomes

    NASA Astrophysics Data System (ADS)

    Chereji, Razvan V.

    Eukaryotic cells contain long DNA molecules (about two meters for a human cell) that are tightly packed inside micrometric nuclei. Nucleosomes are the basic packaging unit of DNA that allows this millionfold compaction. A longstanding puzzle is to understand the principles that allow cells both to organize their genomes into chromatin fibers in the crowded space of their nuclei and to keep the DNA accessible to many factors and enzymes. With nucleosomes covering about three quarters of the DNA, their positions are essential because they influence which genes can be regulated by transcription factors and which cannot. We study physical models that predict the genome-wide organization of nucleosomes and the relevant energies that dictate this organization. In the last five years, the study of chromatin has seen many important advances. In particular, in the field of nucleosome positioning, new techniques for identifying nucleosomes and competing DNA-binding factors have appeared, such as chemical mapping with hydroxyl radicals and ChIP-exo; the resolution of nucleosome maps has increased through paired-end sequencing; and the price of sequencing an entire genome has decreased. We present a rigorous statistical mechanics model that explains the recent experimental results by taking into account nucleosome unwrapping, competition between different DNA-binding proteins, and the interactions both between histones and DNA and between neighboring histones. We show a series of predictions of our new model, all in agreement with experimental observations.

  5. Response of Dissolved Organic Matter to Warming and Nitrogen Addition

    NASA Astrophysics Data System (ADS)

    Choi, J. H.; Nguyen, H.

    2014-12-01

    Dissolved organic matter (DOM) is a ubiquitous mixture of soluble organic components. Since DOM is produced from the terrestrial leachate of various soil types, soil may influence the chemistry and biology of freshwater through the input of leachate and run-off. Increased temperature under climate change could dramatically change the DOM characteristics of soils through an enhanced decomposition rate and losses of carbon from soil organic matter. In addition, increased N-deposition affects DOM leaching from soils by changing the carbon cycling and the decomposition rate of soil decay. In this study, we conducted growth chamber experiments using two types of soil (wetland and forest) under conditions of increased temperature and N-deposition in order to investigate how warming and nitrogen addition influence the characteristics of the DOM leaching from different soil types. This leachate controls the quantity and quality of DOM in surface water systems. After 10 months of incubation, the dissolved organic carbon (DOC) concentrations decreased for almost all samples, in the range of 7.6 to 87.3% (ANOVA, p<0.05). The specific UV absorbance (SUVA) values also decreased for almost all samples after the first 3 months and then increased gradually afterward, in the range of 3.3 to 108.4%. Both time and the interaction between time and temperature had statistically significant effects on the SUVA values (MANOVA, p<0.05). The humification index (HIX) showed significant increasing trends with incubation time and temperature for almost all samples (ANOVA, p<0.05). Larger decreases in the DOC values and increases in HIX were observed at higher temperatures, whereas the opposite trend was observed for samples with N-addition. The PARAFAC results showed that three fluorescence components: terrestrial humic-like (C1), microbial humic-like (C2), and protein-like (C3), constituted the fluorescence matrices of the soil samples.
During the experiment, labile DOM from the soils was
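    The ANOVA comparisons reported above reduce to an F statistic; a minimal pure-Python sketch (the groups below are invented toy values, not the study's DOC measurements):

```python
from statistics import mean

def one_way_anova_f(*groups):
    # F = (between-group mean square) / (within-group mean square)
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical measurements under three temperature treatments
f = one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7])
print(f)  # 13.0
```

    For these toy groups F(2, 6) = 13.0, above the tabulated 5% critical value of about 5.14, so the group means would be declared significantly different.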

  6. Researchers' Perceptions of Statistical Significance Contribute to Bias in Health and Exercise Science

    ERIC Educational Resources Information Center

    Buchanan, Taylor L.; Lohse, Keith R.

    2016-01-01

    We surveyed researchers in the health and exercise sciences to explore different areas and magnitudes of bias in researchers' decision making. Participants were presented with scenarios (testing a central hypothesis with p = 0.06 or p = 0.04) in a random order and surveyed about what they would do in each scenario. Participants showed significant…

  7. Statistical, Practical, Clinical, and Personal Significance: Definitions and Applications in Speech-Language Pathology

    ERIC Educational Resources Information Center

    Bothe, Anne K.; Richardson, Jessica D.

    2011-01-01

    Purpose: To discuss constructs and methods related to assessing the magnitude and the meaning of clinical outcomes, with a focus on applications in speech-language pathology. Method: Professionals in medicine, allied health, psychology, education, and many other fields have long been concerned with issues referred to variously as practical…

  8. CONFIDENCE INTERVALS AND STANDARD ERROR INTERVALS: WHAT DO THEY MEAN IN TERMS OF STATISTICAL SIGNIFICANCE?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We investigate the use of confidence intervals and standard error intervals to draw conclusions regarding tests of hypotheses about normal population means. Mathematical expressions and algebraic manipulations are given, and computer simulations are performed to assess the usefulness of confidence ...
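    The relationship this record investigates can be illustrated with a standard counterexample: two 95% confidence intervals can overlap even when the difference between the means is significant at the 5% level. A minimal sketch using invented summary statistics and a normal approximation:

```python
import math

def ci95(mean, se):
    # normal-approximation 95% confidence interval
    return (mean - 1.96 * se, mean + 1.96 * se)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Invented summary statistics for two group means
m1, se1 = 0.0, 1.0
m2, se2 = 3.0, 1.0
z = (m2 - m1) / math.sqrt(se1 ** 2 + se2 ** 2)

print(intervals_overlap(ci95(m1, se1), ci95(m2, se2)))  # True: CIs overlap
print(abs(z) > 1.96)  # True: the difference is nonetheless significant
```

    The intervals (-1.96, 1.96) and (1.04, 4.96) overlap, yet z ≈ 2.12 exceeds 1.96, so checking CI overlap is a conservative (and sometimes misleading) substitute for a two-sample test.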

  9. Five Methodology Errors in Educational Research: The Pantheon of Statistical Significance and Other Faux Pas.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    After presenting a general linear model as a framework for discussion, this paper reviews five methodology errors that occur in educational research: (1) the use of stepwise methods; (2) the failure to consider in result interpretation the context specificity of analytic weights (e.g., regression beta weights, factor pattern coefficients,…

  10. A Proposed New "What If Reliability" Analysis for Assessing the Statistical Significance of Bivariate Relationships.

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Daniel, Larry G.; Roberts, J. Kyle

    The purpose of this paper is to illustrate how displaying disattenuated correlation coefficients along with their unadjusted counterparts will allow the reader to assess the impact of unreliability on each bivariate relationship. The paper also demonstrates how a proposed new "what if reliability" analysis can complement the conventional null…

  11. Assessing Statistical Significance in Microarray Experiments Using the Distance Between Microarrays

    PubMed Central

    Hayden, Douglas; Lazar, Peter; Schoenfeld, David

    2009-01-01

    We propose permutation tests based on the pairwise distances between microarrays to compare location, variability, or equivalence of gene expression between two populations. For these tests the entire microarray or some pre-specified subset of genes is the unit of analysis. The pairwise distances only have to be computed once so the procedure is not computationally intensive despite the high dimensionality of the data. An R software package, permtest, implementing the method is freely available from the Comprehensive R Archive Network at http://cran.r-project.org. PMID:19529777
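    A minimal sketch of a distance-based permutation test in this spirit (pure Python; this is not the authors' permtest package, and the Euclidean distance, test statistic, and toy data are illustrative):

```python
import itertools
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def permutation_test(samples, labels, n_perm=2000, seed=0):
    """Location test: mean between-group minus mean within-group distance,
    with a null distribution obtained by shuffling the group labels."""
    rng = random.Random(seed)
    # pairwise distances are computed only once
    d = {(i, j): euclidean(samples[i], samples[j])
         for i, j in itertools.combinations(range(len(samples)), 2)}

    def stat(lab):
        between = [v for (i, j), v in d.items() if lab[i] != lab[j]]
        within = [v for (i, j), v in d.items() if lab[i] == lab[j]]
        return sum(between) / len(between) - sum(within) / len(within)

    observed = stat(labels)
    exceed = 0
    for _ in range(n_perm):
        perm = labels[:]
        rng.shuffle(perm)
        if stat(perm) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Two toy "populations" of two-gene expression profiles differing in location
samples = [[0, 0], [0.1, 0], [0, 0.2], [0.1, 0.1],
           [5, 5], [5.1, 5], [5, 5.2], [5.1, 5.1]]
obs, p = permutation_test(samples, [0, 0, 0, 0, 1, 1, 1, 1])
```

    With the whole profile as the unit of analysis, the permutation p-value for these clearly separated toy groups comes out around 0.03, the smallest value attainable with four samples per group up to label symmetry.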

  12. Determining Statistically Significant Deviations from a Model Crater Production Function for Estimating Resurfacing Events

    NASA Astrophysics Data System (ADS)

    Weaver, B. P.; Hilbe, J. M.; Robbins, S. J.; Plesko, C. S.; Riggs, J. D.

    2015-05-01

    Many crater analysts will search for deviations of observed crater population data from model crater populations and treat those deviations as a modification event - usually resurfacing. We will discuss how to assign confidences for these deviations.

  13. Evaluating Video Self-Modeling Treatment Outcomes: Differentiating between Statistically and Clinically Significant Change

    ERIC Educational Resources Information Center

    La Spata, Michelle G.; Carter, Christopher W.; Johnson, Wendi L.; McGill, Ryan J.

    2016-01-01

    The present study examined the utility of video self-modeling (VSM) for reducing externalizing behaviors (e.g., aggression, conduct problems, hyperactivity, and impulsivity) observed within the classroom environment. After identification of relevant target behaviors, VSM interventions were developed for first and second grade students (N = 4),…

  14. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

    ERIC Educational Resources Information Center

    Capraro, Robert M.

    2004-01-01

    With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

  15. Basic statistics in cell biology.

    PubMed

    Vaux, David L

    2014-01-01

    The physicist Ernest Rutherford said, "If your experiment needs statistics, you ought to have done a better experiment." Although this aphorism remains true for much of today's research in cell biology, a basic understanding of statistics can be useful to cell biologists to help in monitoring the conduct of their experiments, in interpreting the results, in presenting them in publications, and when critically evaluating research by others. However, training in statistics is often focused on the sophisticated needs of clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, rather than the practical needs of cell biologists, whose experiments often provide evidence that is not statistical in nature. This review describes some of the basic statistical principles that may be of use to experimental biologists, but it does not cover the sophisticated statistics needed for papers that contain evidence of no other kind. PMID:25000992

  16. Ontologies and tag-statistics

    NASA Astrophysics Data System (ADS)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2012-05-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of

  17. Gaussian statistics for palaeomagnetic vectors

    USGS Publications Warehouse

    Love, J.J.; Constable, C.G.

    2003-01-01

    formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  18. 31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...

  19. 31 CFR 561.328 - Reduce significantly, significantly reduced, and significant reduction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Reduce significantly, significantly reduced, and significant reduction. 561.328 Section 561.328 Money and Finance: Treasury Regulations... IRANIAN FINANCIAL SANCTIONS REGULATIONS General Definitions § 561.328 Reduce significantly,...

  20. NASA Pocket Statistics

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA administrative and organizational information is presented along with summaries of space flight activity including the NASA Major Launch Record, and NASA procurement, financial and manpower data. The Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.

  1. Statistical analysis of high density diffuse optical tomography

    PubMed Central

    Hassanpour, Mahlega S.; White, Brian R.; Eggebrecht, Adam T.; Ferradal, Silvina L.; Snyder, Abraham Z.; Culver, Joseph P.

    2014-01-01

    High density diffuse optical tomography (HD-DOT) is a noninvasive neuroimaging modality with moderate spatial resolution and localization accuracy. Due to portability and wearability advantages, HD-DOT has the potential to be used in populations that are not amenable to functional magnetic resonance imaging (fMRI), such as hospitalized patients and young children. However, whereas the use of event-related stimuli designs, general linear model (GLM) analysis, and imaging statistics are standardized and routine with fMRI, such tools are not yet common practice in HD-DOT. In this paper we adapt and optimize fundamental elements of fMRI analysis for application to HD-DOT. We show the use of event-related protocols and GLM deconvolution analysis in unmixing multi-stimuli event-related HD-DOT data. Statistical parametric mapping (SPM) in the framework of a general linear model is developed considering the temporal and spatial characteristics of HD-DOT data. The statistical analysis utilizes a random field noise model that incorporates estimates of the local temporal and spatial correlations of the GLM residuals. The multiple-comparison problem is addressed using a cluster analysis based on non-stationary Gaussian random field theory. These analysis tools provide access to a wide range of experimental designs necessary for the study of complex brain functions. In addition, they provide a foundation for understanding and interpreting HD-DOT results with quantitative estimates for the statistical significance of detected activation foci. PMID:23732886
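    In finite-impulse-response (FIR) form, GLM deconvolution of event-related data reduces to a lagged design matrix and a least-squares fit. A toy single-channel sketch (the response shape, event count, and noise level are invented, and this is not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 400, 8                          # time points, FIR lags
onsets = np.zeros(T)
onsets[rng.choice(T - L, size=20, replace=False)] = 1.0
true_response = np.array([0.0, 0.4, 1.0, 0.8, 0.4, 0.1, 0.0, 0.0])
y = np.convolve(onsets, true_response)[:T] + rng.normal(0, 0.1, T)

# FIR design matrix: one regressor per post-stimulus lag
X = np.column_stack([np.roll(onsets, k) for k in range(L)])
for k in range(L):
    X[:k, k] = 0.0                     # discard wrap-around from np.roll
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    The fitted betas recover the event-related response at each lag without assuming a canonical response shape, which is the sense in which the GLM "de-convolves" overlapping events.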

  2. Statistical measures for workload capacity analysis.

    PubMed

    Houpt, Joseph W; Townsend, James T

    2012-10-01

    A critical component of how we understand a mental process is given by measuring the effect of varying the workload. The capacity coefficient (Townsend & Nozawa, 1995; Townsend & Wenger, 2004) is a measure on response times for quantifying changes in performance due to workload. Despite its precise mathematical foundation, until now rigorous statistical tests have been lacking. In this paper, we demonstrate statistical properties of the components of the capacity measure and propose a significance test for comparing the capacity coefficient to a baseline measure or two capacity coefficients to each other. PMID:23175582

  3. Genetic interactions contribute less than additive effects to quantitative trait variation in yeast

    PubMed Central

    Bloom, Joshua S.; Kotenko, Iulia; Sadhu, Meru J.; Treusch, Sebastian; Albert, Frank W.; Kruglyak, Leonid

    2015-01-01

    Genetic mapping studies of quantitative traits typically focus on detecting loci that contribute additively to trait variation. Genetic interactions are often proposed as a contributing factor to trait variation, but the relative contribution of interactions to trait variation is a subject of debate. Here we use a very large cross between two yeast strains to accurately estimate the fraction of phenotypic variance due to pairwise QTL–QTL interactions for 20 quantitative traits. We find that this fraction is 9% on average, substantially less than the contribution of additive QTL (43%). Statistically significant QTL–QTL pairs typically have small individual effect sizes, but collectively explain 40% of the pairwise interaction variance. We show that pairwise interaction variance is largely explained by pairs of loci at least one of which has a significant additive effect. These results refine our understanding of the genetic architecture of quantitative traits and help guide future mapping studies. PMID:26537231
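    The additive-versus-interaction variance decomposition can be sketched with a toy two-locus model (the effect sizes, sample size, and noise are invented; nested least-squares fits give the incremental variance explained by the pairwise interaction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
g1 = rng.integers(0, 2, n).astype(float)    # biallelic locus 1 (0/1)
g2 = rng.integers(0, 2, n).astype(float)    # biallelic locus 2 (0/1)
# hypothetical architecture: strong additive effects, weak epistasis
y = 1.0 * g1 + 0.8 * g2 + 0.4 * g1 * g2 + rng.normal(0, 1, n)

def r_squared(columns, y):
    X = np.column_stack([np.ones(len(y))] + columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

additive = r_squared([g1, g2], y)             # additive model
full = r_squared([g1, g2, g1 * g2], y)        # additive + interaction
interaction_share = full - additive
```

    Even with a sizeable interaction coefficient, the incremental variance explained by the QTL-QTL term is small relative to the additive share, echoing the paper's finding that interactions contribute much less than additive effects.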

  4. A Statistical investigation of sloshing parameters for multiphase offshore separators

    NASA Astrophysics Data System (ADS)

    Mahmud, Md; Khan, Rafiqul; Xu, Qiang

    Liquid sloshing in multiphase offshore separators has been the subject of intense investigation for the last several decades, both by experiments and by simulations. Many scientists have worked to minimize sloshing impacts/intensity, and others have developed new methods to describe sloshing patterns. In addition, complex mathematical models have been developed to characterize the sloshing phenomenon. However, a comprehensive statistical study of the input parameters and output results has not yet been carried out. In this study, a statistical approach is used to determine the significant parameters for liquid sloshing. Factor analysis and principal component analysis techniques are used to identify the significant parameters for liquid sloshing. Numerical experiments are carried out with the computational fluid dynamics (CFD) technique using ANSYS Fluent software. The input parameters considered here are the liquid depth/tank length ratio, tank acceleration, and wave frequencies and amplitudes in various sea-state conditions. The measured variables include hydrodynamic force, pressure, moments, turbulent kinetic energy, height of the free surface, and vorticity. Mathematical correlations may be developed from the data analysis. Doctoral Candidate, Dept. of Chemical Engineering, Lamar University, Beaumont, TX 77710.

  5. A Statistical investigation of sloshing parameters for multiphase offshore separators

    NASA Astrophysics Data System (ADS)

    Mahmud, Md; Khan, Rafiqul; Xu, Qiang

    Liquid sloshing in multiphase offshore separators has been the subject of intense investigation for the last several decades, both by experiments and by simulations. Many scientists have worked to minimize sloshing impacts, and others have developed new methods to describe sloshing patterns. In addition, complex mathematical models have been developed to characterize the sloshing phenomenon. However, a comprehensive statistical study of the input parameters and output results has yet to be done. In this study, statistical approaches are used to determine the significant parameters for liquid sloshing. Factor analysis and principal component analysis techniques are used to identify the significant parameters for liquid sloshing. Numerical experiments are carried out with the computational fluid dynamics (CFD) technique using ANSYS Fluent software. The input parameters considered here are the liquid depth/length ratio, acceleration, and wave frequencies and amplitudes in various sea-state conditions. The measured variables include hydrodynamic force, pressure, moments, turbulent kinetic energy, and the heights of interfaces. Mathematical correlations may be developed from the data analysis. Graduate Student, Dept. of Chemical Engineering, Lamar University, Beaumont, TX 77710.
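    Principal component analysis of such input/output data reduces to an eigendecomposition of the correlation matrix. A toy sketch (the parameter names, ranges, and response model are invented, not the study's CFD results):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# hypothetical sloshing inputs (names and ranges are illustrative)
depth_ratio = rng.uniform(0.2, 0.8, n)   # liquid depth / tank length
accel = rng.uniform(0.5, 2.0, n)         # tank acceleration
freq = rng.uniform(0.1, 1.0, n)          # wave frequency
# toy response: force driven mostly by acceleration and frequency
force = 3.0 * accel + 2.0 * freq + 0.1 * depth_ratio + rng.normal(0, 0.2, n)

X = np.column_stack([depth_ratio, accel, freq, force])
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize variables
eigval, eigvec = np.linalg.eigh(Z.T @ Z / n)  # PCA of the correlation matrix
order = np.argsort(eigval)[::-1]
explained = eigval[order] / eigval.sum()      # variance share per component
loadings = eigvec[:, order]                   # parameter weights per component
```

    Ranking parameters by their loadings on the leading components is one way to flag the inputs that matter most; here the first component captures the acceleration-frequency-force cluster while depth ratio loads on its own component.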

  6. Neutron Characterization for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Watkins, Thomas; Bilheux, Hassina; An, Ke; Payzant, Andrew; DeHoff, Ryan; Duty, Chad; Peter, William; Blue, Craig; Brice, Craig A.

    2013-01-01

    Oak Ridge National Laboratory (ORNL) is leveraging decades of experience in neutron characterization of advanced materials together with resources such as the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) shown in Fig. 1 to solve challenging problems in additive manufacturing (AM). Additive manufacturing, or three-dimensional (3-D) printing, is a rapidly maturing technology wherein components are built by selectively adding feedstock material at locations specified by a computer model. The majority of these technologies use thermally driven phase change mechanisms to convert the feedstock into functioning material. As the molten material cools and solidifies, the component is subjected to significant thermal gradients, generating significant internal stresses throughout the part (Fig. 2). As layers are added, inherent residual stresses cause warping and distortions that lead to geometrical differences between the final part and the original computer generated design. This effect also limits geometries that can be fabricated using AM, such as thin-walled, high-aspect-ratio, and overhanging structures. Distortion may be minimized by intelligent toolpath planning or strategic placement of support structures, but these approaches are not well understood and often "Edisonian" in nature. Residual stresses can also impact component performance during operation. For example, in a thermally cycled environment such as a high-pressure turbine engine, residual stresses can cause components to distort unpredictably. Different thermal treatments on as-fabricated AM components have been used to minimize residual stress, but components still retain a nonhomogeneous stress state and/or demonstrate a relaxation-derived geometric distortion. Industry, federal laboratory, and university collaboration is needed to address these challenges and enable the U.S. to compete in the global market. Work is currently being conducted on AM technologies at the ORNL

  7. Significance testing as perverse probabilistic reasoning

    PubMed Central

    2011-01-01

    Truth claims in the medical literature rely heavily on statistical significance testing. Unfortunately, most physicians misunderstand the underlying probabilistic logic of significance tests and consequently often misinterpret their results. This near-universal misunderstanding is highlighted by means of a simple quiz which we administered to 246 physicians at two major academic hospitals, on which the proportion of incorrect responses exceeded 90%. A solid understanding of the fundamental concepts of probability theory is becoming essential to the rational interpretation of medical information. This essay provides a technically sound review of these concepts that is accessible to a medical audience. We also briefly review the debate in the cognitive sciences regarding physicians' aptitude for probabilistic inference. PMID:21356064
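The probabilistic point at issue can be made concrete with a small Bayesian calculation (the numbers below are illustrative assumptions, not data from the quiz): the probability that a "significant" result reflects a real effect depends on the prior plausibility of the hypothesis and the power of the study, not on the p-value alone.

```python
# Positive predictive value of a "significant" result via Bayes' rule.
# prior, power, and alpha here are hypothetical illustration values.
def ppv_of_significance(prior, power=0.8, alpha=0.05):
    """P(real effect | p < alpha), given P(real effect) = prior."""
    true_pos = prior * power          # real effects correctly detected
    false_pos = (1 - prior) * alpha   # true nulls wrongly "significant"
    return true_pos / (true_pos + false_pos)

# With only 1 in 10 tested hypotheses true, a significant result is
# correct far less often than the common "95%" intuition suggests.
print(round(ppv_of_significance(prior=0.10), 3))  # → 0.64
```

The same calculation shows why the posterior probability rises with prior plausibility: for `prior=0.5` the value is about 0.94.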

  8. Quantifying clinically significant change: a brief review of methods and presentation of a hybrid approach.

    PubMed

    Mann, Barton J; Gosens, Taco; Lyman, Stephen

    2012-10-01

    Treatment outcome researchers in orthopaedics frequently report only tests of statistical significance between group means to evaluate the effectiveness of a given intervention. Although important in establishing that mean differences are not caused by chance, these methods do not reflect the extent to which an intervention produces improvements that are meaningful and represent a return to health. This is an issue that is often of great interest to patients and clinicians. Other methods use a percentage change in an outcome measure (eg, 25% reduction in pain score) to classify treatment responders but often do not indicate whether the treatment restored a patient to normal. Researchers have developed several indices that provide a metric for statistically defining the amount of change that patients consider to be important. In this article, we focus on the concept of "clinical significance" and the different methods that have been developed to define clinically significant change using statistics. We then present a hybrid method that can classify whether a patient has returned to normal function. We apply this method to real patient data to illustrate its use with different outcome instruments commonly used in orthopaedic sports medicine. We advocate that the addition of these methods to reports from clinical outcome studies can deepen our understanding of the impact of interventions on patients' lives. PMID:22962295
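One widely used family of such indices combines a Jacobson-Truax-style reliable change index with a normative cutoff for "return to normal". The sketch below uses the standard textbook formulas with hypothetical thresholds; the hybrid method presented in the article may differ in its details.

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """RCI = (post - pre) / SE_diff, with SE_diff = sqrt(2) * SEM."""
    sem = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * sem
    return (post - pre) / se_diff

def cutoff_c(mean_clin, sd_clin, mean_norm, sd_norm):
    """Cutoff C: the point between clinical and normative distributions."""
    return (sd_clin * mean_norm + sd_norm * mean_clin) / (sd_clin + sd_norm)

def clinically_significant(pre, post, sd_pre, reliability, cutoff):
    """Reliably improved (RCI > 1.96) AND crossed into the normal range."""
    improved = reliable_change_index(pre, post, sd_pre, reliability) > 1.96
    recovered = post > cutoff  # assumes higher scores = better function
    return improved and recovered
```

A patient moving from 40 to 70 on a 0-100 scale (SD 10, reliability 0.9) would be classified as both reliably changed and recovered under these assumed norms.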

  9. Clinical significance of lymphadenectomy in patients with gastric cancer.

    PubMed

    Tóth, Dezső; Plósz, János; Török, Miklós

    2016-02-15

    Approximately thirty percent of patients with gastric cancer undergo an avoidable lymph node dissection with a higher rate of postoperative complication. Comparing the D1 and D2 dissections, a significant difference in morbidity was found favouring D1 dissection, without any difference in overall survival. Subgroup analysis of patients with T3 tumors shows a survival difference favouring D2 lymphadenectomy, and there is a lower rate of gastric cancer-related death and a non-statistically significant improvement in survival for node-positive disease in patients with D2 dissection. However, the extended lymphadenectomy could improve stage-specific survival owing to the stage migration phenomenon. The deployment of centralization and application of national guidelines could improve the surgical outcomes. The Japanese and European guidelines endorse the D2 lymphadenectomy as the gold standard in R0 resection. In individualized, stage-adapted gastric cancer surgery, the Maruyama computer program (MCP) can estimate lymph node involvement preoperatively with high accuracy, and in addition a Maruyama Index of less than 5 has a better impact on survival than D-level guided surgery. For these reasons, the preoperative application of MCP is recommended routinely, with an aim to perform "low Maruyama Index surgery". The sentinel lymph node biopsy (SNB) may decrease the number of redundant lymphadenectomies intraoperatively, with a high detection rate (93.7%) and an accuracy of 92%. More accurate stage-adapted surgery could be performed using the MCP and SNB in parallel in gastric cancer. PMID:26909128

  10. A spatial scan statistic for multinomial data

    PubMed Central

    Jung, Inkyung; Kulldorff, Martin; Richard, Otukei John

    2014-01-01

    As a geographical cluster detection analysis tool, the spatial scan statistic has been developed for different types of data such as Bernoulli, Poisson, ordinal, exponential and normal. Another interesting data type is multinomial. For example, one may want to find clusters where the disease-type distribution is statistically significantly different from the rest of the study region when there are different types of disease. In this paper, we propose a spatial scan statistic for such data, which is useful for geographical cluster detection analysis for categorical data without any intrinsic order information. The proposed method is applied to meningitis data consisting of five different disease categories to identify areas with distinct disease-type patterns in two counties in the U.K. The performance of the method is evaluated through a simulation study. PMID:20680984

  11. Statistical process control in nursing research.

    PubMed

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. PMID:22095634
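The distinction between common cause and special cause variation can be sketched with an individuals (X) control chart. This is a minimal illustration assuming the usual mean ± 3 sigma limits estimated from a baseline period; operational SPC uses moving-range-based limits and additional run rules.

```python
def control_limits(baseline):
    """Lower and upper control limits from a baseline period."""
    n = len(baseline)
    mean = sum(baseline) / n
    sd = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def special_cause_points(series, baseline):
    """Indices of points beyond the control limits: candidate special causes."""
    lo, hi = control_limits(baseline)
    return [i for i, x in enumerate(series) if x < lo or x > hi]
```

Points within the limits are treated as common cause variation; a point outside them after an intervention is the signal an SPC analysis looks for.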

  12. Efforts to improve international migration statistics: a historical perspective.

    PubMed

    Kraly, E P; Gnanasekaran, K S

    1987-01-01

    During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a 3-pronged effort from the international statistical community. 1st, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. 2nd, the countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. 3rd, the call for statistical research in this area requires more efforts by the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities. PMID:12280924

  13. 31 CFR 561.404 - Significant transaction or transactions; significant financial services; significant financial...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Significant transaction or transactions; significant financial services; significant financial transaction. 561.404 Section 561.404 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) OFFICE OF FOREIGN ASSETS CONTROL, DEPARTMENT OF THE TREASURY...

  14. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809
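For contrast with the hierarchical approach, the baseline fusion step being generalized can be sketched as simple majority voting across raters (this is the naive scheme that treats all labels and raters equally, not the paper's statistical fusion model):

```python
from collections import Counter

def majority_vote(labels_per_rater):
    """Fuse segmentations by per-voxel majority vote.

    labels_per_rater: list of label lists, one per rater, same voxel order.
    Hierarchical statistical fusion replaces this uniform vote with
    performance-weighted estimates per rater and per label group.
    """
    n_voxels = len(labels_per_rater[0])
    fused = []
    for v in range(n_voxels):
        votes = Counter(rater[v] for rater in labels_per_rater)
        fused.append(votes.most_common(1)[0][0])
    return fused
```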

  15. Statistics without Tears: Complex Statistics with Simple Arithmetic

    ERIC Educational Resources Information Center

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…
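Classical additive decomposition really does need only simple arithmetic: a centered moving average for the trend, and period-position averages of the detrended values for the seasonal component. A minimal sketch for an odd seasonal period (illustrative, not tied to the article's examples):

```python
def decompose_additive(series, period):
    """Classical additive decomposition: series ≈ trend + seasonal.

    Trend via centered moving average (assumes odd period);
    seasonal via averages of detrended values at each period offset.
    """
    n = len(series)
    half = period // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = series[i - half:i + half + 1]
        trend[i] = sum(window) / len(window)
    detrended = [series[i] - trend[i] for i in range(n) if trend[i] is not None]
    offsets = [i % period for i in range(n) if trend[i] is not None]
    seasonal = {}
    for k in range(period):
        vals = [d for d, o in zip(detrended, offsets) if o == k]
        seasonal[k] = sum(vals) / len(vals) if vals else 0.0
    return trend, seasonal
```

For a linear trend with a repeating +0/+1/-1 seasonal pattern, the moving average recovers the line exactly and the offsets recover the pattern.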

  16. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by

  17. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the...
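The comparison this appendix describes can be sketched as computing a t-statistic and checking it against a tabulated critical value. The pooled-variance two-sample form below is one common version; the regulation's exact procedure, replicate handling, and significance level should be taken from the CFR text itself.

```python
import math

def t_statistic(background, monitoring):
    """Pooled-variance two-sample t for monitoring mean vs. background mean."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    n1, n2 = len(background), len(monitoring)
    sp2 = ((n1 - 1) * var(background) + (n2 - 1) * var(monitoring)) / (n1 + n2 - 2)
    return (mean(monitoring) - mean(background)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# A calculated t exceeding the tabulated critical value (e.g. ~1.812 for
# df = 10 at the 0.05 one-tailed level) indicates a significant change.
```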

  18. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the...

  19. Effects of bulking agent addition on odorous compounds emissions during composting of OFMSW.

    PubMed

    Shao, Li-Ming; Zhang, Chun-Yan; Wu, Duo; Lü, Fan; Li, Tian-Shui; He, Pin-Jing

    2014-08-01

    The effects of rice straw addition level on odorous compounds emissions in a pilot-scale organic fraction of municipal solid waste (OFMSW) composting plant were investigated. The cumulative odorous compounds emissions occurred in a descending order of 40.22, 28.71 and 27.83 mg/dry kg of OFMSW for piles with rice straw addition at ratios of 1:10, 2:10 and 3:10 (mixing ratio of rice straw to OFMSW on a wet basis), respectively. The mixing ratio of rice straw to OFMSW had a statistically significant effect on the reduction of malodorous sulfur compound emissions, but no statistically significant effect on the emissions of VFAs, alcohols, aldehydes, ketones, aromatics and ammonia during composting. The cumulative emissions of malodorous sulfur compounds from piles with the increasing rice straw addition level were 1.17, 1.08 and 0.88 mg/dry kg of OFMSW, respectively. The optimal mixing ratio of rice straw to OFMSW was 1:5. Using this addition level, the cumulative malodorous sulfur compounds emissions based on the organic matter degradation were the lowest during composting of OFMSW. PMID:24820662

  20. Nitrogen as a friendly addition to steel

    SciTech Connect

    Rawers, J.C.

    2006-01-01

    Interstitial alloying with nitrogen or carbon is a common means of enhancing properties of iron-based alloys. Interstitial nitrogen addition to fcc-phase Fe-Cr-Mn/Ni alloys results in improved mechanical properties, whereas addition of carbon can result in the formation of unwanted carbides. Carbon addition to low alloy, bcc-phase iron alloys significantly improves strength through the formation of carbides, whereas addition of nitrogen in bcc-phase iron alloys can result in porous casting and reduced mechanical properties. This study will show that alloying iron-based alloys with both nitrogen and carbon can produce positive results. Nitrogen addition to Fe-C and Fe-Cr-C alloys, and both nitrogen and nitrogen-carbon additions to Fe-Cr-Mn/Ni alloys altered the microstructure, improved mechanical properties, increased hardness, and reduced wear by stabilizing the fcc-phase and altering (possibly eliminating) precipitate formation.

  1. Basic statistics for clinicians: 1. Hypothesis testing.

    PubMed Central

    Guyatt, G; Jaeschke, R; Heddle, N; Cook, D; Shannon, H; Walter, S

    1995-01-01

    In the first of a series of four articles the authors explain the statistical concepts of hypothesis testing and p values. In many clinical trials investigators test a null hypothesis that there is no difference between a new treatment and a placebo or between two treatments. The result of a single experiment will almost always show some difference between the experimental and the control groups. Is the difference due to chance, or is it large enough to reject the null hypothesis and conclude that there is a true difference in treatment effects? Statistical tests yield a p value: the probability that the experiment would show a difference as great or greater than that observed if the null hypothesis were true. By convention, p values of less than 0.05 are considered statistically significant, and investigators conclude that there is a real difference. However, the smaller the sample size, the greater the chance of erroneously concluding that the experimental treatment does not differ from the control--in statistical terms, the power of the test may be inadequate. Tests of several outcomes from one set of data may lead to an erroneous conclusion that an outcome is significant if the joint probability of the outcomes is not taken into account. Hypothesis testing has limitations, which will be discussed in the next article in the series. PMID:7804919
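The multiple-outcomes caveat at the end of this abstract can be quantified directly: if several independent outcomes are each tested at the 0.05 level when every null hypothesis is true, the chance of at least one spurious "significant" finding is well above 0.05. A minimal sketch:

```python
def family_wise_error_rate(alpha, n_outcomes):
    """P(at least one 'significant' result | all nulls true),
    assuming n independent tests each at level alpha."""
    return 1 - (1 - alpha) ** n_outcomes

# Five independent outcomes at the conventional 0.05 level:
print(family_wise_error_rate(0.05, 5))  # ≈ 0.226
```

This is why a conclusion of significance for one outcome among many is unsound unless the joint probability of the outcomes is taken into account.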

  2. Statistical mechanics of community detection

    NASA Astrophysics Data System (ADS)

    Reichardt, Jörg; Bornholdt, Stefan

    2006-07-01

    Starting from a general ansatz, we show how community detection can be interpreted as finding the ground state of an infinite range spin glass. Our approach applies to weighted and directed networks alike. It contains the ad hoc introduced quality function from [J. Reichardt and S. Bornholdt, Phys. Rev. Lett. 93, 218701 (2004)] and the modularity Q as defined by Newman and Girvan [Phys. Rev. E 69, 026113 (2004)] as special cases. The community structure of the network is interpreted as the spin configuration that minimizes the energy of the spin glass with the spin states being the community indices. We elucidate the properties of the ground state configuration to give a concise definition of communities as cohesive subgroups in networks that is adaptive to the specific class of network under study. Further, we show how hierarchies and overlap in the community structure can be detected. Computationally efficient local update rules for optimization procedures to find the ground state are given. We show how the ansatz may be used to discover the community around a given node without detecting all communities in the full network and we give benchmarks for the performance of this extension. Finally, we give expectation values for the modularity of random graphs, which can be used in the assessment of statistical significance of community structure.
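The modularity Q referred to here can be computed directly for a small undirected, unweighted graph. The sketch below is a minimal implementation of the Newman-Girvan definition, the special case that the spin-glass ansatz contains; it is quadratic in the number of nodes and meant only for illustration.

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m)
    over pairs i, j in the same community (undirected, no self-loops)."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    nodes = list(degree)
    q = 0.0
    for i in nodes:
        for j in nodes:
            if community[i] != community[j]:
                continue
            a_ij = sum(1 for u, v in edges if {u, v} == {i, j})
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)
```

For two triangles joined by a single edge, splitting the graph into the two triangles gives Q = 5/14 ≈ 0.357, while placing all nodes in one community gives Q = 0.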

  3. Eigenfunction statistics on quantum graphs

    SciTech Connect

    Gnutzmann, S.; Keating, J.P.; Piotet, F.

    2010-12-15

    We investigate the spatial statistics of the energy eigenfunctions on large quantum graphs. It has previously been conjectured that these should be described by a Gaussian Random Wave Model, by analogy with quantum chaotic systems, for which such a model was proposed by Berry in 1977. The autocorrelation functions we calculate for an individual quantum graph exhibit a universal component, which completely determines a Gaussian Random Wave Model, and a system-dependent deviation. This deviation depends on the graph only through its underlying classical dynamics. Classical criteria for quantum universality to be met asymptotically in the large graph limit (i.e. for the non-universal deviation to vanish) are then extracted. We use an exact field theoretic expression in terms of a variant of a supersymmetric σ model. A saddle-point analysis of this expression leads to the estimates. In particular, intensity correlations are used to discuss the possible equidistribution of the energy eigenfunctions in the large graph limit. When equidistribution is asymptotically realized, our theory predicts a rate of convergence that is a significant refinement of previous estimates. The universal and system-dependent components of intensity correlation functions are recovered by means of an exact trace formula which we analyse in the diagonal approximation, drawing in this way a parallel between the field theory and semiclassics. Our results provide the first instance where an asymptotic Gaussian Random Wave Model has been established microscopically for eigenfunctions in a system with no disorder.

  4. Statistical mechanics of community detection.

    PubMed

    Reichardt, Jörg; Bornholdt, Stefan

    2006-07-01

    Starting from a general ansatz, we show how community detection can be interpreted as finding the ground state of an infinite range spin glass. Our approach applies to weighted and directed networks alike. It contains the ad hoc introduced quality function from [J. Reichardt and S. Bornholdt, Phys. Rev. Lett. 93, 218701 (2004)] and the modularity Q as defined by Newman and Girvan [Phys. Rev. E 69, 026113 (2004)] as special cases. The community structure of the network is interpreted as the spin configuration that minimizes the energy of the spin glass with the spin states being the community indices. We elucidate the properties of the ground state configuration to give a concise definition of communities as cohesive subgroups in networks that is adaptive to the specific class of network under study. Further, we show how hierarchies and overlap in the community structure can be detected. Computationally efficient local update rules for optimization procedures to find the ground state are given. We show how the ansatz may be used to discover the community around a given node without detecting all communities in the full network and we give benchmarks for the performance of this extension. Finally, we give expectation values for the modularity of random graphs, which can be used in the assessment of statistical significance of community structure. PMID:16907154

  5. Algorithms for Detecting Significantly Mutated Pathways in Cancer

    NASA Astrophysics Data System (ADS)

    Vandin, Fabio; Upfal, Eli; Raphael, Benjamin J.

    Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common approach is to assess whether known pathways are enriched for mutated genes. However, restricting attention to known pathways will not reveal novel cancer genes or pathways. An alternative strategy is to examine mutated genes in the context of genome-scale interaction networks that include both well characterized pathways and additional gene interactions measured through various approaches. We introduce a computational framework for de novo identification of subnetworks in a large gene interaction network that are mutated in a significant number of patients. This framework includes two major features. First, we introduce a diffusion process on the interaction network to define a local neighborhood of "influence" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using mutation data from two recent studies: glioblastoma samples from The Cancer Genome Atlas and lung adenocarcinoma samples from the Tumor Sequencing Project. We successfully recover pathways that are known to be important in these cancers, such as the p53 pathway. We also identify additional pathways, such as the Notch signaling pathway, that have been implicated in other cancers but not previously reported as mutated in these samples. Our approach is the first, to our knowledge, to demonstrate a computationally efficient strategy for de novo identification of statistically significant mutated subnetworks. We
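The "influence neighborhood" idea can be illustrated with a random walk with restart on a toy network; note that the restart parameter and this particular formulation are illustrative assumptions, not the paper's exact diffusion process.

```python
def diffusion_scores(adj, source, restart=0.3, iters=200):
    """Random-walk-with-restart scores from `source` over adjacency dict `adj`.

    Higher scores mark nodes in the source gene's local neighborhood
    of influence; the walk restarts at the source with probability `restart`.
    """
    nodes = list(adj)
    p = {v: 1.0 if v == source else 0.0 for v in nodes}
    for _ in range(iters):
        new = {v: restart * (1.0 if v == source else 0.0) for v in nodes}
        for u in nodes:
            for v in adj[u]:
                new[v] += (1 - restart) * p[u] / len(adj[u])
        p = new
    return p
```

On a path graph a-b-c, the scores decay with distance from the source, which is the locality property the framework exploits.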

  6. Equations for estimating selected streamflow statistics in Rhode Island

    USGS Publications Warehouse

    Bent, Gardner C.; Steeves, Peter A.; Waite, Andrew M.

    2014-01-01

    Regional regression equations were developed for estimating selected natural—unaffected by alteration—streamflows of specific flow durations and low-flow frequency statistics for ungaged stream sites in Rhode Island. Selected at-site streamflow statistics are provided for 41 long-term streamgages, 21 short-term streamgages, and 135 partial-record stations in Rhode Island, eastern Connecticut, and southeastern and south-central Massachusetts. The regression equations for estimating selected streamflow statistics and the at-site statistics estimated for each of the 197 sites may be used by Federal, State, and local water managers in addressing water issues in and near Rhode Island. Multiple and simple linear regression equations were developed to estimate the 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 15-, 10-, 5-, 2-, and 1-percent flow durations and the 7Q2 (7-day, 2-year) and 7Q10 (7-day, 10-year) low-flow-frequency statistics. An additional 49 selected statistics, for which regression equations were not developed, also were estimated for the long- and short-term streamgages and partial-record stations for flow durations between the 99.99 and 0.01 percent and for the mean annual, mean monthly, and median monthly streamflows. A total of 70 selected streamflow statistics were estimated for 41 long-term streamgages, 21 short-term streamgages, and 135 partial-record stations in and near Rhode Island. Estimates of the long-term streamflow statistics for the 21 short-term streamgages and 135 partial-record stations were developed by the Maintenance of Variance Extension, type 1 (MOVE.1), record-extension technique. The equations used to estimate selected streamflow statistics were developed by relating the 19 flow-duration and 2 low-flow-frequency statistics to 31 different basin characteristics (physical, land-cover, and climatic) at the 41 long-term and 19 of 21 short-term streamgages (a total of 60 streamgages) in and near Rhode Island
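The MOVE.1 record-extension technique named above transfers values from a long-record index station while preserving the variance of the short record, unlike an ordinary least-squares fit, whose predictions have reduced variance. A minimal sketch of the standard MOVE.1 estimate (the report's implementation details may differ):

```python
import math

def move1(x, y, x_new):
    """MOVE.1 estimate: y_hat = y_bar ± (s_y / s_x) * (x_new - x_bar).

    x, y: concurrent observations at index and short-record stations;
    x_new: index-station value for the period to be estimated.
    The slope sign follows the sign of the x-y correlation.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    r_sign = 1.0 if sum((a - mx) * (b - my) for a, b in zip(x, y)) >= 0 else -1.0
    return my + r_sign * (sy / sx) * (x_new - mx)
```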

  7. New statistical downscaling for Canada

    NASA Astrophysics Data System (ADS)

    Murdock, T. Q.; Cannon, A. J.; Sobie, S.

    2013-12-01

    This poster will document the production of a set of statistically downscaled future climate projections for Canada based on the latest available RCM and GCM simulations - the North American Regional Climate Change Assessment Program (NARCCAP; Mearns et al. 2007) and the Coupled Model Intercomparison Project Phase 5 (CMIP5). The main stages of the project included (1) downscaling method evaluation, (2) scenarios selection, (3) production of statistically downscaled results, and (4) applications of results. We build upon a previous downscaling evaluation project (Bürger et al. 2012, Bürger et al. 2013) in which a quantile-based method (Bias Correction/Spatial Disaggregation - BCSD; Werner 2011) provided high skill compared with four other methods representing the majority of types of downscaling used in Canada. Additional quantile-based methods (Bias-Correction/Constructed Analogues; Maurer et al. 2010 and Bias-Correction/Climate Imprint; Hunter and Meentemeyer 2005) were evaluated. A subset of 12 CMIP5 simulations was chosen based on an objective set of selection criteria. This included hemispheric skill assessment based on the CLIMDEX indices (Sillmann et al. 2013), historical criteria used previously at the Pacific Climate Impacts Consortium (Werner 2011), and refinement based on a modified clustering algorithm (Houle et al. 2012; Katsavounidis et al. 1994). Statistical downscaling was carried out on the NARCCAP ensemble and a subset of the CMIP5 ensemble. We produced downscaled scenarios over Canada at a daily time resolution and 300 arc second (~10 km) spatial resolution from historical runs for 1951-2005 and from RCP 2.6, 4.5, and 8.5 projections for 2006-2100. The ANUSPLIN gridded daily dataset (McKenney et al. 2011) was used as a target. It has national coverage, spans the historical period of interest 1951-2005, and has daily time resolution. It uses interpolation of station data based on thin-plate splines. This type of method has been shown to have
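The quantile-based correction that these BCSD-style methods share can be sketched empirically: a model value is replaced by the observed value at the same empirical quantile of the historical climatologies. This is a simplified illustration; operational implementations interpolate between quantiles and handle distribution tails more carefully.

```python
def quantile_map(model_hist, obs_hist, model_value):
    """Empirical quantile mapping: map a model value to the observed
    value at the same quantile of the historical distributions."""
    m_sorted = sorted(model_hist)
    o_sorted = sorted(obs_hist)
    # fraction of the model climatology at or below the value
    frac = sum(1 for v in m_sorted if v <= model_value) / len(m_sorted)
    # observed value at the corresponding (clamped) rank
    idx = min(len(o_sorted) - 1, max(0, round(frac * len(o_sorted)) - 1))
    return o_sorted[idx]
```

If the model climatology runs 0-9 and observations run 10-19, a model value of 4 (the median) maps to the observed median, 14.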

  8. An empirical study on the relationship between teacher's judgments and fit statistics of the partial credit model.

    PubMed

    Baek, Sun-Geun; Kim, Hye-Sook

    2009-01-01

    The main purpose of the study was to investigate empirically the relationship between classroom teacher's judgment and the item and person fit-statistics of the partial credit model. In this study, classroom teacher's judgments were made intuitively checking each item's consistency with the general response pattern and each student's need for additional treatment or advice. The item and person fit statistics of the partial credit model were estimated using the WINSTEPS program (Linacre, 2003). The subjects of this study were 321 sixth grade students in 9 classrooms within 3 elementary schools in Seoul, Korea. For this research, a performance assessment test for 6th grade mathematics was developed. It consisted of 20 polytomous response items and its total scores ranged between 0 and 50. In addition, the 9 classroom teachers made their judgments for each item of the test and for each student in their own classroom. They judged intuitively using 4 categories; (1) well fit, (2) fit, (3) misfit, and (4) badly misfit for each item as well as each student. Their judgments were scored from 1 to 4 for each item as well as each student. There are two significant findings in this study. First, there is a statistically significant relationship between the classroom teacher's judgment and item fit statistic for each item (The median correlation coefficient between the teacher's judgment and the item outfit ZSTD is 0.61). Second, there is a statistically significant relationship between the teacher's judgment and the person fit statistic for each student (The median correlation coefficient between the teacher's judgment and the person outfit ZSTD is 0.52). In conclusion, the item and person fit statistics of the partial credit model correspond with the teacher's judgments for each test item and each student. PMID:19299887

  9. Digest of Education Statistics, 2000.

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Hoffman, Charlene M.

    This edition of the "Digest of Education Statistics" is the 36th in a series that provides a compilation of statistical information covering the broad field of U.S. education from kindergarten through graduate school. The Digest includes data from many sources, both government and private, and draws heavily on work done by the National Center for…

  10. Education Statistics Quarterly, Fall 2000.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2000-01-01

    The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released during a 3-month period. Each issue also contains a message from…

  11. Explorations in Statistics: Permutation Methods

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2012-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eighth installment of "Explorations in Statistics" explores permutation methods, empirical procedures we can use to assess an experimental result--to test a null hypothesis--when we are reluctant to trust statistical…
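    The idea behind a permutation method is to compare an observed result against the distribution of results produced by random relabelings of the data. A minimal two-sample sketch (the function name and sample sizes are illustrative):

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of random relabelings whose mean difference
    is at least as extreme as the observed one (an empirical p-value).
    """
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        perm_a = pooled[:len(a)]
        perm_b = pooled[len(a):]
        diff = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm
```

    For small samples the full set of relabelings can be enumerated exactly; random shuffling, as here, approximates that distribution.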

  12. Education Statistics Quarterly, Spring 2001.

    ERIC Educational Resources Information Center

    Education Statistics Quarterly, 2001

    2001-01-01

    The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue also…

  13. SOCR: Statistics Online Computational Resource

    ERIC Educational Resources Information Center

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…

  14. Representational Versatility in Learning Statistics

    ERIC Educational Resources Information Center

    Graham, Alan T.; Thomas, Michael O. J.

    2005-01-01

    Statistical data can be represented in a number of qualitatively different ways, the choice depending on the following three conditions: the concepts to be investigated; the nature of the data; and the purpose for which they were collected. This paper begins by setting out frameworks that describe the nature of statistical thinking in schools, and…

  15. Students' Attitudes toward Statistics (STATS).

    ERIC Educational Resources Information Center

    Sutarso, Toto

    The purposes of this study were to develop an instrument to measure students' attitudes toward statistics (STATS) and to define the underlying dimensions that comprise the STATS. The instrument consists of 24 items. The sample included 79 male and 97 female students from the statistics classes at the College of Education and the College of…

  16. Motivating Play Using Statistical Reasoning

    ERIC Educational Resources Information Center

    Cross Francis, Dionne I.; Hudson, Rick A.; Lee, Mi Yeon; Rapacki, Lauren; Vesperman, Crystal Marie

    2014-01-01

    Statistical literacy is essential in everyone's personal lives as consumers, citizens, and professionals. To make informed life and professional decisions, students are required to read, understand, and interpret vast amounts of information, much of which is quantitative. To develop statistical literacy so students are able to make sense of…

  17. Explorations in Statistics: Confidence Intervals

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
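    The interval described above can be sketched for a sample mean using a normal approximation (the function and data are illustrative; for small samples a t-based interval is more appropriate):

```python
import math
from statistics import NormalDist, mean, stdev

def mean_ci(sample, confidence=0.95):
    """Normal-approximation confidence interval for the population mean."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # critical value, ~1.96 for 95%
    m = mean(sample)
    half_width = z * stdev(sample) / math.sqrt(len(sample))
    return m - half_width, m + half_width
```

    The interpretation is the one the abstract emphasizes: across repeated samples, intervals built this way are expected to cover the true mean at the stated confidence level; any single interval either covers it or does not.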

  18. Statistical Factors in Complexation Reactions.

    ERIC Educational Resources Information Center

    Chung, Chung-Sun

    1985-01-01

    Four cases which illustrate statistical factors in complexation reactions (where two of the reactants are monodentate ligands) are presented. Included are tables showing statistical factors for the reactions of: (1) square-planar complexes; (2) tetrahedral complexes; and (3) octahedral complexes. (JN)

  19. Statistical Methods in Psychology Journals.

    ERIC Educational Resources Information Center

    Wilkinson, Leland

    1999-01-01

    Proposes guidelines for revising the American Psychological Association (APA) publication manual or other APA materials to clarify the application of statistics in research reports. The guidelines are intended to induce authors and editors to recognize the thoughtless application of statistical methods. Contains 54 references. (SLD)

  20. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study, and a statistics course, especially at the university level, is no exception. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used was a questionnaire adopted and adapted from the reliable Survey of Attitudes towards Statistics (SATS©) instrument. The study was conducted on engineering undergraduate students at a university on the East Coast of Malaysia. The respondents were students from different faculties who were taking the applied statistics course. The results are analysed descriptively and contribute to an understanding of students' attitudes towards the teaching and learning of statistics.