Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Statistical or biological significance?
Saxon, Emma
2015-01-01
Oat plants grown at an agricultural research facility produce higher yields in Field 1 than in Field 2, under well fertilised conditions and with similar weather exposure; all oat plants in both fields are healthy and show no sign of disease. In this study, the authors hypothesised that the soil microbial community might be different in each field, and these differences might explain the difference in oat plant growth. They carried out a metagenomic analysis of the 16S ribosomal 'signature' sequences from bacteria in 50 randomly located soil samples in each field to determine the composition of the bacterial community. The study identified >1000 species, most of which were present in both fields. The authors identified two plant growth-promoting species that were significantly reduced in soil from Field 2 (Student's t-test P < 0.05), and concluded that these species might have contributed to reduced yield. PMID:26541972
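The per-species comparison in this abstract is an ordinary two-sample Student's t-test. A minimal sketch in Python, with invented abundance numbers; the field means and the 1.98 critical value for df = 98 are illustrative assumptions, not the study's data:

```python
import math
import random

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic (pooled variance)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

random.seed(1)
# Hypothetical relative abundances (%) of one growth-promoting species
field1 = [random.gauss(4.0, 1.0) for _ in range(50)]
field2 = [random.gauss(3.2, 1.0) for _ in range(50)]  # reduced in Field 2

t = students_t(field1, field2)
# For df = 98, the two-sided 5% critical value is about 1.98
print(t, abs(t) > 1.98)
```

With 50 samples per field the test has df = 98, so |t| > 1.98 corresponds to P < 0.05 two-sided.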
Statistically significant relational data mining :
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.
2014-02-01
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Statistical significance of the gallium anomaly
Giunti, Carlo; Laveder, Marco
2011-06-15
We calculate the statistical significance of the anomalous deficit of electron neutrinos measured in the radioactive source experiments of the GALLEX and SAGE solar neutrino detectors, taking into account the uncertainty of the detection cross section. We found that the statistical significance of the anomaly is ≈3.0σ. A fit of the data in terms of neutrino oscillations favors short-baseline electron neutrino disappearance at ≈2.7σ with respect to the null hypothesis of no oscillations.
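For readers more used to p-values than to σ levels: an n-sigma significance corresponds to the two-sided tail probability of a standard normal variable. A quick conversion (a standard identity, not specific to this paper):

```python
import math

def two_sided_p_from_sigma(n_sigma):
    """Two-sided p-value for an n-sigma deviation of a normal variable."""
    return math.erfc(n_sigma / math.sqrt(2))

print(two_sided_p_from_sigma(3.0))  # ~0.0027, i.e. the anomaly's ~3.0 sigma
```

So the paper's ≈3.0σ anomaly corresponds to a two-sided p-value of roughly 0.003.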
The insignificance of statistical significance testing
Johnson, Douglas H.
1999-01-01
Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
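Johnson's preferred alternative, estimation with confidence intervals, is easy to illustrate. A minimal sketch using a normal approximation; the "clutch size" data below are invented for illustration:

```python
import math

def mean_ci(values, z=1.96):
    """Approximate 95% confidence interval for a mean (normal approximation)."""
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((x - m) ** 2 for x in values) / (n - 1))  # sample sd
    half = z * s / math.sqrt(n)
    return m - half, m + half

# Hypothetical paired clutch sizes at two sites: the interval for the mean
# difference conveys magnitude and uncertainty, not just "significant or not".
site_a = [8, 9, 7, 10, 9, 8, 9, 10]
site_b = [7, 8, 7, 9, 8, 7, 8, 8]
lo, hi = mean_ci([a - b for a, b in zip(site_a, site_b)])
print(f"mean difference CI: ({lo:.2f}, {hi:.2f})")
```

An interval of, say, (0.6, 1.4) eggs tells the reader how large the effect plausibly is, which a bare P-value does not.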
Statistical Significance vs. Practical Significance: An Exploration through Health Education
ERIC Educational Resources Information Center
Rosen, Brittany L.; DeMaria, Andrea L.
2012-01-01
The purpose of this paper is to examine the differences between statistical and practical significance, including strengths and criticisms of both methods, as well as provide information surrounding the application of various effect sizes and confidence intervals within health education research. Provided are recommendations, explanations and…
Understanding Statistical Significance: A Conceptual History.
ERIC Educational Resources Information Center
Little, Joseph
2001-01-01
Considers how if literacy is envisioned as a sort of competence in a set of social and intellectual practices, then scientific literacy must encompass the realization that "statistical significance," the cardinal arbiter of social scientific knowledge, was not born out of an immanent logic of mathematics but socially constructed and reconstructed…
Determining the Statistical Significance of Relative Weights
ERIC Educational Resources Information Center
Tonidandel, Scott; LeBreton, James M.; Johnson, Jeff W.
2009-01-01
Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson…
Comments on the Statistical Significance Testing Articles.
ERIC Educational Resources Information Center
Knapp, Thomas R.
1998-01-01
Expresses a "middle-of-the-road" position on statistical significance testing, suggesting that it has its place but that confidence intervals are generally more useful. Identifies 10 errors of omission or commission in the papers reviewed that weaken the positions taken in their discussions. (SLD)
Social significance of community structure: Statistical view
NASA Astrophysics Data System (ADS)
Li, Hui-Jia; Daniels, Jasmine J.
2015-01-01
Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.
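The common-neighbor similarity at the heart of this framework is simple to state. A toy sketch, taking the highest-degree node as the leader; this illustrates the idea only, not the authors' full algorithm:

```python
def leader_and_similarity(adj):
    """Pick the highest-degree node as leader, then score every node by its
    number of common neighbors with that leader."""
    leader = max(adj, key=lambda v: len(adj[v]))
    return leader, {v: len(adj[v] & adj[leader]) for v in adj}

# Tiny undirected graph as a dict of neighbor sets
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
leader, sim = leader_and_similarity(adj)
print(leader, sim)  # leader "a"; "b" and "c" each share one neighbor with it
```

Nodes with many common neighbors with the leader are pulled into its community; the paper then scores the resulting partition with a log-likelihood tightness measure.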
Statistical Significance of Clustering using Soft Thresholding
Huang, Hanwen; Liu, Yufeng; Yuan, Ming; Marron, J. S.
2015-01-01
Clustering methods have led to a number of important discoveries in bioinformatics and beyond. A major challenge in their use is determining which clusters represent important underlying structure, as opposed to spurious sampling artifacts. This challenge is especially serious, and very few methods are available, when the data are very high in dimension. Statistical Significance of Clustering (SigClust) is a recently developed cluster evaluation tool for high dimensional low sample size data. An important component of the SigClust approach is the very definition of a single cluster as a subset of data sampled from a multivariate Gaussian distribution. The implementation of SigClust requires the estimation of the eigenvalues of the covariance matrix for the null multivariate Gaussian distribution. We show that the original eigenvalue estimation can lead to a test that suffers from severe inflation of type-I error, in the important case where there are a few very large eigenvalues. This paper addresses this critical challenge using a novel likelihood based soft thresholding approach to estimate these eigenvalues, which leads to a much improved SigClust. Major improvements in SigClust performance are shown by both mathematical analysis, based on the new notion of Theoretical Cluster Index, and extensive simulation studies. Applications to some cancer genomic data further demonstrate the usefulness of these improvements. PMID:26755893
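The eigenvalue fix can be pictured with a much simpler stand-in for the paper's likelihood-based estimator: shrink each sample eigenvalue by a threshold, but never below the background noise variance. The function name and thresholding rule here are illustrative assumptions, not the SigClust implementation:

```python
def soft_threshold_eigenvalues(eigvals, sig2_bg, tau):
    """Soft-threshold sample eigenvalues: subtract tau, but floor the result
    at the background noise variance sig2_bg."""
    return [max(lam - tau, sig2_bg) for lam in eigvals]

# A spectrum with a few very large eigenvalues -- the regime where the
# original SigClust eigenvalue estimates inflated type-I error.
print(soft_threshold_eigenvalues([40.0, 10.0, 1.2, 0.8, 0.5],
                                 sig2_bg=1.0, tau=2.0))
# -> [38.0, 8.0, 1.0, 1.0, 1.0]
```

Large signal eigenvalues are shrunk slightly while the noise tail is flattened to the background level, which is the qualitative behavior the paper's estimator aims for.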
Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance
Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik
2016-01-01
Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children’s growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children’s monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children’s growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children’s growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children’s growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance. PMID:26938742
The Use of Meta-Analytic Statistical Significance Testing
ERIC Educational Resources Information Center
Polanin, Joshua R.; Pigott, Terri D.
2015-01-01
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
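The simplest multiplicity correction the abstract alludes to is Bonferroni: with m tests in one review, compare each p-value to α/m so that the family-wise Type I error stays at or below α. A sketch:

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject H0_i iff p_i <= alpha/m; controls the family-wise error rate."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

# Five significance tests within one meta-analysis: at alpha/m = 0.01,
# only the first survives correction.
print(bonferroni_reject([0.001, 0.02, 0.03, 0.04, 0.2]))
# -> [True, False, False, False, False]
```

The cost, as the authors note, is statistical power: results significant at the nominal 0.05 level no longer reach the corrected threshold.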
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
ERIC Educational Resources Information Center
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Advances in Testing the Statistical Significance of Mediation Effects
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.
2006-01-01
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
The questioned p value: clinical, practical and statistical significance.
Jiménez-Paneque, Rosa
2016-01-01
The use of the p-value and statistical significance has been questioned from the early 1980s to the present day. Much has been discussed about it in the field of statistics and its applications, especially in epidemiology and public health. As a matter of fact, the p-value and its equivalent, statistical significance, are difficult concepts to grasp for the many health professionals who are in some way involved in research applied to their work areas. However, its meaning should be clear in intuitive terms even though it is based on theoretical concepts of the field of statistics. This paper attempts to present the p-value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of inherent complexity. The reasons behind the criticism received by the p-value and its isolated use are explained intuitively, mainly the need to demarcate statistical significance from clinical significance, and some of the recommended remedies for these problems are discussed as well. The paper closes with the current trend to vindicate the p-value, appealing to the convenience of its use in certain situations, and the recent statement of the American Statistical Association in this regard. PMID:27636600
Has Testing for Statistical Significance Outlived Its Usefulness?
ERIC Educational Resources Information Center
McLean, James E.; Ernest, James M.
The research methodology literature in recent years has included a full frontal assault on statistical significance testing. An entire edition of "Experimental Education" explored this controversy. The purpose of this paper is to promote the position that while significance testing by itself may be flawed, it has not outlived its usefulness.…
On detection and assessment of statistical significance of Genomic Islands
Chatterjee, Raghunath; Chaudhuri, Keya; Chaudhuri, Probal
2008-01-01
Background Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods. PMID:18380895
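The Monte-Carlo logic behind Design-Island can be sketched generically: score the candidate segment, score many randomly drawn background segments of the same length, and take the exceedance fraction as the p-value. The GC-content statistic and the segments below are invented for illustration:

```python
def monte_carlo_p(stat, segment, null_segments):
    """Monte-Carlo p-value: fraction of background segments scoring at least
    as high as the candidate (with the usual add-one correction)."""
    s_obs = stat(segment)
    hits = sum(stat(seg) >= s_obs for seg in null_segments)
    return (hits + 1) / (len(null_segments) + 1)

def gc(s):
    """GC content of a DNA string."""
    return (s.count("G") + s.count("C")) / len(s)

p = monte_carlo_p(gc, "GCGGCGGC",
                  ["ATATTAGC", "GATTACAA", "GGCCATAT", "ATTATTAA"])
print(p)  # -> 0.2: no background segment matches the candidate's GC content
```

Because the null segments are drawn from the chromosome itself, the resulting p-values rest on an exact permutation-style distribution rather than on an assumed parametric model, which is the rigor the authors emphasize.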
A Comparison of Statistical Significance Tests for Selecting Equating Functions
ERIC Educational Resources Information Center
Moses, Tim
2009-01-01
This study compared the accuracies of nine previously proposed statistical significance tests for selecting identity, linear, and equipercentile equating functions in an equivalent groups equating design. The strategies included likelihood ratio tests for the loglinear models of tests' frequency distributions, regression tests, Kolmogorov-Smirnov…
Assigning statistical significance to proteotypic peptides via database searches
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2011-01-01
Querying MS/MS spectra against a database containing only proteotypic peptides reduces data analysis time due to reduction of database size. Despite the speed advantage, this search strategy is challenged by issues of statistical significance and coverage. The former requires separating systematically significant identifications from less confident identifications, while the latter arises when the underlying peptide is not present, due to single amino acid polymorphisms (SAPs) or post-translational modifications (PTMs), in the proteotypic peptide libraries searched. To address both issues simultaneously, we have extended RAId’s knowledge database to include proteotypic information, utilized RAId’s statistical strategy to assign statistical significance to proteotypic peptides, and modified RAId’s programs to allow for consideration of proteotypic information during database searches. The extended database alleviates the coverage problem since all annotated modifications, even those occurring within proteotypic peptides, may be considered. Taking into account the likelihoods of observation, the statistical strategy of RAId provides accurate E-value assignments regardless of whether a candidate peptide is proteotypic or not. The advantage of including proteotypic information is evidenced by its superior retrieval performance when compared to regular database searches. PMID:21055489
Estimation of the geochemical threshold and its statistical significance
Miesch, A.T.
1981-01-01
A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
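Miesch's adjusted-gap statistic is easy to prototype: standardize the (transformed) values, then weight each gap between adjacent ordered values by the standard normal density at its midpoint. This is a sketch of the idea only; the log transformation and the critical-value lookup from the paper are omitted:

```python
import math

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def max_adjusted_gap(values):
    """Largest gap between adjacent ordered standardized values, weighted by
    the expected normal frequency at the gap's midpoint."""
    n = len(values)
    m = sum(values) / n
    s = math.sqrt(sum((x - m) ** 2 for x in values) / n)
    z = sorted((x - m) / s for x in values)
    best_gap, best_mid = 0.0, None
    for a, b in zip(z, z[1:]):
        gap = (b - a) * normal_pdf((a + b) / 2)  # adjust for expected frequency
        if gap > best_gap:
            best_gap, best_mid = gap, (a + b) / 2
    return best_gap, best_mid

# One clear outlier: the biggest adjusted gap sits between 3 and 10,
# so the midpoint (a candidate threshold) lands above the bulk of the data.
print(max_adjusted_gap([1, 2, 3, 10]))
```

The returned midpoint, mapped back through the transformation, would be the candidate geochemical threshold; significance then requires comparison with the critical values tabulated in the paper.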
Weak additivity principle for current statistics in d dimensions.
Pérez-Espigares, C; Garrido, P L; Hurtado, P I
2016-04-01
The additivity principle (AP) allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Here we extend this conjecture to general d-dimensional driven diffusive systems, and validate its predictions against both numerical simulations of rare events and microscopic exact calculations of three paradigmatic models of diffusive transport in d=2. Crucially, the existence of a structured current vector field at the fluctuating level, coupled to the local mobility, turns out to be essential to understand current statistics in d>1. We prove that, when compared to the straightforward extension of the AP to high d, the so-called weak AP always yields a better minimizer of the macroscopic fluctuation theory action for current statistics.
Addition of Cryoprotectant Significantly Alters the Epididymal Sperm Proteome.
Yoon, Sung-Jae; Rahman, Md Saidur; Kwon, Woo-Sung; Park, Yoo-Jin; Pang, Myung-Geol
2016-01-01
Although cryopreservation has been developed and optimized over the past decades, it causes various stresses, including cold shock, osmotic stress, and ice crystal formation, thereby reducing fertility. During cryopreservation, addition of cryoprotective agent (CPA) is crucial for protecting spermatozoa from freezing damage. However, the intrinsic toxicity and osmotic stress induced by CPA cause damage to spermatozoa. To identify the effects of CPA addition during cryopreservation, we assessed the motility (%), motion kinematics, capacitation status, and viability of epididymal spermatozoa using computer-assisted sperm analysis and Hoechst 33258/chlortetracycline fluorescence staining. Moreover, the effects of CPA addition were also demonstrated at the proteome level using two-dimensional electrophoresis. Our results demonstrated that CPA addition significantly reduced sperm motility (%), curvilinear velocity, viability (%), and non-capacitated spermatozoa, whereas straightness and acrosome-reacted spermatozoa increased significantly (p < 0.05). Ten proteins were differentially expressed (two decreased and eight increased) (>3 fold, p < 0.05) after CPA, whereas NADH dehydrogenase flavoprotein 2, f-actin-capping protein subunit beta, superoxide dismutase 2, and outer dense fiber protein 2 were associated with several important signaling pathways (p < 0.05). The present study provides a mechanistic basis for specific cryostresses and potential markers of CPA-induced stress. Therefore, these might provide information about the development of safe biomaterials for cryopreservation and basic ground for sperm cryopreservation. PMID:27031703
Fostering Students' Statistical Literacy through Significant Learning Experience
ERIC Educational Resources Information Center
Krishnan, Saras
2015-01-01
A major objective of statistics education is to develop students' statistical literacy that enables them to be educated users of data in context. Teaching statistics in today's educational settings is not an easy feat because teachers have a huge task in keeping up with the demands of the new generation of learners. The present day students have…
A Tutorial on Hunting Statistical Significance by Chasing N
Szucs, Denes
2016-01-01
There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticize some typical problems, here I systematically illustrate the large impact of some easy-to-implement, and therefore perhaps frequent, data-dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach ‘hacks’ the number of participants in studies, the second approach ‘hacks’ the number of variables in the analysis. I demonstrate the high amount of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data dredging steps can easily lead to 20–50% or more false positives. PMID:27713723
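The first form of dredging, optional stopping, is easy to simulate on pure noise: test after every batch of participants and stop as soon as p < .05. A sketch, using a z-test with known unit variance so no distribution tables are needed; the batch sizes are arbitrary choices:

```python
import random

def optional_stopping_trial(rng, n_start=10, n_max=100, step=10, z_crit=1.96):
    """One simulated study on pure noise (true null): check significance after
    every batch of participants and stop at the first p < .05."""
    data = [rng.gauss(0, 1) for _ in range(n_start)]
    while True:
        n = len(data)
        z = abs(sum(data) / n) * n ** 0.5  # z test for mean 0, sd known = 1
        if z > z_crit:
            return True   # "significant" -- a false positive by construction
        if n >= n_max:
            return False
        data.extend(rng.gauss(0, 1) for _ in range(step))

rng = random.Random(7)
trials = 2000
fp = sum(optional_stopping_trial(rng) for _ in range(trials)) / trials
print(f"false positive rate with optional stopping: {fp:.3f}")  # well above .05
```

With ten looks at the data instead of one, the realized Type I error rate lands far above the nominal 5%, which is exactly the inflation the tutorial demonstrates.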
Assessing statistical significance in multivariable genome wide association analysis
Buzdugan, Laura; Kalisch, Markus; Navarro, Arcadi; Schunk, Daniel; Fehr, Ernst; Bühlmann, Peter
2016-01-01
Motivation: Although Genome Wide Association Studies (GWAS) genotype a very large number of single nucleotide polymorphisms (SNPs), the data are often analyzed one SNP at a time. The low predictive power of single SNPs, coupled with the high significance threshold needed to correct for multiple testing, greatly decreases the power of GWAS. Results: We propose a procedure in which all the SNPs are analyzed in a multiple generalized linear model, and we show its use for extremely high-dimensional datasets. Our method yields P-values for assessing significance of single SNPs or groups of SNPs while controlling for all other SNPs and the family wise error rate (FWER). Thus, our method tests whether or not a SNP carries any additional information about the phenotype beyond that available by all the other SNPs. This rules out spurious correlations between phenotypes and SNPs that can arise from marginal methods because the ‘spuriously correlated’ SNP merely happens to be correlated with the ‘truly causal’ SNP. In addition, the method offers a data driven approach to identifying and refining groups of SNPs that jointly contain informative signals about the phenotype. We demonstrate the value of our method by applying it to the seven diseases analyzed by the Wellcome Trust Case Control Consortium (WTCCC). We show, in particular, that our method is also capable of finding significant SNPs that were not identified in the original WTCCC study, but were replicated in other independent studies. Availability and implementation: Reproducibility of our research is supported by the open-source Bioconductor package hierGWAS. Contact: peter.buehlmann@stat.math.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153677
Statistical controversies in clinical research: statistical significance-too much of a good thing ….
Buyse, M; Hurvitz, S A; Andre, F; Jiang, Z; Burris, H A; Toi, M; Eiermann, W; Lindsay, M-A; Slamon, D
2016-05-01
The use and interpretation of P values is a matter of debate in applied research. We argue that P values are useful as a pragmatic guide to interpret the results of a clinical trial, not as a strict binary boundary that separates real treatment effects from lack thereof. We illustrate our point using the result of BOLERO-1, a randomized, double-blind trial evaluating the efficacy and safety of adding everolimus to trastuzumab and paclitaxel as first-line therapy for HER2+ advanced breast cancer. In this trial, the benefit of everolimus was seen only in the predefined subset of patients with hormone receptor-negative breast cancer at baseline (progression-free survival hazard ratio = 0.66, P = 0.0049). A strict interpretation of this finding, based on complex 'alpha splitting' rules to assess statistical significance, led to the conclusion that the benefit of everolimus was not statistically significant either overall or in the subset. We contend that this interpretation does not do justice to the data, and we argue that the benefit of everolimus in hormone receptor-negative breast cancer is both statistically compelling and clinically relevant. PMID:26861602
Statistically significant data base of rock properties for geothermal use
NASA Astrophysics Data System (ADS)
Koch, A.; Jorand, R.; Clauser, C.
2009-04-01
The high risk of failure due to the unknown properties of the target rocks at depth is a major obstacle for the exploration of geothermal energy. In general, the ranges of thermal and hydraulic properties given in compilations of rock properties are too large to be useful to constrain properties at a specific site. To overcome this problem, we study the thermal and hydraulic rock properties of the main rock types in Germany in a statistical approach. An important aspect is the use of data from exploration wells that are largely untapped for the purpose of geothermal exploration. In the current project stage, we have been analyzing mostly Devonian and Carboniferous drill cores from 20 deep boreholes in the region of the Lower Rhine Embayment and the Ruhr area (western North Rhine Westphalia). In total, we selected 230 core samples with a length of up to 30 cm from the core archive of the State Geological Survey. The use of core scanning technology allowed the rapid measurement of thermal conductivity, sonic velocity, and gamma density under dry and water saturated conditions with high resolution for a large number of samples. In addition, we measured porosity, bulk density, and matrix density based on Archimedes' principle and pycnometer analysis. As first results we present arithmetic means, medians and standard deviations characterizing the petrophysical properties and their variability for specific lithostratigraphic units. Bi- and multimodal frequency distributions correspond to the occurrence of different lithologies such as shale, limestone, dolomite, sandstone, siltstone, marlstone, and quartz-schist. In a next step, the data set will be combined with logging data and complementary mineralogical analyses to derive the variation of thermal conductivity with depth. As a final result, this may be used to infer thermal conductivity for boreholes without appropriate core data which were drilled in similar geological settings.
Shukla, R.; Yu Daohai; Fulk, F.
1995-12-31
Short-term toxicity tests with aquatic organisms are a valuable measurement tool in the assessment of the toxicity of effluents, environmental samples and single chemicals. Currently, toxicity tests are utilized in a wide range of US EPA regulatory activities including effluent discharge compliance. In the current approach for determining the No Observed Effect Concentration, an effluent concentration is presumed safe if there is no statistically significant difference in toxicant response versus control response. The conclusion of a safe concentration may be due to the fact that it truly is safe, or alternatively, that the ability of the statistical test to detect an effect, given its existence, is inadequate. Results of research on a new statistical approach, the basis of which is to move away from a demonstration of no difference to a demonstration of equivalence, will be discussed. The concept of observed confidence distributions, first suggested by Cox, is proposed as a measure of the strength of evidence for practically equivalent responses between a given effluent concentration and the control. The research included determination of intervals of practically equivalent responses as a function of the variability of control response. The approach is illustrated using reproductive data from tests with Ceriodaphnia dubia and survival and growth data from tests with fathead minnow. The data are from the US EPA's National Reference Toxicant Database.
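The shift from "no significant difference" to demonstrated equivalence can be sketched with a TOST-style interval check: declare practical equivalence only when the confidence interval for the difference lies wholly inside prespecified margins. This is a simplified stand-in (normal approximation, invented responses), not the observed-confidence-distribution method the abstract proposes:

```python
import math
import statistics

def equivalent(control, treated, margin, z=1.645):
    """Declare practical equivalence if the 90% CI for the mean
    difference lies entirely inside [-margin, +margin] (TOST logic)."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / len(control)
                   + statistics.variance(treated) / len(treated))
    lo, hi = diff - z * se, diff + z * se
    return -margin < lo and hi < margin

# Hypothetical offspring counts per organism at control vs. one concentration.
control = [20, 22, 19, 21, 20, 23, 21, 20]
treated = [21, 20, 22, 19, 21, 20, 22, 21]
print(equivalent(control, treated, margin=3.0))   # wide margin: equivalent
print(equivalent(control, treated, margin=0.5))   # tight margin: not shown
```

Note the asymmetry with the usual test: failing to show a difference is not evidence of equivalence, but a CI inside the margins is.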
Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?
NASA Astrophysics Data System (ADS)
Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui
2015-08-01
Artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the applications of the feed forward back propagation using large-scale predictor variables derived from both the ERA-Interim reanalyses data and present day/future GCM data. The predictors are first selected over different grid boxes surrounding Bangkok region and then screened by using principal component analysis (PCA) to filter the best correlated predictors for ANN training. The reanalyses downscaled results of the present day climate show good agreement against station precipitation with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of precipitation for rainy season over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases of wetness. These findings will be useful for policy makers in pondering adaptation measures due to flooding such as whether the current drainage network system is sufficient to meet the changing climate and to plan for a range of related adaptation/mitigation measures.
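The two skill scores quoted for the downscaled rainfall (correlation 0.8, Nash-Sutcliffe efficiency 0.65) are simple to compute. A sketch of the Nash-Sutcliffe efficiency on made-up observed and simulated values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance about the mean.
    1.0 is a perfect match; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Illustrative rainfall values only, not the Bangkok station data.
obs = [10.0, 12.0, 8.0, 15.0, 11.0]
sim = [11.0, 11.0, 9.0, 14.0, 10.0]
print(round(nse(obs, sim), 3))  # -> 0.813
```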
Stork, LeAnna M.; Gennings, Chris; Carchman, Richard; Carter, Jr., Walter H.; Pounds, Joel G.; Mumtaz, Moiz
2006-12-01
Several assumptions, defined and undefined, are used in the toxicity assessment of chemical mixtures. In scientific practice mixture components in the low-dose region, particularly subthreshold doses, are often assumed to behave additively (i.e., zero interaction) based on heuristic arguments. This assumption has important implications in the practice of risk assessment, but has not been experimentally tested. We have developed methodology to test for additivity in the sense of Berenbaum (Advances in Cancer Research, 1981), based on the statistical equivalence testing literature where the null hypothesis of interaction is rejected for the alternative hypothesis of additivity when data support the claim. The implication of this approach is that conclusions of additivity are made with a false positive rate controlled by the experimenter. The claim of additivity is based on prespecified additivity margins, which are chosen using expert biological judgment such that small deviations from additivity, which are not considered to be biologically important, are not statistically significant. This approach is in contrast to the usual hypothesis-testing framework that assumes additivity in the null hypothesis and rejects when there is significant evidence of interaction. In this scenario, failure to reject may be due to lack of statistical power making the claim of additivity problematic. The proposed method is illustrated in a mixture of five organophosphorus pesticides that were experimentally evaluated alone and at relevant mixing ratios. Motor activity was assessed in adult male rats following acute exposure. Four low-dose mixture groups were evaluated. Evidence of additivity is found in three of the four low-dose mixture groups. The proposed method tests for additivity of the whole mixture and does not take into account subset interactions (e.g., synergistic, antagonistic) that may have occurred and cancelled each other out.
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
ERIC Educational Resources Information Center
Breunig, Nancy A.
Despite the increasing criticism of statistical significance testing by researchers, particularly following the publication of the 1994 American Psychological Association style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
ERIC Educational Resources Information Center
Monterde-i-Bort, Hector; Frias-Navarro, Dolores; Pascual-Llobell, Juan
2010-01-01
The empirical study we present here deals with a pedagogical issue that has not been thoroughly explored up until now in our field. Previous empirical studies in other sectors have identified the opinions of researchers about this topic, showing that completely unacceptable interpretations have been made of significance tests and other statistical…
ERIC Educational Resources Information Center
Huston, Holly L.
This paper begins with a general discussion of statistical significance, effect size, and power analysis; and concludes by extending the discussion to the multivariate case (MANOVA). Historically, traditional statistical significance testing has guided researchers' thinking about the meaningfulness of their data. The use of significance testing…
Assessing Genome-Wide Statistical Significance for Large p Small n Problems
Diao, Guoqing; Vidyashankar, Anand N.
2013-01-01
Assessing genome-wide statistical significance is an important issue in genetic studies. We describe a new resampling approach for determining the appropriate thresholds for statistical significance. Our simulation results demonstrate that the proposed approach accurately controls the genome-wide type I error rate even under the large p small n situations. PMID:23666935
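A generic resampling threshold in the spirit of this abstract (not the authors' exact procedure) can be sketched as follows: permute the phenotype, record the maximum statistic over all markers each time, and take an upper quantile of these maxima as the genome-wide threshold. Genotypes, phenotypes, and the mean-difference statistic are toy stand-ins:

```python
import random

random.seed(0)

def max_stat(genotypes, phenotype):
    """Largest absolute mean difference between genotype groups, over markers."""
    stats = []
    for marker in genotypes:
        g1 = [y for g, y in zip(marker, phenotype) if g == 1]
        g0 = [y for g, y in zip(marker, phenotype) if g == 0]
        stats.append(abs(sum(g1) / len(g1) - sum(g0) / len(g0)))
    return max(stats)

def genomewide_threshold(genotypes, phenotype, n_perm=200, alpha=0.05):
    null = []
    for _ in range(n_perm):
        shuffled = phenotype[:]
        random.shuffle(shuffled)          # break genotype-phenotype link
        null.append(max_stat(genotypes, shuffled))
    null.sort()
    return null[int((1 - alpha) * n_perm) - 1]   # upper alpha quantile of maxima

genotypes = [[0, 1] * 10, [0, 0, 1, 1] * 5, [1, 0] * 10]   # toy 0/1 markers
phenotype = [0.5, 1.9, 0.3, 2.2, 0.8, 2.4, 0.7, 1.8, 0.2, 2.0,
             0.6, 2.1, 0.4, 1.7, 0.9, 2.3, 0.1, 2.5, 0.5, 1.6]
print(max_stat(genotypes, phenotype) > genomewide_threshold(genotypes, phenotype))
```

Because the maximum is taken over all markers jointly, the threshold controls the genome-wide (family-wise) type I error rather than the per-marker rate.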
ERIC Educational Resources Information Center
Norris, John M.
2015-01-01
Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…
A Review of Post-1994 Literature on Whether Statistical Significance Tests Should Be Banned.
ERIC Educational Resources Information Center
Sullivan, Jeremy R.
This paper summarizes the literature regarding statistical significance testing with an emphasis on: (1) the post-1994 literature in various disciplines; (2) alternatives to statistical significance testing; and (3) literature exploring why researchers have demonstrably failed to be influenced by the 1994 American Psychological Association…
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
ERIC Educational Resources Information Center
Ozturk, Elif
2012-01-01
The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
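A minimal "what if" analysis in the spirit described here: hold the effect size fixed, vary the sample size, and watch the p-value respond. The normal approximation and the effect size of 0.2 are illustrative assumptions only:

```python
import math

def p_value_for(effect_size, n):
    """Two-sided p-value for a one-sample z = d * sqrt(n) (normal approx.)."""
    z = effect_size * math.sqrt(n)
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

# Identical effect size; only n changes, and "significance" flips.
for n in (10, 50, 200, 1000):
    print(n, round(p_value_for(0.2, n), 4))
```

The same d = 0.2 is "non-significant" at n = 10 and overwhelmingly "significant" at n = 1000, which is the lesson such analyses are meant to teach.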
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza
2014-01-01
This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
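Two of the three distance statistics named in the abstract can be sketched for normalized summary histograms; the counts below are invented and the bootstrap significance step is omitted for brevity:

```python
import math

def normalize(h):
    total = sum(h)
    return [x / total for x in h]

def euclidean(h1, h2):
    """Euclidean distance between two normalized histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def kuiper(h1, h2):
    """Kuiper distance: max positive plus max negative CDF difference."""
    diff, c1, c2 = [], 0.0, 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        diff.append(c1 - c2)
    return max(max(diff), 0.0) + max(max(-d for d in diff), 0.0)

# Toy bin counts standing in for two summary histograms.
h1 = normalize([5, 10, 20, 10, 5])
h2 = normalize([2, 8, 25, 10, 5])
print(round(euclidean(h1, h2), 4), round(kuiper(h1, h2), 4))
```

In the bootstrap step of the paper, such a distance would be recomputed on resampled histograms to build a null distribution for the observed value.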
Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P
2013-01-01
We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas. PMID:24109865
Coulson, Melissa; Healey, Michelle; Fidler, Fiona; Cumming, Geoff
2010-01-01
A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST. PMID:21607077
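The contrast the authors describe can be reproduced numerically. The means and standard errors below are invented; the point is only that one study crosses p = .05 and the other does not, while their 95% confidence intervals overlap almost entirely:

```python
def ci95(mean_diff, se):
    """95% confidence interval for a mean difference (normal approximation)."""
    return (mean_diff - 1.96 * se, mean_diff + 1.96 * se)

study_a = ci95(4.0, 1.9)   # z = 2.11, p ~ .035 -> "statistically significant"
study_b = ci95(3.6, 2.0)   # z = 1.80, p ~ .072 -> "non-significant"
print([round(x, 2) for x in study_a], [round(x, 2) for x in study_b])

# The intervals overlap heavily: the two studies are consistent, even
# though their significance statuses differ.
overlap = study_a[0] < study_b[1] and study_b[0] < study_a[1]
print(overlap)  # -> True
```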
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1996-04-01
A simple approach to long-range forecasting of monthly or seasonal quantities is to use the average of observations over some number of the most recent years. Finding this `optimal climate normal' (OCN) involves examining the relationships between the observed variable and averages of its values over the previous one to 30 years and selecting the averaging period yielding the best results. This procedure involves a multiplicity of comparisons, which will lead to misleadingly positive results for the developmental data. The statistical significance of these OCNs is assessed here using a resampling procedure, in which time series of U.S. Climate Division data are repeatedly shuffled to produce statistical distributions of forecast performance measures, under the null hypothesis that the OCNs exhibit no predictive skill. Substantial areas in the United States are found for which forecast performance appears to be significantly better than would occur by chance. Another complication in the assessment of the statistical significance of the OCNs derives from the spatial correlation exhibited by the data. Because of this correlation, instances of Type I errors (false rejections of local null hypotheses) will tend to occur with spatial coherency and accordingly have the potential to be confused with regions for which there may be real predictability. The `field significance' of the collections of local tests is also assessed here by simultaneously and coherently shuffling the time series for the Climate Divisions. Areas exhibiting significant local tests are large enough to conclude that seasonal OCN temperature forecasts exhibit significant skill over parts of the United States for all seasons except SON, OND, and NDJ, and that seasonal OCN precipitation forecasts are significantly skillful only in the fall. Statistical significance is weaker for monthly than for seasonal OCN temperature forecasts, and the monthly OCN precipitation forecasts do not exhibit significant predictive skill.
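The shuffling idea can be sketched generically, with synthetic data in place of the Climate Division series: permute the observations many times, recompute the skill measure for each shuffle, and locate the actual skill within this null distribution.

```python
import random

random.seed(1)

def skill(forecast, observed):
    """Negative mean squared error, so that larger is better."""
    return -sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed)

observed = [float(x) for x in range(20)]               # synthetic "truth"
forecast = [x + random.gauss(0, 1) for x in observed]  # a genuinely skilful forecast

actual = skill(forecast, observed)
null = []
for _ in range(500):
    shuffled = observed[:]
    random.shuffle(shuffled)                           # destroys any real skill
    null.append(skill(forecast, shuffled))

# One-sided permutation p-value (with the standard +1 correction).
p = (1 + sum(s >= actual for s in null)) / (1 + len(null))
print(p < 0.05)
```

The paper's field-significance step shuffles all series coherently, so spatial correlation is preserved in the null; this sketch handles only a single series.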
Weighing the costs of different errors when determining statistical significance during monitoring
Technology Transfer Automated Retrieval System (TEKTRAN)
Selecting appropriate significance levels when constructing confidence intervals and performing statistical analyses with rangeland monitoring data is not a straightforward process. This process is burdened by the conventional selection of “95% confidence” (i.e., Type I error rate, α = 0.05) as the d...
ERIC Educational Resources Information Center
Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.
2011-01-01
In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
ERIC Educational Resources Information Center
Spinella, Sarah
2011-01-01
As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
ERIC Educational Resources Information Center
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate unless "corrected" effect…
Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.
ERIC Educational Resources Information Center
Deegear, James
This paper summarizes the literature regarding statistical significance testing with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…
Statistical Significance of the Trends in Monthly Heavy Precipitation Over the US
Mahajan, Salil; North, Dr. Gerald R.; Saravanan, Dr. R.; Genton, Dr. Marc G.
2012-01-01
Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong.
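The non-parametric Kendall trend test mentioned here can be sketched in its Mann-Kendall form (normal approximation, no tie correction); the upward-trending series is invented:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test. Returns (S, z, two-sided p), where S counts
    concordant minus discordant pairs; ties are not corrected for."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return s, z, p

upward = [1.0, 1.3, 1.2, 1.8, 1.9, 2.4, 2.2, 2.9, 3.1, 3.5]
s, z, p = mann_kendall(upward)
print(s, round(p, 4))
```

The bootstrap approaches in the paper instead resample the series to build the null distribution empirically; the point of comparison is that all three give similar answers.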
ERIC Educational Resources Information Center
Snyder, Patricia; Lawson, Stephen
Magnitude of effect measures (MEMs), when adequately understood and correctly used, are important aids for researchers who do not want to rely solely on tests of statistical significance in substantive result interpretation. The MEM tells how much of the dependent variable can be controlled, predicted, or explained by the independent variables.…
Reflections on Statistical and Substantive Significance, with a Slice of Replication.
ERIC Educational Resources Information Center
Robinson, Daniel H.; Levin, Joel R.
1997-01-01
Proposes modifications to the recent suggestions by B. Thompson (1996) for an American Educational Research Association editorial policy on statistical significance testing. Points out that, although it is useful to include effect sizes, they can be misinterpreted, and argues, as does Thompson, for greater attention to replication in educational…
Lonie, John M; Rahim, Hamid
2010-12-01
The objective of this study was to determine if the addition of a reflective writing component in a fourth year (P-2) pharmacy communication skills course would significantly affect 2 measures of learning: (1) objective multiple choice examination questions and (2) a patient counseling Objective Structured Clinical Examination (OSCE) score. Using a nonequivalent group quasi-experimental retrospective comparison design, 98 randomly selected final examination scores from students taking a non-writing intensive (NWI) communication skills course were compared with 112 randomly selected final examination scores from students that took a communication skills course in which students engaged in several reflective writing assignments. In addition, 91 randomly selected patient counseling OSCE scores from a NWI course were statistically compared with 112 scores from students that took the writing intensive (WI) course. There were statistically significant improvements in multiple choice examination scores in the group that took the reflective writing communication skills course. There was not a statistically significant difference in patient counseling OSCE scores after students completed the WI course. Studying the effects of using reflective writing assignments in communication skills courses may improve the retention and retrieval of information presented within the course.
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Rudd, James; Moore, Jason H; Urbanowicz, Ryan J
2013-11-01
Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear. PMID:24358057
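The external parallelization strategy, i.e., running many independent permutation replicates concurrently and pooling the null statistics, can be sketched as below. A thread-backed pool stands in for process-based workers for portability, and the data and mean-difference statistic are toy stand-ins for LCS runs:

```python
import random
from multiprocessing.dummy import Pool  # thread-backed stand-in; swap in
                                        # multiprocessing.Pool for real CPU cores

# Hypothetical per-sample statistics for cases and controls.
cases = [1.2, 1.5, 1.7, 1.9, 2.1, 2.3]
controls = [0.9, 1.0, 1.1, 1.2, 1.3, 1.4]

def mean_diff(xs, ys):
    return sum(xs) / len(xs) - sum(ys) / len(ys)

def one_replicate(seed):
    """One independent permutation replicate (what each worker would run)."""
    rng = random.Random(seed)            # private RNG keeps replicates independent
    pooled = cases + controls
    rng.shuffle(pooled)
    return mean_diff(pooled[:len(cases)], pooled[len(cases):])

observed = mean_diff(cases, controls)
with Pool(4) as pool:                    # 4 concurrent workers
    null = pool.map(one_replicate, range(1000))
p = (1 + sum(abs(d) >= abs(observed) for d in null)) / (1 + len(null))
print(round(observed, 3), p < 0.05)
```

As the abstract notes, speedup is roughly linear only while the number of concurrent workers does not exceed the number of CPU cores.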
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-01-01
We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064
Martinez-Val, Ana; Garcia, Fernando; Ximénez-Embún, Pilar; Ibarz, Nuria; Zarzuela, Eduardo; Ruppen, Isabel; Mohammed, Shabaz; Munoz, Javier
2016-09-01
Isobaric labeling is gaining popularity in proteomics due to its multiplexing capacity. However, cofragmentation of co-isolated peptides introduces a bias that undermines its accuracy. Several strategies have been shown to partially and, in some cases, completely solve this issue. However, it is still not clear how ratio compression affects the ability to identify a protein's change of abundance as statistically significant. Here, by using the "two proteomes" approach (E. coli lysates with fixed 2.5 ratios in the presence or absence of human lysates acting as the background interference) and manipulating isolation width values, we were able to model isobaric data with different levels of accuracy and precision in three types of mass spectrometers: LTQ Orbitrap Velos, Impact, and Q Exactive. We determined the influence of these variables on the statistical significance of the distorted ratios and compared them to the ratios measured without impurities. Our results confirm previous findings [1-4] regarding the importance of optimizing acquisition parameters in each instrument in order to minimize interference without compromising precision and identification. We also show that, under these experimental conditions, the inclusion of a second replicate increases statistical sensitivity 2-3-fold and counterbalances to a large extent the issue of ratio compression.
Li, Qingbo; Roxas, Bryan AP
2009-01-01
Background: Many studies have provided algorithms or methods to assess statistical significance in quantitative proteomics when multiple replicates for a protein sample and an LC/MS analysis are available. But confidence is still lacking in using datasets for a biological interpretation without protein sample replicates. Although a fold-change is a conventional threshold that can be used when there are no sample replicates, it does not provide an assessment of statistical significance such as a false discovery rate (FDR), which is an important indicator of the reliability to identify differentially expressed proteins. In this work, we investigate whether differentially expressed proteins can be detected with statistical significance from a pair of unlabeled protein samples without replicates and with only duplicate LC/MS injections per sample. An FDR is used to gauge the statistical significance of the differentially expressed proteins. Results: We have experimented with several parameters to control the FDR, including a fold-change, a statistical test, and a minimum number of permuted significant pairings. Although none of these parameters alone gives a satisfactory control of the FDR, we find that a combination of these parameters provides a very effective means to control the FDR without compromising the sensitivity. The results suggest that it is possible to perform a significance analysis without protein sample replicates. Only duplicate LC/MS injections per sample are needed. We illustrate that differentially expressed proteins can be detected with an FDR between 0 and 15% at a positive rate of 4–16%. The method is evaluated for its sensitivity and specificity by a ROC analysis, and is further validated with a [15N]-labeled internal-standard protein sample and additional unlabeled protein sample replicates. Conclusion: We demonstrate that statistical significance can be inferred without protein sample replicates in label-free quantitative proteomics. The
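For context, the most common FDR-controlling procedure is the Benjamini-Hochberg step-up rule; this generic sketch, with invented p-values, is not the permutation-based FDR control described in the abstract:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Indices of hypotheses rejected while controlling the FDR at alpha.
    Step-up rule: find the largest rank k with p_(k) <= alpha * k / m."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha * rank / m:
            k_max = rank
    return sorted(order[:k_max])

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
print(benjamini_hochberg(p, alpha=0.05))  # -> [0, 1]
```

Unlike Bonferroni, this controls the expected fraction of false discoveries among the rejections rather than the probability of any false rejection, which is why it is the standard choice in high-throughput settings.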
Deriving statistical significance maps for SVM based image classification and group comparisons.
Gaonkar, Bilwaj; Davatzikos, Christos
2012-01-01
Population based pattern analysis and classification for quantifying structural and functional differences between diverse groups has been shown to be a powerful tool for the study of a number of diseases, and is quite commonly used especially in neuroimaging. The alternative to these pattern analysis methods, namely mass univariate methods such as voxel based analysis and all related methods, cannot detect multivariate patterns associated with group differences, and are not particularly suitable for developing individual-based diagnostic and prognostic biomarkers. A commonly used pattern analysis tool is the support vector machine (SVM). Unlike univariate statistical frameworks for morphometry, analytical tools for statistical inference are unavailable for the SVM. In this paper, we show that null distributions ordinarily obtained by permutation tests using SVMs can be analytically approximated from the data. The analytical computation takes a small fraction of the time it takes to do an actual permutation test, thereby rendering it possible to quickly create statistical significance maps derived from SVMs. Such maps are critical for understanding imaging patterns of group differences and interpreting which anatomical regions are important in determining the classifier's decision.
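For context, the ordinary permutation test that the analytic approximation replaces can be sketched as below. To stay dependency-free, the sketch uses a nearest-centroid classifier whose weight map stands in for the SVM weight vector; the data dimensions, effect size, and thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "imaging" data: 40 subjects x 50 voxels, two groups with a weak signal.
X = rng.normal(size=(40, 50))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 1.0          # group difference confined to 5 voxels

def centroid_weights(X, y):
    """Discriminative weight map of a nearest-centroid classifier
    (difference of group means); stands in for an SVM weight vector."""
    return X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)

w_obs = centroid_weights(X, y)

# Permutation null: reshuffle group labels, recompute the weight map.
n_perm = 500
null_w = np.empty((n_perm, X.shape[1]))
for i in range(n_perm):
    null_w[i] = centroid_weights(X, rng.permutation(y))

# Voxel-wise two-sided p-values -> a "significance map" for the weights.
p_map = (np.abs(null_w) >= np.abs(w_obs)).mean(axis=0)
print("signal voxels with p < 0.05:", int((p_map[:5] < 0.05).sum()))
```

The cost the paper targets is apparent: every permutation retrains the classifier, so an analytic approximation of the null distribution saves a factor of n_perm in computation.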
Jefferson, L; Cooper, E; Hewitt, C; Torgerson, T; Cook, L; Tharmanathan, P; Cockayne, S; Torgerson, D
2016-01-01
Objective Time-lag from study completion to publication is a potential source of publication bias in randomised controlled trials. This study sought to update the evidence base by identifying the effect of the statistical significance of research findings on time to publication of trial results. Design Literature searches were carried out in four general medical journals from June 2013 to June 2014 inclusive (BMJ, JAMA, the Lancet and the New England Journal of Medicine). Setting Methodological review of four general medical journals. Participants Original research articles presenting the primary analyses from phase 2, 3 and 4 parallel-group randomised controlled trials were included. Main outcome measures Time from trial completion to publication. Results The median time from trial completion to publication was 431 days (n = 208, interquartile range 278–618). A multivariable adjusted Cox model found no statistically significant difference in time to publication for trials reporting positive or negative results (hazard ratio: 0.86, 95% CI 0.64 to 1.16, p = 0.32). Conclusion In contrast to previous studies, this review did not demonstrate the presence of time-lag bias in time to publication. This may be a result of these articles being published in four high-impact general medical journals that may be more inclined to publish rapidly, whatever the findings. Further research is needed to explore the presence of time-lag bias in lower quality studies and lower impact journals. PMID:27757242
McKay, J Lucas; Welch, Torrence D J; Vidakovic, Brani; Ting, Lena H
2013-01-01
We developed wavelet-based functional ANOVA (wfANOVA) as a novel approach for comparing neurophysiological signals that are functions of time. Temporal resolution is often sacrificed by analyzing such data in large time bins, increasing statistical power by reducing the number of comparisons. We performed ANOVA in the wavelet domain because differences between curves tend to be represented by a few temporally localized wavelets, which we transformed back to the time domain for visualization. We compared wfANOVA and ANOVA performed in the time domain (tANOVA) on both experimental electromyographic (EMG) signals from responses to perturbation during standing balance across changes in peak perturbation acceleration (3 levels) and velocity (4 levels) and on simulated data with known contrasts. In experimental EMG data, wfANOVA revealed the continuous shape and magnitude of significant differences over time without a priori selection of time bins. However, tANOVA revealed only the largest differences at discontinuous time points, resulting in features with later onsets and shorter durations than those identified using wfANOVA (P < 0.02). Furthermore, wfANOVA required significantly fewer (~1/4×; P < 0.015) significant F tests than tANOVA, resulting in post hoc tests with increased power. In simulated EMG data, wfANOVA identified known contrast curves with a high level of precision (r² = 0.94 ± 0.08) and performed better than tANOVA across noise levels (P < 0.01). Therefore, wfANOVA may be useful for revealing differences in the shape and magnitude of neurophysiological signals (e.g., EMG, firing rates) across multiple conditions with both high temporal resolution and high statistical power. PMID:23100136
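A stripped-down version of the wavelet-domain ANOVA idea can be sketched with a one-level Haar transform. This is an illustrative simplification (a single decomposition level, a crude Bonferroni correction, simulated curves), not the authors' wfANOVA implementation:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(8)

# Simulated "EMG" curves: 3 condition groups x 20 trials x 64 time points,
# with a brief burst whose amplitude scales with condition (illustrative).
T = 64
burst = np.exp(-0.5 * ((np.arange(T) - 20) / 3.0) ** 2)
groups = [rng.normal(0, 0.5, (20, T)) + a * burst for a in (0.0, 1.0, 2.0)]

def haar(x):
    """One-level Haar transform along the last axis: (approx, detail)."""
    return (x[..., ::2] + x[..., 1::2]) / 2, (x[..., ::2] - x[..., 1::2]) / 2

def ihaar(a, d):
    """Inverse of haar(): interleave a+d (even) and a-d (odd) samples."""
    out = np.empty(a.shape[:-1] + (2 * a.shape[-1],))
    out[..., ::2], out[..., 1::2] = a + d, a - d
    return out

# ANOVA coefficient-by-coefficient in the wavelet domain.
coeffs = [haar(g) for g in groups]                 # per-group (approx, detail)
sig = []
for k in (0, 1):                                   # approx band, detail band
    p = f_oneway(*[c[k] for c in coeffs]).pvalue   # vectorised over coeffs
    sig.append(p < 0.05 / T)                       # crude Bonferroni cutoff

# Reconstruct the contrast between extreme groups from significant coefficients.
da, dd = [coeffs[2][k].mean(0) - coeffs[0][k].mean(0) for k in (0, 1)]
contrast = ihaar(da * sig[0], dd * sig[1])
print("nonzero contrast samples:", int((contrast != 0).sum()))
```

Because the burst is temporally localized, only a handful of coefficients carry the group difference, so far fewer F tests reach significance than in a sample-by-sample tANOVA, which is the power argument made in the abstract.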
2014-01-01
Background Most work on the topic of activity landscapes has focused on their quantitative description and visual representation, with the aim of aiding navigation of SAR. Recent developments have addressed applications such as quantifying the proportion of activity cliffs, investigating the predictive abilities of activity landscape methods and so on. However, all these publications have worked under the assumption that the activity landscape models are “real” (i.e., statistically significant). Results The current study addresses for the first time, in a quantitative manner, the significance of a landscape or individual cliffs in the landscape. In particular, we question whether the activity landscape derived from observed (experimental) activity data is different from a randomly generated landscape. To address this we used the SALI measure with six different data sets tested against one or more molecular targets. We also assessed the significance of the landscapes for single and multiple representations. Conclusions We find that non-random landscapes are data set and molecular representation dependent. For the data sets and representations used in this work, our results suggest that not all representations lead to non-random landscapes. This indicates that not all molecular representations should be used to a) interpret the SAR and b) combined to generate consensus models. Our results suggest that significance testing of activity landscape models and in particular, activity cliffs, is key, prior to the use of such models. PMID:24694189
NASA Astrophysics Data System (ADS)
Santer, B. D.; Wigley, T. M. L.; Boyle, J. S.; Gaffen, D. J.; Hnilo, J. J.; Nychka, D.; Parker, D. E.; Taylor, K. E.
2000-03-01
This paper examines trend uncertainties in layer-average free atmosphere temperatures arising from the use of different trend estimation methods. It also considers statistical issues that arise in assessing the significance of individual trends and of trend differences between data sets. Possible causes of these trends are not addressed. We use data from satellite and radiosonde measurements and from two reanalysis projects. To facilitate intercomparison, we compute from reanalyses and radiosonde data temperatures equivalent to those from the satellite-based Microwave Sounding Unit (MSU). We compare linear trends based on minimization of absolute deviations (LA) and minimization of squared deviations (LS). Differences are generally less than 0.05°C/decade over 1959-1996. Over 1979-1993, they exceed 0.10°C/decade for lower tropospheric time series and 0.15°C/decade for the lower stratosphere. Trend fitting by the LA method can degrade the lower-tropospheric trend agreement of 0.03°C/decade (over 1979-1996) previously reported for the MSU and radiosonde data. In assessing trend significance we employ two methods to account for temporal autocorrelation effects. With our preferred method, virtually none of the individual 1979-1993 trends in deep-layer temperatures are significantly different from zero. To examine trend differences between data sets we compute 95% confidence intervals for individual trends and show that these overlap for almost all data sets considered. Confidence intervals for lower-tropospheric trends encompass both zero and the model-projected trends due to anthropogenic effects. We also test the significance of a trend in d(t), the time series of differences between a pair of data sets. Use of d(t) removes variability common to both time series and facilitates identification of small trend differences. This more discerning test reveals that roughly 30% of the data set comparisons have significant differences in lower-tropospheric trends
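The effective-sample-size adjustment commonly used in this literature to account for temporal autocorrelation can be sketched as follows, on synthetic AR(1) data; the series length, trend, and noise parameters are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic monthly temperature anomalies: small linear trend plus AR(1) noise.
n = 180                                     # 15 years of monthly data
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.0005 * t + noise                      # roughly 0.06 degC/decade

# OLS trend and its naive (independence-assuming) standard error.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se_naive = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))

# Lag-1 autocorrelation of residuals -> effective sample size -> inflated SE.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt((n - 2) / (n_eff - 2))

p_adj = 2 * stats.t.sf(abs(slope) / se_adj, df=n_eff - 2)
print(f"slope={slope:.2e}, r1={r1:.2f}, n_eff={n_eff:.0f}, p={p_adj:.2f}")
```

With positive autocorrelation the effective sample size shrinks and the adjusted standard error grows, which is why many individual-layer trends that look significant under an independence assumption fail to reject zero.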
Crow, C.J.
1985-01-01
Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
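The G-test of homogeneity applied to such faunal count data can be reproduced with SciPy's likelihood-ratio option; the counts below are invented for illustration and are not the paper's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of four organism groups in three bioherms
# (columns: bryozoans, algae, brachiopods, corals; illustrative numbers).
counts = np.array([
    [30, 12,  5,  3],   # bioherm 1
    [10, 25,  8,  2],   # bioherm 2
    [12, 10, 20,  6],   # bioherm 3
])

# G-test of homogeneity: the likelihood-ratio statistic is obtained from
# chi2_contingency by selecting the "log-likelihood" lambda.
g, p, dof, expected = chi2_contingency(counts, lambda_="log-likelihood")
print(f"G = {g:.1f}, dof = {dof}, p = {p:.3g}")
```

A small p here is read exactly as in the abstract: the bioherms are not statistically homogeneous in their faunal composition.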
A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects
Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna
2013-01-01
Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936
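The core idea of testing co-occurrence against a degree-aware null can be sketched as follows. The row-shuffling null here preserves only row degrees, a simplification of the fixed-degree-sequence models used by tools such as SICORE; the screen data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy bipartite screen: rows = 30 miRNAs, cols = 200 target proteins;
# a 1 means "miRNA regulates protein" (illustrative data).
B = (rng.random((30, 200)) < 0.1).astype(int)
B[0, :40] = 1                     # miRNAs 0 and 1 share many targets
B[1, :40] = 1

def cooccurrence(B, i, j):
    """Number of common neighbours (shared targets) of rows i and j."""
    return int(np.dot(B[i], B[j]))

obs = cooccurrence(B, 0, 1)

# Null model: independently shuffle each row's targets, preserving every
# row degree (column degrees are not preserved -- a deliberate simplification).
null = np.array([cooccurrence(rng.permuted(B, axis=1), 0, 1)
                 for _ in range(1000)])
p = float((null >= obs).mean())
print(f"observed co-occurrence = {obs}, empirical p = {p:.3f}")
```

An empirical p near zero flags the pair as co-occurring more often than their degrees alone would explain, which is the signal the abstract links to functional synergy.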
Mar, Raymond A; Spreng, R Nathan; Deyoung, Colin G
2013-09-01
Personality neuroscience involves examining relations between cognitive or behavioral variability and neural variables like brain structure and function. Such studies have uncovered a number of fascinating associations but require large samples, which are expensive to collect. Here, we propose a system that capitalizes on neuroimaging data commonly collected for separate purposes and combines it with new behavioral data to test novel hypotheses. Specifically, we suggest that groups of researchers compile a database of structural (i.e., anatomical) and resting-state functional scans produced for other task-based investigations and pair these data with contact information for the participants who contributed the data. This contact information can then be used to collect additional cognitive, behavioral, or individual-difference data that are then reassociated with the neuroimaging data for analysis. This would allow for novel hypotheses regarding brain-behavior relations to be tested on the basis of large sample sizes (with adequate statistical power) for low additional cost. This idea can be implemented at small scales at single institutions, among a group of collaborating researchers, or perhaps even within a single lab. It can also be implemented at a large scale across institutions, although doing so would entail a number of additional complications.
Statistics, Probability, Significance, Likelihood: Words Mean What We Define Them to Mean
ERIC Educational Resources Information Center
Drummond, Gordon B.; Tom, Brian D. M.
2011-01-01
Statisticians use words deliberately and specifically, but not necessarily in the way they are used colloquially. For example, in general parlance "statistics" can mean numerical information, usually data. In contrast, one large statistics textbook defines the term "statistic" to denote "a characteristic of a "sample", such as the average score",…
NASA Astrophysics Data System (ADS)
Kellerer-Pirklbauer, Andreas
2016-04-01
Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c. 60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trends during the observation period, and regional patterns of change. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale, region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale, no significant trend in any temperature-related parameter was revealed in most cases for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season reveals in general a significant warming as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise from temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter
Significant enhancement of the superconducting properties of MgB2 by polyvinyl alcohol additives
NASA Astrophysics Data System (ADS)
Zeng, R.; Lu, L.; Dou, S. X.
2008-08-01
We report a systematic study of the effects of polymer addition on the lattice parameters, microstructure and superconducting properties of MgB2. Polyvinyl alcohol [-C2H4O-]n (PVA) as a typical polymer was used as an additive to MgB2. It was found that PVA additions have the following features: (1) the polymer can have a very low oxygen (O) content or even none at all, and PVA has a low O content (C:O = 2:1), which reduces the impurities brought into MgB2 from the doping, and (2) PVA decomposes at a temperature of 400-650 °C, which means that the reaction occurs in the same temperature range as MgB2 formation, providing highly reactive C, which homogeneously substitutes for B at this low-temperature range. These features significantly enhance the critical current, Jc, the irreversibility field, Hirr, and the upper critical field, Hc2, of MgB2 compared to un-doped samples or those doped with other carbon sources. In this work, suitable PVA doping levels improved both the connectivity and flux pinning, so that the Jc of PVA-doped MgB2 was improved over the whole field range.
Carr, J.R.; Roberts, K.P.
1989-02-01
Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. Accuracy of universal kriging is compared with that for ordinary kriging; cross-validation is used as the basis for comparison. Hypothesis testing on these results shows that accuracy obtained using universal kriging is not significantly different from accuracy obtained using ordinary kriging. Tests based on normal distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest universal and ordinary kriging are different for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).
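The paired parametric and nonparametric comparisons of cross-validation errors can be sketched as below; the error values are simulated stand-ins for ordinary- vs. universal-kriging residuals, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Cross-validation errors of two estimators at the same 40 held-out sites
# (illustrative values: similar accuracy, no systematic shift).
err_ok = rng.normal(0.0, 1.0, 40)                 # "ordinary kriging"
err_uk = err_ok + rng.normal(0.0, 0.3, 40)        # "universal kriging"

# Paired parametric test (normal-theory, as in the study's t tests).
t_stat, t_p = stats.ttest_rel(err_ok, err_uk)

# Nonparametric counterpart on the same paired errors.
w_stat, w_p = stats.wilcoxon(err_ok, err_uk)

print(f"paired t: p = {t_p:.2f}; Wilcoxon signed-rank: p = {w_p:.2f}")
```

Pairing matters here: the two methods are evaluated at the same held-out sites, so the tests must compare error differences site by site rather than the two error samples independently.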
Key statistics related to CO₂ emissions: Significant contributing countries
Kellogg, M.A.; Edmonds, J.A.; Scott, M.J.; Pomykala, J.S.
1987-07-01
This country selection task report describes and applies a methodology for identifying a set of countries responsible for significant present and anticipated future emissions of CO₂ and other radiatively important gases (RIGs). The identification of countries responsible for CO₂ and other RIGs emissions will help determine to what extent a select number of countries might be capable of influencing future emissions. Once identified, those countries could potentially exercise cooperative collective control of global emissions and thus mitigate the associated adverse effects of those emissions. The methodology developed consists of two approaches: the resource approach and the emissions approach. While conceptually very different, both approaches yield the same fundamental conclusion. The core of any international initiative to control global emissions must include three key countries: the US, USSR, and the People's Republic of China. It was also determined that broader control can be achieved through the inclusion of sixteen additional countries with significant contributions to worldwide emissions.
WISCOD: a statistical web-enabled tool for the identification of significant protein coding regions.
Vilardell, Mireia; Parra, Genis; Civit, Sergi
2014-01-01
Classically, gene prediction programs are based on detecting signals such as boundary sites (splice sites, starts, and stops) and coding regions in the DNA sequence in order to build potential exons and join them into a gene structure. Although nowadays it is possible to improve their performance with additional information from related species or/and cDNA databases, further improvement at any step could help to obtain better predictions. Here, we present WISCOD, a web-enabled tool for the identification of significant protein coding regions, a novel software tool that tackles the exon prediction problem in eukaryotic genomes. WISCOD has the capacity to detect real exons from large lists of potential exons, and it provides an easy-to-use global P value, the expected probability of being a false exon (EPFE), which is useful for ranking potential exons in a probabilistic framework without additional computational costs. The advantage of our approach is that it significantly increases the specificity and sensitivity (both between 80% and 90%) in comparison to other ab initio methods (where they are in the range of 70-75%). WISCOD is written in Java and R and is available to download and run locally on Linux and Windows platforms. PMID:25313355
Alexandrov, N. N.; Go, N.
1994-01-01
We have completed an exhaustive search for the common spatial arrangements of backbone fragments (SARFs) in nonhomologous proteins. This type of local structural similarity, incorporating short fragments of backbone atoms, arranged not necessarily in the same order along the polypeptide chain, appears to be important for protein function and stability. To estimate the statistical significance of the similarities, we have introduced a similarity score. We present several locally similar structures, with a large similarity score, which have not yet been reported. On the basis of the results of pairwise comparison, we have performed hierarchical cluster analysis of protein structures. Our analysis is not limited to comparison of single chains but also includes complex molecules consisting of several subunits. The SARFs with backbone fragments from different polypeptide chains provide a stable interaction between subunits in protein molecules. In many cases the active site of the enzyme is located at the same position relative to the common SARFs, implying a function of certain SARFs as a universal interface for protein-substrate interaction. PMID:8069217
Hulshizer, Randall; Blalock, Eric M
2007-01-01
Background Researchers using RNA expression microarrays in experimental designs with more than two treatment groups often identify statistically significant genes with ANOVA approaches. However, the ANOVA test does not discriminate which of the multiple treatment groups differ from one another. Thus, post hoc tests, such as linear contrasts, template correlations, and pairwise comparisons are used. Linear contrasts and template correlations work extremely well, especially when the researcher has a priori information pointing to a particular pattern/template among the different treatment groups. Further, all pairwise comparisons can be used to identify particular, treatment group-dependent patterns of gene expression. However, these approaches are biased by the researcher's assumptions, and some treatment-based patterns may fail to be detected using these approaches. Finally, different patterns may have different probabilities of occurring by chance, importantly influencing researchers' conclusions about a pattern and its constituent genes. Results We developed a four step, post hoc pattern matching (PPM) algorithm to automate single channel gene expression pattern identification/significance. First, 1-Way Analysis of Variance (ANOVA), coupled with post hoc 'all pairwise' comparisons are calculated for all genes. Second, for each ANOVA-significant gene, all pairwise contrast results are encoded to create unique pattern ID numbers. The number of genes found in each pattern in the data is identified as that pattern's 'actual' frequency. Third, using Monte Carlo simulations, those patterns' frequencies are estimated in random data ('random' gene pattern frequency). Fourth, a Z-score for overrepresentation of the pattern is calculated ('actual' against 'random' gene pattern frequencies). We wrote a Visual Basic program (StatiGen) that automates the PPM procedure, constructs an Excel workbook with standardized graphs of overrepresented patterns, and lists of the genes comprising
Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.
2009-01-01
In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency tested also at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
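A minimal sketch of feature-enrichment scoring in the spirit of WFS might look as follows. The feature matrix, toxicity labels, significance cutoff, and -log10(p) weighting are illustrative assumptions rather than the published model:

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(5)

# Toy screen: 500 compounds x 8 binary structural features; compounds carrying
# feature 0 are made more likely to be "toxic" (illustrative data).
X = (rng.random((500, 8)) < 0.3).astype(int)
toxic = rng.random(500) < (0.1 + 0.5 * X[:, 0])

# Per-feature enrichment: Fisher's exact test on the 2x2 table
# (toxic/non-toxic x feature present/absent), one-sided for enrichment.
weights = np.zeros(8)
for f in range(8):
    table = [[np.sum(X[toxic, f] == 1), np.sum(X[toxic, f] == 0)],
             [np.sum(X[~toxic, f] == 1), np.sum(X[~toxic, f] == 0)]]
    odds, p = fisher_exact(table, alternative="greater")
    if p < 0.05:                        # keep only significant enrichments
        weights[f] = -np.log10(p)

# Additive toxicity score: sum of the significant feature weights a compound carries.
scores = X @ weights
print("most enriched feature:", int(np.argmax(weights)))
```

The additive structure is what makes such a model interpretable: each compound's score decomposes exactly into the enriched substructures it contains.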
Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable and outperforms Fourier transforms for unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively correlations describing persistence) tends to decrease (respectively increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint-distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one. Using the insight obtained by the present statistical study, we re-examine previously reported claims of "log-periodicity" and find that the credibility for log-periodicity in 2D-freely decaying turbulence is weakened while it is strengthened for fracture, for the
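The Monte Carlo estimate of the false-alarm probability of the largest Lomb peak can be sketched as follows, here only for independent Gaussian noise; the paper's heavy-tailed and long-range-correlated cases would swap in different noise generators:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(6)

# Unevenly sampled observation times and a frequency grid (angular frequencies).
t = np.sort(rng.uniform(0, 100, 120))
freqs = np.linspace(0.05, 5.0, 400)

def max_peak(y):
    """Height of the largest normalized Lomb periodogram peak."""
    y = y - y.mean()
    return lombscargle(t, y, freqs, normalize=True).max()

# "Observed" series: a genuine periodicity at angular frequency 0.8 plus noise.
obs = max_peak(np.sin(0.8 * t) + rng.normal(0, 1.0, t.size))

# Monte Carlo false-alarm probability: how often does pure noise
# produce a peak at least this high anywhere on the grid?
null_peaks = np.array([max_peak(rng.normal(0, 1.0, t.size))
                       for _ in range(300)])
false_alarm = float((null_peaks >= obs).mean())
print(f"highest peak = {obs:.2f}, false-alarm probability = {false_alarm:.3f}")
```

Taking the maximum over the whole frequency grid in each null replicate is essential: it accounts for the multiple-comparison effect of scanning many candidate frequencies.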
2013-01-01
Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
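The bootstrap confidence interval for the RV can be sketched as below; the group sizes, effect sizes, and comparator-reference correlation are illustrative assumptions, not the CKD study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)

# Scores of a reference and a comparator measure for 3 clinical severity
# groups; the comparator is built to correlate with the reference (~0.8).
n = 150
ref, comp = [], []
for shift in (0.0, 0.5, 1.0):
    base = rng.normal(shift, 1.0, n)
    ref.append(base)
    comp.append(0.8 * base + rng.normal(0, 0.6, n))

def rv(ref, comp):
    """Relative validity: ratio of ANOVA F-statistics across groups."""
    return f_oneway(*comp).statistic / f_oneway(*ref).statistic

obs = rv(ref, comp)

# Bootstrap: resample subjects within each group, keeping ref/comp paired.
boot = []
for _ in range(500):
    idx = [rng.integers(0, n, n) for _ in range(3)]
    boot.append(rv([r[i] for r, i in zip(ref, idx)],
                   [c[i] for c, i in zip(comp, idx)]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"RV = {obs:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

Keeping the reference and comparator paired within each resampled subject is what lets the high ref-comp correlation narrow the interval, the key power driver identified in the abstract.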
NASA Astrophysics Data System (ADS)
Zhu, Yunmin; Rong, Haibo; Mai, Shaowei; Luo, Xueyi; Li, Xiaoping; Li, Weishan
2015-12-01
Spinel lithium manganese oxide, LiMn2O4, is a promising cathode for lithium ion battery in large-scale applications, because it possesses many advantages compared with currently used layered lithium cobalt oxide (LiCoO2) and olivine phosphate (LiFePO4), including naturally abundant resource, environmental friendliness and high and long work potential plateau. Its poor cyclability under high temperature, however, limits its application. In this work, we report a significant cyclability improvement of LiMn2O4 under elevated temperature by using dimethyl phenylphonite (DMPP) as an electrolyte additive. Charge/discharge tests demonstrate that the application of 0.5 wt.% DMPP yields a capacity retention improvement from 16% to 82% for LiMn2O4 after 200 cycles under 55 °C at 1 C (1C = 148 mAh g-1) between 3 and 4.5 V. Electrochemical and physical characterizations indicate that DMPP is electrochemically oxidized at the potential lower than that for lithium extraction, forming a protective cathode interphase on LiMn2O4, which suppresses the electrolyte decomposition and prevents LiMn2O4 from crystal destruction.
NASA Astrophysics Data System (ADS)
Woodruff, J. D.; Donnelly, J. P.; Emanuel, K.
2007-12-01
Coastal overwash deposits preserved within backbarrier sediments extend the documented record of tropical cyclone strikes back several millennia, providing valuable new data that help to elucidate links between tropical cyclone activity and climate variability. Certain caveats should be considered, however, when assessing trends observed within these paleo-storm records. For instance, gaps in overwash activity at a particular site could simply be artifacts produced by the random nature of these episodic events. Recently, a 5000 year record of intense hurricane strikes has been developed using coarse-grained overwash deposits from Laguna Playa Grande (LPG), a coastal lagoon located on the island of Vieques, Puerto Rico. The LPG record exhibits periods of frequent and infrequent hurricane-induced overwash activity spanning many centuries. These trends are consistent with overwash reconstructions from western Long Island, NY, and have been linked in part to variability in the El Niño/Southern Oscillation and the West African monsoon. Here we assess the statistical significance for active and inactive periods at LPG by creating thousands of synthetic overwash records for the site using storm tracks generated by a coupled ocean-atmosphere hurricane model set to mimic modern climatology. Results show that periods of infrequent overwash activity at the LPG site between 3600 and 1500 yrs BP and 1000 and 250 yrs BP are extremely unlikely to occur under modern climate conditions (above 99 percent confidence). This suggests that the variability observed in the Vieques record is consistent with changing climatic boundary conditions. Overwash frequency is greatest over the last 300 years, with 2 to 3 deposits/century compared to 0.6 deposits/century for earlier active regimes from 2500 to 1000 yrs BP and 5000 to 3600 yrs BP. While this may reflect an unprecedented level of activity over the last 5000 years, it may also in part be due to an undercounting of events in older
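A stripped-down version of this significance test can be sketched with a constant-rate Poisson process standing in for the coupled ocean-atmosphere hurricane model. The 0.6 deposits/century rate and the quiet-interval lengths come from the abstract; the constant-rate null, the 50-century record length, and the function names are assumptions of the sketch.

```python
import random

def gap_p_value(rate_per_century, record_centuries, observed_gap, n_sim=2000, seed=0):
    """Monte Carlo probability that a constant-rate Poisson overwash process
    produces at least one quiet interval as long as the observed gap."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        t, events = 0.0, [0.0]
        while True:
            t += rng.expovariate(rate_per_century)  # waiting time to next overwash
            if t >= record_centuries:
                break
            events.append(t)
        events.append(record_centuries)
        # longest quiet interval in this synthetic record
        if max(b - a for a, b in zip(events, events[1:])) >= observed_gap:
            hits += 1
    return hits / n_sim
```

Under a 0.6 deposits/century rate, a roughly 21-century quiet interval (such as 3600-1500 yrs BP) almost never arises by chance, whereas a 2-century gap is routine, which is the intuition behind testing apparent inactive periods against synthetic records.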
Saisubramanian, N; Edwinoliver, N G; Nandakumar, N; Kamini, N R; Puvanakrishnan, R
2006-08-01
The efficacy of lipase from Aspergillus niger MTCC 2594 as an additive in laundry detergent formulations was assessed using response surface methodology (RSM). A five-level four-factorial central composite design was chosen to model the washing protocol with four critical factors, viz. detergent concentration, lipase concentration, buffer pH, and washing temperature. The model suggested that all the chosen factors had a significant impact on oil removal, and the optimal conditions for the removal of olive oil from cotton fabric were 1.0% detergent, 75 U of lipase, buffer pH of 9.5, and a washing temperature of 25 degrees C. Under optimal conditions, the removal of olive oil from cotton fabric was 33% and 17.1% higher at 25 and 49 degrees C, respectively, with lipase than with detergent alone. Hence, lipase from A. niger could be effectively used as an additive in detergent formulations for the removal of triglyceride soil in both cold and warm wash conditions.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
NASA Astrophysics Data System (ADS)
Alves, Gelio
After the sequencing of many complete genomes, we are in a post-genomic era in which the most important task has shifted from gathering genetic information to organizing the mass of data and understanding how components interact with each other. The former is usually undertaken using bioinformatics methods, while the latter task is generally termed proteomics. Success in both demands correct statistical significance assignments for the results found. In my dissertation, I study two concrete examples: global sequence alignment statistics and peptide sequencing/identification using mass spectrometry. High-performance liquid chromatography coupled to tandem mass spectrometry (HPLC/MS/MS), enabling peptide and thus protein identifications, has become the tool of choice in large-scale proteomics experiments. Peptide identification is usually done by database search methods. The lack of robust statistical significance assignment among current methods motivated the development of a novel de novo algorithm, RAId, whose score statistics provide statistical significance for high-scoring peptides found in our custom, enzyme-digested peptide library. The ease of incorporating post-translational modifications is another important feature of RAId. To organize the massive protein/DNA data accumulated, biologists often cluster proteins according to their similarity via tools such as sequence alignment. Homologous proteins share similar domains. Assessing the similarity of two domains usually requires alignment from head to toe, i.e., a global alignment. Good alignment score statistics with an appropriate null model enable us to distinguish biologically meaningful similarity from chance similarity. There has been much progress in local alignment statistics, which characterize score statistics when alignments tend to appear as a short segment of the whole sequence. For global alignment, which is useful in domain alignment, there is still much room for
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
NASA Astrophysics Data System (ADS)
Williams, Arnold C.; Pachowicz, Peter W.
2004-09-01
Current mine detection research indicates that no single sensor, or single look from a sensor, will detect mines/minefields in real time at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflict among the various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited, and the performance benefits of exploiting them, are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data so that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves obtained by processing multilook data from the high-resolution SAR data of the Veridian X-band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
Injection route and TLR9 agonist addition significantly impact heroin vaccine efficacy.
Bremer, Paul T; Schlosburg, Joel E; Lively, Jenny M; Janda, Kim D
2014-03-01
Active immunization is an effective means of blocking the pharmacodynamic effects of drugs and holds promise as a treatment for heroin addiction. Previously, we demonstrated the efficacy of our first-generation vaccine in blocking heroin self-administration in rats; however, many vaccine components can be modified to further improve performance. Herein we examine the effects of varying the heroin vaccine injection route and adjuvant formulation. Mice immunized via subcutaneous (sc) injection exhibited inferior anti-heroin titers compared to intraperitoneal (ip) and sc/ip coadministration injection routes. Addition of the TLR9 agonist cytosine-guanine oligodeoxynucleotide 1826 (CpG ODN 1826) to the original alum adjuvant elicited superior antibody titers and opioid affinities compared to alum alone. To thoroughly assess vaccine efficacy, full dose-response curves were generated for heroin-induced analgesia in both hot plate and tail immersion tests. Mice treated with CpG ODN 1826 exhibited greatly shifted dose-response curves (10-13-fold vs unvaccinated controls) while non-CpG ODN vaccine groups did not exhibit the same robust effect (2-7-fold shift for ip and combo, 2-3-fold shift for sc). Our results suggest that CpG ODN 1826 is a highly potent adjuvant, and that injection route should be considered in the development of small molecule-protein conjugate vaccines. Lastly, this study establishes a new standard for assessing drug-of-abuse vaccines, wherein a full dose-response curve should be performed in an appropriate behavioral task.
Addition of Zn to the ternary Mg-Ca-Sr alloys significantly improves their antibacterial property
He, Guanping; Wu, Yuanhao; Zhang, Yu; Zhu, Ye; Liu, Yang; Li, Nan; Li, Mei; Zheng, Guan; He, Baohua; Yin, Qingshui; Zheng, Yufeng; Mao, Chuanbin
2015-01-01
Most magnesium (Mg) alloys possess excellent biocompatibility, mechanical properties, and biodegradability for orthopedic applications. However, these alloys may suffer from bacterial infections due to their insufficient antibacterial capability. To reduce post-surgical infections, a series of biocompatible Mg-1Ca-0.5Sr-xZn (x = 0, 2, 4, 6) alloys were fabricated with the addition of antibacterial Zn at variable content and evaluated in terms of their biocompatibility and antibacterial properties. The in vitro corrosion study showed that the Mg-1Ca-0.5Sr-6Zn alloy exhibited a higher hydrogen evolution volume after 100 h of immersion and resulted in a higher pH of the immersion solution. Our work indicated that the Zn-containing Mg alloys exhibited good biocompatibility with high cell viability. The antibacterial studies reveal that the number of bacteria adhering to all of these Mg alloy samples diminished remarkably compared to the Ti-6Al-4V control group. We also found that proliferation of the bacteria was inhibited by extracts of these Mg alloys. Among the prepared alloys, the Mg-1Ca-0.5Sr-6Zn alloy not only exhibited a strong antibacterial effect but also promoted the proliferation of MC3T3-E1 osteoblasts, suggesting that it is a promising alloy with both good antibacterial properties and good biocompatibility for use as an orthopedic implant. PMID:26693010
Krumbholz, Aniko; Anielski, Patricia; Gfrerer, Lena; Graw, Matthias; Geyer, Hans; Schänzer, Wilhelm; Dvorak, Jiri; Thieme, Detlef
2014-01-01
Clenbuterol is a well-established β2-agonist that is prohibited in sports and strictly regulated for use in the livestock industry. During the last few years, clenbuterol-positive results in doping controls and in samples from residents of or travellers from a high-risk country were suspected to be related to the illegal use of clenbuterol for fattening. A sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to detect low clenbuterol residues in hair, with a detection limit of 0.02 pg/mg. A sub-therapeutic application study and a field study with volunteers who have a high risk of contamination were performed. For the application study, a total dosage of 30 µg clenbuterol was applied to 20 healthy volunteers on 5 subsequent days. One month after the beginning of the application, clenbuterol was detected in the proximal hair segment (0-1 cm) at concentrations between 0.43 and 4.76 pg/mg. For the second part, samples from 66 Mexican soccer players were analyzed. In 89% of these volunteers, clenbuterol was detectable in hair at concentrations between 0.02 and 1.90 pg/mg. A comparison of both parts showed no statistical difference between sub-therapeutic application and contamination. In contrast, discrimination from a typical abuse of clenbuterol is apparently possible. These findings make it possible to evaluate the results of real doping control samples. PMID:25388545
ERIC Educational Resources Information Center
Hojat, Mohammadreza; Xu, Gang
2004-01-01
The effect size (ES) is an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: (1) ES is a useful indicator of the…
Petykhov, A B; Maev, I V; Deriabin, V E
2012-01-01
Anthropometry is a technique for obtaining the features needed to characterize changes in the human body in health and disease. For the first time in domestic medicine, a statistical analysis was carried out of anthropometric parameters such as body mass, height, waist line, hip, shoulder, and wrist circumferences, and skinfold thickness over the triceps, below the shoulder blade, on the chest, on the abdomen, and over the biceps, with calculation of indexes and an assessment of possible age influence. Complexes of interrelated anthropometric characteristics were detected. Correlation coefficients (r) were computed, and factor analysis (principal components with subsequent varimax rotation), covariance analysis, and discriminant analysis (applying the Kaiser and Wilks criteria and the F-test) were performed. Intergroup variability of body composition was studied for individual characteristics in groups of healthy individuals (135 subjects aged 45.6 +/- 1.2 years; 56.3% men and 43.7% women) and in internal pathology: patients after gastrectomy, 121 (57.7 +/- 1.2 years; 52% men and 48% women); after a Billroth operation, 214 (56.1 +/- 1.0 years; 53% men and 47% women); after enterectomy, 103 (44.5 +/- 1.8 years; 53% men and 47% women); and after protein-energy wasting of mixed genesis, 206 (29.04 +/- 1.6 years; 79% men and 21% women). The analysis identified a group of interlocking characteristics that includes the anthropometric parameters of subcutaneous fat deposition (skinfold thickness over the triceps and biceps, below the shoulder blade, and on the abdomen) and fatty body mass. These characteristics are interrelated with age and height and show a more pronounced dependence in women, reflecting the development of the fatty component of the body when assessing body mass index in women (unlike men). The waist-hip circumference index differs irrespective of body composition indicators, which does not allow it to be characterized in terms of truncal or
ERIC Educational Resources Information Center
Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O.
2006-01-01
A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1 - alpha) percentile rank score…
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
Kavvoura, Fotini K.; McQueen, Matthew B.; Khoury, Muin J.; Tanzi, Rudolph E.; Bertram, Lars
2008-01-01
The authors evaluated whether there is an excess of statistically significant results in studies of genetic associations with Alzheimer's disease reflecting either between-study heterogeneity or bias. Among published articles on genetic associations entered into the comprehensive AlzGene database (www.alzgene.org) through January 31, 2007, 1,348 studies included in 175 meta-analyses with 3 or more studies each were analyzed. The number of observed studies (O) with statistically significant results (P = 0.05 threshold) was compared with the expected number (E) under different assumptions for the magnitude of the effect size. In the main analysis, the plausible effect size of each association was the summary effect presented in the respective meta-analysis. Overall, 19 meta-analyses (all with eventually nonsignificant summary effects) had a documented excess of O over E: typically, single studies had significant effects pointing in opposite directions, and early summary effects were dissipated over time. Across the whole domain, O was 235 (17.4%), while E was 164.8 (12.2%) (P < 10^-6). The excess showed a predilection for meta-analyses with nonsignificant summary effects and between-study heterogeneity. The excess was seen for all levels of statistical significance and also for studies with borderline P values (P = 0.05-0.10). The excess of significant findings may represent significance-chasing biases in a setting of massive testing. PMID:18779388
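The observed-versus-expected comparison can be sketched as follows. The real analysis derives each study's power from the meta-analytic summary effect; this sketch takes the per-study powers as given and approximates the Poisson-binomial null with a binomial at the average power, so the function names and that approximation are assumptions of the sketch.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def excess_significance(observed_sig, powers):
    """Expected number of significant studies (sum of per-study powers)
    and an approximate one-sided probability of seeing at least
    `observed_sig` significant studies under that expectation."""
    expected = sum(powers)
    n = len(powers)
    return expected, binom_sf(observed_sig, n, expected / n)
```

For instance, 25 significant results among 100 studies each powered at 0.1 (expected E = 10) gives a vanishingly small tail probability, the signature of an excess of significant findings.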
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2014-01-01
Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
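Among the parametric methods listed, ridge regression shrinks coefficients toward zero via a penalty λ. In the single-marker, no-intercept case it has the closed form β = Σxy / (Σx² + λ), which is enough to illustrate the idea; this is a toy sketch under that simplification, not any of the reviewed implementations.

```python
def ridge_coef(x, y, lam):
    """Single-predictor, no-intercept ridge coefficient:
    beta = sum(x*y) / (sum(x^2) + lam); lam = 0 recovers least squares."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)
```

With y = 2x exactly, lam = 0 returns the least-squares slope 2.0, and increasing lam shrinks the estimate toward zero, trading variance for bias, which is the mechanism the whole ridge/LASSO/BLUP family exploits.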
Potts, T.T.; Hylko, J.M.; Almond, D.
2007-07-01
A company's overall safety program becomes an important consideration to continue performing work and for procuring future contract awards. When injuries or accidents occur, the employer ultimately loses on two counts - increased medical costs and employee absences. This paper summarizes the human and organizational components that contributed to successful safety programs implemented by WESKEM, LLC's Environmental, Safety, and Health Departments located in Paducah, Kentucky, and Oak Ridge, Tennessee. The philosophy of 'safety, compliance, and then production' and programmatic components implemented at the start of the contracts were qualitatively identified as contributing factors resulting in a significant accumulation of safe work hours and an Experience Modification Rate (EMR) of <1.0. Furthermore, a study by the Associated General Contractors of America quantitatively validated components, already found in the WESKEM, LLC programs, as contributing factors to prevent employee accidents and injuries. Therefore, an investment in the human and organizational components now can pay dividends later by reducing the EMR, which is the key to reducing Workers' Compensation premiums. Also, knowing your employees' demographics and taking an active approach to evaluate and prevent fatigue may help employees balance work and non-work responsibilities. In turn, this approach can assist employers in maintaining a healthy and productive workforce. For these reasons, it is essential that safety needs be considered as the starting point when performing work. (authors)
Wang, Q.; Denton, D.L.; Shukla, R.
2000-01-01
As a follow-up to the recommendations of the September 1995 SETAC Pellston Workshop on Whole Effluent Toxicity (WET) regarding test methods and appropriate endpoints, this paper discusses the applications and statistical properties of a statistical criterion of minimum significant difference (MSD). The authors examined the upper limits of acceptable MSDs as an acceptance criterion in the case of normally distributed data. The implications of this approach are examined in terms of the false negative rate as well as the false positive rate. Results indicated that the proposed approach has reasonable statistical properties. Reproductive data from short-term chronic WET tests with Ceriodaphnia dubia were used to demonstrate the applications of the proposed approach. The data were collected by the North Carolina Department of Environment, Health, and Natural Resources (Raleigh, NC, USA) as part of their National Pollutant Discharge Elimination System program.
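For normally distributed data with equal group sizes, the MSD of a two-group comparison is the critical t value times sqrt(2·MSE/n). The sketch below substitutes a normal quantile for the t critical value, an assumption that is reasonable only for large within-test degrees of freedom; the function name and parameterization are mine, not the paper's.

```python
import math
from statistics import NormalDist

def min_significant_difference(mse, n_per_group, alpha=0.05):
    """MSD for a one-sided two-group comparison with equal group sizes,
    using a normal approximation to the t critical value (assumes large
    within-test degrees of freedom)."""
    z = NormalDist().inv_cdf(1 - alpha)  # approx. t critical value
    return z * math.sqrt(2.0 * mse / n_per_group)
```

An upper limit on the acceptable MSD (often expressed as a percentage of the control mean) then serves as the test-acceptance criterion the paper evaluates.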
Denton, Debra L; Diamond, Jerry; Zheng, Lei
2011-05-01
The U.S. Environmental Protection Agency (U.S. EPA) and state agencies implement the Clean Water Act, in part, by evaluating the toxicity of effluent and surface water samples. A common goal for both regulatory authorities and permittees is confidence in an individual test result (e.g., no-observed-effect concentration [NOEC], pass/fail, 25% effective concentration [EC25]), which is used to make regulatory decisions, such as reasonable potential determinations, permit compliance, and watershed assessments. This paper discusses an additional statistical approach (test of significant toxicity [TST]), based on bioequivalence hypothesis testing, or, more appropriately, test of noninferiority, which examines whether there is a nontoxic effect at a single concentration of concern compared with a control. Unlike the traditional hypothesis testing approach in whole effluent toxicity (WET) testing, TST is designed to incorporate explicitly both α and β error rates at levels of toxicity that are unacceptable and acceptable, given routine laboratory test performance for a given test method. Regulatory management decisions are used to identify unacceptable toxicity levels for acute and chronic tests, and the null hypothesis is constructed such that test power is associated with the ability to declare correctly a truly nontoxic sample as acceptable. This approach provides a positive incentive to generate high-quality WET data to make informed decisions regarding regulatory decisions. This paper illustrates how α and β error rates were established for specific test method designs and tests the TST approach using both simulation analyses and actual WET data. In general, those WET test endpoints having higher routine (e.g., 50th percentile) within-test control variation, on average, have higher method-specific α values (type I error rate), to maintain a desired type II error rate. This paper delineates the technical underpinnings of this approach and demonstrates the benefits
NASA Astrophysics Data System (ADS)
Baluev, Roman V.
2013-11-01
We consider the `multifrequency' periodogram, in which the putative signal is modelled as a sum of two or more sinusoidal harmonics with independent frequencies. It is useful in cases when the data may contain several periodic components, especially when their interaction with each other and with the data sampling patterns might produce misleading results. Although the multifrequency statistic itself was constructed earlier, for example by G. Foster in his CLEANest algorithm, its probabilistic properties (the detection significance levels) are still poorly known and much of what is deemed known is not rigorous. These detection levels are nonetheless important for data analysis. We argue that to prove the simultaneous existence of all n components revealed in a multiperiodic variation, it is mandatory to apply at least 2n - 1 significance tests, among which most involve various multifrequency statistics, and only n tests are single-frequency ones. The main result of this paper is an analytic estimation of the statistical significance of the frequency tuples that the multifrequency periodogram can reveal. Using the theory of extreme values of random fields (the generalized Rice method), we find a useful approximation to the relevant false alarm probability. For the double-frequency periodogram, this approximation is given by the elementary formula (π/16) W² e^(-z) z², where W denotes the normalized width of the settled frequency range, and z is the observed periodogram maximum. We carried out intensive Monte Carlo simulations to show that the practical quality of this approximation is satisfactory. A similar analytic expression for the general multifrequency periodogram is also given, although with less numerical verification.
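The double-frequency approximation is elementary to evaluate directly; the function name below is mine, with z the observed periodogram maximum and W the normalized width of the settled frequency range, as defined in the abstract.

```python
import math

def fap_double_frequency(z, W):
    """Analytic false-alarm-probability approximation for the
    double-frequency periodogram: (pi/16) * W^2 * exp(-z) * z^2."""
    return (math.pi / 16.0) * W ** 2 * math.exp(-z) * z ** 2
```

For example, a periodogram maximum of z = 20 over a range with W = 100 yields a false alarm probability of roughly 1.6e-3, so the detection would survive at the 1% level but not at 0.1%.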
NASA Astrophysics Data System (ADS)
Casati, Michele
2014-05-01
The assertion that solar activity may play a significant role in triggering large volcanic eruptions is, and has been, discussed by many geophysicists. Numerous scientific papers have proposed a possible correlation between these events and the electromagnetic coupling between the Earth and the Sun, but none has been able to demonstrate a statistically significant relationship between large volcanic eruptions and any of the relevant series, such as geomagnetic activity, solar wind, or sunspot number. In our research, we compare the 148 volcanic eruptions of index VEI4 and the 37 major historical volcanic eruptions of index VEI5 or greater, recorded from 1610 to 2012, with the sunspot number (SSN). Taking as the threshold value a monthly sunspot number of 46 (recorded during the great VEI6 eruption of Krakatoa, August 1883), we note some possible relationships and conduct a statistical test. • Of the 31 large historical volcanic eruptions of index VEI5+ recorded between 1610 and 1955, 29 occurred when the SSN was below 46. The remaining 2 eruptions were recorded not at SSN < 46 but during the solar maxima of the solar cycle of 1739 and of solar cycle No. 14 (the Shikotsu eruption of 1739 and Ksudach in 1907). • Of the 8 large historical volcanic eruptions of index VEI6+ recorded from 1610 to the present, 7 occurred at SSN < 46 and, more specifically, within the three well-known grand solar minima: the Maunder minimum (1645-1710), the Dalton minimum (1790-1830), and the solar minima that occurred between 1880 and 1920. The only exception is the eruption of Pinatubo in June 1991, recorded at the solar maximum of cycle 22. • Of the 6 major historical volcanic eruptions of index VEI5+ recorded after 1955, 5 were recorded not during periods of low solar activity but during the solar maxima of cycles 19, 21, and 22. The significance tests, conducted with the chi-square statistic χ² = 7.782, detect a
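The chi-square test reported above can be reproduced in form, though not in the exact counts, which the abstract does not fully tabulate, with a standard 2×2 contingency computation; the example counts in the usage note are illustrative only.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]]; compare against 3.841 for significance at
    alpha = 0.05 with one degree of freedom."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2
```

As an illustration, 29 of 31 eruptions falling below the SSN threshold, tested against a hypothetical even split of calendar months above and below it, produces a statistic far beyond the 3.841 critical value; the paper's own counts and baseline would of course differ.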
Webb-Robertson, Bobbie-Jo M.; McCue, Lee Ann; Waters, Katrina M.; Matzke, Melissa M.; Jacobs, Jon M.; Metz, Thomas O.; Varnum, Susan M.; Pounds, Joel G.
2010-11-01
Liquid chromatography-mass spectrometry-based (LC-MS) proteomics uses peak intensities of proteolytic peptides to infer the differential abundance of peptides/proteins. However, substantial run-to-run variability in peptide intensities and observations (presence/absence) of peptides makes data analysis quite challenging. The missing abundance values in LC-MS proteomics data are difficult to address with traditional imputation-based approaches because the mechanisms by which data are missing are unknown a priori. Data can be missing due to random mechanisms such as experimental error, or non-random mechanisms such as a true biological effect. We present a statistical approach that uses a test of independence known as a G-test to test the null hypothesis of independence between the number of missing values and the experimental groups. We pair the G-test results evaluating independence of missing data (IMD) with a standard analysis of variance (ANOVA) that uses only means and variances computed from the observed data. Each peptide is therefore represented by two statistical confidence metrics, one for qualitative differential observation and one for quantitative differential intensity. We use two simulated and two real LC-MS datasets to demonstrate the robustness and sensitivity of the ANOVA-IMD approach for assigning confidence to peptides with significant differential abundance among experimental groups.
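The G-test paired with ANOVA here is the likelihood-ratio analogue of the chi-square test of independence. A minimal sketch for a single peptide, assuming a 2x2 table of observed/missing counts per experimental group (the counts shown are hypothetical, not from the paper's datasets):

```python
import math

def g_test(observed):
    """G-test of independence on an r x c table of counts:
    G = 2 * sum O * ln(O / E), where E is the expected count under
    independence; G is compared against a chi-square distribution
    with (r-1)(c-1) degrees of freedom."""
    row_tot = [sum(row) for row in observed]
    col_tot = [sum(col) for col in zip(*observed)]
    n = sum(row_tot)
    g = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            if o > 0:  # zero cells contribute nothing to the sum
                e = row_tot[i] * col_tot[j] / n
                g += o * math.log(o / e)
    return 2 * g

# Hypothetical peptide: rows = (observed, missing) spectral counts,
# columns = experimental groups A and B
counts = [[18, 5],
          [2, 15]]
print(g_test(counts))  # compare against 3.841 (chi-square, 1 df, 5%)
```

A large G indicates that missingness depends on group membership, i.e., qualitative differential observation; the quantitative half of the approach is the ANOVA on the observed intensities.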
NASA Technical Reports Server (NTRS)
Meador, Michael A.
2005-01-01
Single-wall carbon nanotubes have been shown to possess a combination of outstanding mechanical, electrical, and thermal properties. The use of carbon nanotubes as an additive to improve the mechanical properties of polymers and/or enhance their thermal and electrical conductivity has been a topic of intense interest. Nanotube-modified polymeric materials could find a variety of applications in NASA missions including large-area antennas, solar arrays, and solar sails; radiation shielding materials for vehicles, habitats, and extravehicular activity suits; and multifunctional materials for vehicle structures and habitats. Use of these revolutionary materials could reduce vehicle weight significantly and improve vehicle performance and capabilities.
NASA Astrophysics Data System (ADS)
Wang, H. J.; Shi, W. L.; Chen, X. H.
2006-05-01
The West Development Policy being implemented in China is causing significant land use and land cover (LULC) changes in West China. With the up-to-date satellite database of the Global Land Cover Characteristics Database (GLCCD) that characterizes the lower boundary conditions, the regional climate model RIEMS-TEA is used to simulate possible impacts of the significant LULC variation. The model was run for five continuous three-month periods from 1 June to 1 September of 1993, 1994, 1995, 1996, and 1997, and the results of the five groups are examined by means of a Student's t-test to identify the statistical significance of regional climate variation. The main results are: (1) The regional climate is affected by the LULC variation because the equilibrium of water and heat transfer at the air-vegetation interface is changed. (2) The integrated impact of the LULC variation on regional climate is not limited to West China, where the LULC varies, but extends to some areas in the model domain where the LULC does not vary at all. (3) The East Asian monsoon system and its vertical structure are adjusted by the large-scale LULC variation in western China, where the consequences are the enhancement of the westward water vapor transfer from the east and the relevant increase of wet-hydrostatic energy in the middle-upper atmospheric layers. (4) The ecological engineering in West China significantly affects the regional climate in Northwest China, North China, and the middle-lower reaches of the Yangtze River; there are obvious effects in South, Northeast, and Southwest China, but minor effects in Tibet.
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first; then the areas of alarm are reduced by MSc, at the cost of missing some earthquakes in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faulting events. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
ERIC Educational Resources Information Center
Harrison, Judith; Thompson, Bruce; Vannest, Kimberly J.
2009-01-01
This article reviews the literature on interventions targeting the academic performance of students with attention-deficit/hyperactivity disorder (ADHD) and does so within the context of the statistical significance testing controversy. Both the arguments for and against null hypothesis statistical significance tests are reviewed. Recent standards…
ERIC Educational Resources Information Center
Weigle, David C.
The purposes of the present paper are to address the historical development of statistical significance testing and to briefly examine contemporary practices regarding such testing in the light of these historical origins. Precursors leading to the advent of statistical significance testing are examined as are more recent controversies surrounding…
Zamboras, R.L.
1995-10-01
The Oligocene Hackberry sands of the Hackberry Embayment represent a complex and elusive exploration target. 3-D seismic evaluation along the headward erosional limits of the embayment provides a reconstructive framework of tectonic and sedimentation patterns that facilitates hydrocarbon exploration. The 3-D seismic along the Orange County, Texas, portion of the Oligocene Hackberry trend indicates: (1) similarities of the Hackberry structural and depositional setting to that of the underlying Eocene Yegua Formation; (2) four distinct cyclical sedimentation episodes associated with basin-floor slump faulting; (3) the usefulness of seismic attributes as direct hydrocarbon indicators; and (4) the potential for significant oil and gas reserve additions in a mature trend. The Hackberry Embayment represents a microcosm of basin structural and depositional processes. Utilizing 3-D seismic to lower risk and finding costs will renew interest in trends such as the Hackberry of the Upper Texas-Louisiana Gulf Coast.
Liu, Yiwen; Ni, Bing-Jie
2015-01-01
The application of the anaerobic ammonium oxidation (Anammox) process is often limited by the slow growth rate of Anammox bacteria. As an essential substrate element required for culturing Anammox sludge, Fe(II) is expected to affect Anammox bacterial growth. This work systematically studied the effects of Fe(II) addition on Anammox activity based on kinetic analysis of the specific growth rate, using data from batch tests with an enriched Anammox sludge at different dosing levels. Results clearly demonstrated that appropriate Fe(II) dosing (i.e., 0.09 mM) significantly enhanced the specific Anammox growth rate, up to 0.172 d−1 compared to 0.118 d−1 at the regular Fe(II) level (0.03 mM). The relationship between Fe(II) concentration and specific Anammox growth rate was found to be well described by typical substrate inhibition kinetics, which was integrated into the currently well-established Anammox model to describe the enhanced Anammox growth with Fe(II) addition. The validity of the integrated Anammox model was verified using long-term experimental data from three independent Anammox reactors with different Fe(II) dosing levels. This Fe(II)-based approach could potentially be implemented to enhance the process rate for possible mainstream application of Anammox technology, toward energy-autarkic wastewater treatment. PMID:25644239
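The abstract does not name the specific substrate inhibition model fitted; the Haldane equation is the typical form, so the sketch below assumes it. All parameter values are illustrative, chosen only so that growth peaks near 0.09 mM as the study qualitatively reports; they are not the paper's fitted constants.

```python
def haldane_growth_rate(s, mu_max, ks, ki):
    """Haldane-type substrate inhibition kinetics:
    mu(S) = mu_max * S / (Ks + S + S^2 / Ki).
    Growth peaks at S* = sqrt(Ks * Ki) and declines beyond it."""
    return mu_max * s / (ks + s + s * s / ki)

# Assumed illustrative parameters (d^-1, mM, mM); peak is at
# sqrt(0.02 * 0.5) = 0.1 mM, close to the reported 0.09 mM optimum
mu_max, ks, ki = 0.25, 0.02, 0.5
for s in (0.03, 0.09, 0.30):  # regular, near-optimal, excessive dosing
    mu = haldane_growth_rate(s, mu_max, ks, ki)
    print(f"Fe(II) = {s:.2f} mM -> mu = {mu:.3f} d^-1")
```

The inhibition term S²/Ki is what lets the model capture both the enhancement at moderate Fe(II) and the decline at excessive dosing, which a plain Monod expression cannot.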
Bornmann, Lutz; Leydesdorff, Loet
2013-01-01
Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i) a long publication window (1981 to 2010), (ii) a differentiation in (broad or more narrow) subject areas, and (iii) allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average), but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area. PMID:23418600
Stuijfzand, Wynand J.; Danad, Ibrahim; Raijmakers, Pieter G.; Marcu, C. Bogdan; Heymans, Martijn W.; van Kuijk, Cornelis C.; van Rossum, Albert C.; Nieman, Koen; Min, James K.; Leipsic, Jonathon; van Royen, Niels; Knaapen, Paul
2015-01-01
OBJECTIVES The current study evaluates the incremental value of transluminal attenuation gradient (TAG), TAG with corrected contrast opacification (CCO), and TAG with exclusion of calcified coronary segments (ExC) over coronary computed tomography angiogram (CTA) alone, using fractional flow reserve (FFR) as the gold standard. BACKGROUND TAG is defined as the contrast opacification gradient along the length of a coronary artery on a coronary CTA. Preliminary data suggest that TAG provides additional functional information. Interpretation of TAG is hampered by multiple-heartbeat acquisition algorithms and coronary calcifications. Two correction models have been proposed, based either on dephasing of contrast delivery, relating coronary density to the corresponding descending aortic opacification (TAG-CCO), or on excluding calcified coronary segments (TAG-ExC). METHODS Eighty-five patients with intermediate probability of coronary artery disease were prospectively included. All patients underwent step-and-shoot 256-slice coronary CTA. TAG, TAG-CCO, and TAG-ExC analyses were performed, followed by invasive coronary angiography in conjunction with FFR measurements of all major coronary branches. RESULTS Thirty-four patients (40%) were diagnosed with hemodynamically significant coronary artery disease (i.e., FFR ≤0.80). On a per-vessel basis (n = 253), 59 lesions (23%) were graded as hemodynamically significant, and the diagnostic accuracy of coronary CTA (diameter stenosis ≥50%) was 95%, 75%, 98%, and 54% for sensitivity, specificity, negative predictive value, and positive predictive value, respectively. TAG and TAG-ExC did not discriminate between vessels with and without hemodynamically significant lesions (−13.5 ± 17.1 HU [Hounsfield units] per 10 mm vs. −11.6 ± 13.3 HU per 10 mm, p = 0.36; and −13.1 ± 15.9 HU per 10 mm vs. −11.4 ± 11.7 HU per 10 mm, p = 0.77, respectively). TAG-CCO was lower in vessels with a hemodynamically significant lesion (−0
NASA Technical Reports Server (NTRS)
Friedlander, Alan L.; Harry, David P., III
1960-01-01
An exploratory analysis of vehicle guidance during the approach to a target planet is presented. The objective of the guidance maneuver is to guide the vehicle to a specific perigee distance with a high degree of accuracy and minimum corrective velocity expenditure. The guidance maneuver is simulated by considering the random sampling of real measurements with significant error and reducing this information to prescribe appropriate corrective action. The instrumentation system assumed includes optical and/or infrared devices to indicate range and a reference angle in the trajectory plane. Statistical results are obtained by Monte Carlo techniques and are shown as the expectation of guidance accuracy and velocity-increment requirements. Results are nondimensional and applicable to any planet within the limits of two-body assumptions. The problem of determining how many corrections to make and when to make them is a consequence of the conflicting requirements of accurate trajectory determination and minimum propulsive expenditure. Optimum values were found for a vehicle approaching a planet along a parabolic trajectory with an initial perigee distance of 5 radii and a target perigee of 1.02 radii. In this example measurement errors were less than 1 minute of arc. Results indicate that four corrections applied in the vicinity of 50, 16, 15, and 1.5 radii, respectively, yield minimum velocity-increment requirements. Thrust devices capable of producing a large variation of velocity-increment size are required. For a vehicle approaching the earth, miss distances within 32 miles are obtained with 90-percent probability. Total velocity increments used in guidance are less than 3300 feet per second with 90-percent probability. It is noted that the above representative results are valid only for the particular guidance scheme hypothesized in this analysis. A parametric study is presented which indicates the effects of measurement error size, initial perigee, and initial energy on the guidance
Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff
2014-01-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457
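The significance calls the subjects made visually correspond to an ordinary correlation test on the plotted points. The sketch below shows how a P < 0.05 call for a scatterplot could be computed from the Pearson correlation's t statistic; the tail probability uses a normal approximation to the t distribution (a simplification, reasonable for samples of a few dozen points or more), and the example data are invented.

```python
import math

def pearson_pvalue(x, y):
    """Pearson correlation r and a two-sided p-value from the
    t statistic t = r * sqrt((n-2) / (1-r^2)), with a normal
    approximation for the tail probability. Assumes |r| < 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided normal tail
    return r, p

# Invented example: a noisy but clearly increasing relationship
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2.1, 1.3, 3.8, 2.9, 5.2, 4.1, 6.5, 5.9, 8.2, 7.4]
r, p = pearson_pvalue(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")  # "significant" if p < 0.05
```

This is the quantity a subject is implicitly estimating by eye, which is why small effects (low r at moderate n) are the hardest case in the study's results.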
Greenhalgh, T.
1997-01-01
It is possible to be seriously misled by taking the statistical competence (and/or the intellectual honesty) of authors for granted. Some common errors committed (deliberately or inadvertently) by the authors of papers are given in the final box. PMID:9277611
Nibhanipudi, Kumara V
2016-01-01
Objective. A study to determine if the addition of palatal petechiae to the Centor criteria adds more value for the clinical diagnosis of acute strep pharyngitis in children. Hypothesis. In children, the Centor criteria do not cover all the symptoms and signs of acute strep pharyngitis. We hypothesize that the addition of palatal petechiae to the Centor criteria will increase the possibility of clinical diagnosis of group A streptococcal pharyngitis in children. Methods. One hundred patients with a complaint of sore throat were enrolled in the study. All the patients were examined clinically using the Centor criteria. They were also examined for other signs and symptoms such as petechial lesions over the palate, abdominal pain, and skin rash. All the patients were given rapid strep tests, and throat cultures were sent. No antibiotics were given until culture results were obtained. Results. The sample size was 100 patients. All 100 had fever, sore throat, and erythema of the tonsils. Twenty of the 100 patients had tonsillar exudates, 85/100 had tender anterior cervical lymph nodes, and 86/100 had no cough. In total, 9 of the 100 patients had positive throat cultures. We observed that petechiae over the palate, a very significant sign, are not included in the Centor criteria. Palatal petechiae were present in 8 of the 100 patients. Six of these 8 with palatal petechiae had a positive throat culture for strep (75%). Only 7 of the 20 with exudates had a positive strep culture. Sixteen of the 100 patients had a positive rapid strep test. The 84/100 who had a negative rapid strep test also had negative throat cultures. Statistics. We used Fisher's exact test to compare the presence of exudates and of palatal petechiae against positive and negative throat cultures; the resultant P value was <.0001. Conclusion. Our study concludes that the addition of petechiae over the palate to the Centor criteria will increase the possibility of diagnosing acute group A streptococcal
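The Fisher's exact test cited under Statistics can be reproduced by enumerating hypergeometric probabilities for a 2x2 table. The table below is reconstructed from the abstract's counts under the assumption that all 9 culture-positive patients are accounted for (6 with palatal petechiae, 3 without); that split is an inference from the abstract, not an explicitly stated table.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Reconstructed counts: petechiae present (6 culture+, 2 culture-)
# vs petechiae absent (3 culture+, 89 culture-)
print(fisher_exact_2x2(6, 2, 3, 89))  # well below .0001, as reported
```

With such lopsided margins the exact test is appropriate where a chi-square approximation would not be, since the expected count in the petechiae-positive/culture-positive cell is far below 5.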
Collar, Concha; Conte, Paola; Fadda, Costantino; Piga, Antonio
2015-10-01
The capability of different gluten-free (GF) basic formulations made of flour (rice, amaranth and chickpea) and starch (corn and cassava) blends, to make machinable and viscoelastic GF-doughs in absence/presence of single hydrocolloids (guar gum, locust bean and psyllium fibre), proteins (milk and egg white) and surfactants (neutral, anionic and vegetable oil) have been investigated. Macroscopic (high deformation) and macromolecular (small deformation) mechanical, viscometric (gelatinization, pasting, gelling) and thermal (gelatinization, melting, retrogradation) approaches were performed on the different matrices in order to (a) identify similarities and differences in GF-doughs in terms of a small number of rheological and thermal analytical parameters according to the formulations and (b) to assess single and interactive effects of basic ingredients and additives on GF-dough performance to achieve GF-flat breads. Larger values for the static and dynamic mechanical characteristics and higher viscometric profiles during both cooking and cooling corresponded to doughs formulated with guar gum and Psyllium fibre added to rice flour/starch and rice flour/corn starch/chickpea flour, while surfactant- and protein-formulated GF-doughs added to rice flour/starch/amaranth flour based GF-doughs exhibited intermediate and lower values for the mechanical parameters and poorer viscometric profiles. In addition, additive-free formulations exhibited higher values for the temperature of both gelatinization and retrogradation and lower enthalpies for the thermal transitions. Single addition of 10% of either chickpea flour or amaranth flour to rice flour/starch blends provided a large GF-dough hardening effect in presence of corn starch and an intermediate effect in presence of cassava starch (chickpea), and an intermediate reinforcement of GF-dough regardless the source of starch (amaranth). At macromolecular level, both chickpea and amaranth flours, singly added, determined
Yokoyama, Shozo; Takenaka, Naomi
2005-04-01
Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.
Kurtz, S.E.; Fields, D.E.
1983-10-01
This report describes a version of the TERPED/P computer code that is very useful for small data sets. A new algorithm for determining the Kolmogorov-Smirnov (KS) statistics is used to extend program applicability. The TERPED/P code facilitates the analysis of experimental data and assists the user in determining its probability distribution function. Graphical and numerical tests are performed interactively in accordance with the user's assumption of normally or log-normally distributed data. Statistical analysis options include computation of the chi-square statistic and the KS one-sample test statistic and the corresponding significance levels. Cumulative probability plots of the user's data are generated via a local graphics terminal, a local line printer or character-oriented terminal, or a remote high-resolution graphics device such as the FR80 film plotter or the Calcomp paper plotter. Several useful computer methodologies suffer from limitations of their implementations of the KS nonparametric test. This test is one of the more powerful analysis tools for examining the validity of an assumption about the probability distribution of a set of data. KS algorithms are found in other analysis codes, including the Statistical Analysis Subroutine (SAS) package and earlier versions of TERPED. The inability of these algorithms to generate significance levels for sample sizes less than 50 has limited their usefulness. The release of the TERPED code described herein contains algorithms to allow computation of the KS statistic and significance level for data sets of as few as three points, if the user wishes. Values computed for the KS statistic are within 3% of the correct value for all data set sizes.
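As a hedged illustration (not the TERPED/P algorithm itself), the idea of running a KS one-sample goodness-of-fit test on a very small sample can be sketched with SciPy; the data below are simulated, and fitting the normal parameters from the sample biases the p-value (dedicated corrections such as Lilliefors' exist):

```python
import numpy as np
from scipy import stats

# Simulated "experimental" data set with only five points
rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=5)

# KS one-sample test against a normal distribution whose parameters
# are estimated from the sample itself
d_stat, p_value = stats.kstest(data, "norm",
                               args=(data.mean(), data.std(ddof=1)))
```

Modern implementations compute exact small-sample significance levels, which is the capability the TERPED release added over earlier codes.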
Fixed-ratio ray designs have been used for detecting and characterizing interactions of large numbers of chemicals in combination. Single chemical dose-response data are used to predict an “additivity curve” along an environmentally relevant ray. A “mixture curve” is estimated fr...
Boareto, Marcelo; Caticha, Nestor
2014-01-01
Microarray data analysis typically consists of identifying a list of differentially expressed genes (DEG), i.e., the genes that are differentially expressed between two experimental conditions. Variance shrinkage methods have been considered a better choice than the standard t-test for selecting the DEG because they correct the dependence of the error on the expression level. This dependence is mainly caused by errors in background correction, which more severely affect genes with low expression values. Here, we propose a new method for identifying the DEG that overcomes this issue and does not require background correction or variance shrinkage. Unlike current methods, our methodology is easy to understand and implement. It consists of applying the standard t-test directly on the normalized intensity data, which is possible because the probe intensity is proportional to the gene expression level and because the t-test is scale- and location-invariant. This methodology considerably improves the sensitivity and robustness of the list of DEG when compared with the t-test applied to preprocessed data and to the most widely used shrinkage methods, Significance Analysis of Microarrays (SAM) and Linear Models for Microarray Data (LIMMA). Our approach is useful especially when the genes of interest have small differences in expression and therefore get ignored by standard variance shrinkage methods.
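The core of the proposal, per-gene t-tests applied directly to normalized intensities, can be sketched as follows; the intensities, gene counts, and fold change are all simulated assumptions, not data from the paper:

```python
import numpy as np
from scipy import stats

# Simulated normalized intensities for two conditions, five replicates
# each; no background correction is applied, per the proposed method
rng = np.random.default_rng(42)
n_genes, n_rep = 100, 5
cond_a = rng.lognormal(mean=5.0, sigma=0.3, size=(n_genes, n_rep))
cond_b = rng.lognormal(mean=5.0, sigma=0.3, size=(n_genes, n_rep))
cond_b[:10] *= 2.0  # make the first 10 genes differentially expressed

# Standard two-sample t-test, gene by gene, on the intensity scale;
# valid here because the t-test is scale- and location-invariant
t_stat, p_val = stats.ttest_ind(cond_a, cond_b, axis=1)
candidate_deg = np.flatnonzero(p_val < 0.05)
```

A real analysis would additionally control the false discovery rate over the resulting p-value list.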
NASA Astrophysics Data System (ADS)
Miranda, M.; Dorrío, B. V.; Blanco, J.; Diz-Bugarín, J.; Ribas, F.
2011-01-01
Several metrological applications base their measurement principle on the phase sum or difference between two patterns, one original s(r,phi) and another modified t(r,phi+Δphi). Additive or differential phase shifting algorithms directly recover the sum 2phi+Δphi or the difference Δphi of phases without requiring prior calculation of the individual phases. These algorithms can be constructed, for example, from a suitable combination of known phase shifting algorithms. Little has been written on the design, analysis and error compensation of these new two-stage algorithms. Previously we have used computer simulation to study, in a linear approach or with a filter process in reciprocal space, the response of several such families to the main error sources. In this work we present an error analysis that uses Monte Carlo simulation to achieve results in good agreement with those obtained with spatial and temporal methods.
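A minimal Monte Carlo error study of this kind can be sketched for a standard four-step phase-shifting algorithm under additive intensity noise; the phase value, modulation, and noise level below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(11)
phi_true = 0.7                                    # test phase, rad
steps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
n_trials = 20000

errors = np.empty(n_trials)
for k in range(n_trials):
    I = 1.0 + 0.8 * np.cos(phi_true + steps)      # ideal fringe samples
    I = I + rng.normal(0.0, 0.01, size=4)         # additive intensity noise
    phi_est = np.arctan2(I[3] - I[1], I[0] - I[2])  # four-step estimator
    errors[k] = phi_est - phi_true

bias, spread = errors.mean(), errors.std()        # Monte Carlo error figures
```

Repeating the experiment over many noise realizations yields empirical bias and spread that can be compared against linear-approximation predictions.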
NASA Technical Reports Server (NTRS)
Druzhinin, I. P.; Khamyanova, N. V.; Yagodinskiy, V. N.
1974-01-01
Statistical evaluations of the significance of the relationship between abrupt changes in solar activity and discontinuities in the multi-year pattern of an epidemic process are reported. They reliably (with probability greater than 99.9%) show the real nature of this relationship and its large contribution (about half) to the formation of discontinuities in the multi-year pattern of the processes in question.
Mark Burden, Adrian; Lewis, Sandra Elizabeth; Willcox, Emma
2014-12-01
Numerous ways exist to process raw electromyograms (EMGs). However, the effect of altering processing methods on peak and mean EMG has seldom been investigated. The aim of this study was to investigate the effect of using different root mean square (RMS) window lengths and overlaps on the amplitude, reliability and inter-individual variability of gluteus maximus EMGs recorded during the clam exercise, and on the statistical significance and clinical relevance of amplitude differences between two exercise conditions. Mean and peak RMS of 10 repetitions from 17 participants were obtained using processing window lengths of 0.01, 0.15, 0.2, 0.25 and 1 s, with no overlap and overlaps of 25, 50 and 75% of window length. The effect of manipulating window length on reliability and inter-individual variability was greater for peak EMG (coefficient of variation [CV] <9%) than for mean EMG (CV <3%), with the 1 s window generally displaying the lowest variability. As a consequence, neither statistical significance nor clinical relevance (effect size [ES]) of mean EMG was affected by manipulation of window length. Statistical significance of peak EMG was more sensitive to changes in window length, with lower p-values generally being recorded for the 1 s window. Because use of different window lengths has a greater effect on the variability and statistical significance of peak EMG, clinicians should use mean EMG. They should also be aware that use of different numbers of exercise repetitions and participants can have a greater effect on EMG parameters than the length of the processing window.
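The windowed-RMS processing being compared can be sketched as below; the sampling rate, signal, and window settings are assumptions for illustration, not the study's recordings:

```python
import numpy as np

def moving_rms(signal, fs, window_s, overlap_frac):
    """RMS envelope using a sliding window of window_s seconds with a
    fractional overlap (0 = no overlap, 0.75 = 75% of window length)."""
    win = max(1, int(round(window_s * fs)))
    step = max(1, int(round(win * (1.0 - overlap_frac))))
    rms = [np.sqrt(np.mean(signal[i:i + win] ** 2))
           for i in range(0, len(signal) - win + 1, step)]
    return np.asarray(rms)

fs = 1000                                   # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
# Toy "EMG": noise whose amplitude is modulated over the repetition
emg = np.random.default_rng(1).normal(0, 1, t.size) * (1 + np.sin(np.pi * t))

env = moving_rms(emg, fs, window_s=0.25, overlap_frac=0.5)
peak_rms, mean_rms = env.max(), env.mean()  # the two parameters compared
```

Changing `window_s` and `overlap_frac` changes how many windows contribute and, as the study reports, affects peak RMS far more than mean RMS.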
Aouinti, Safa; Malouche, Dhafer; Giudicelli, Véronique; Kossida, Sofia; Lefranc, Marie-Paule
2015-01-01
The adaptive immune responses of humans and of other jawed vertebrate species (gnathostomata) are characterized by the B and T cells and their specific antigen receptors, the immunoglobulins (IG) or antibodies and the T cell receptors (TR) (up to 2 × 10¹² different IG and TR per individual). IMGT, the international ImMunoGeneTics information system (http://www.imgt.org), was created in 1989 by Marie-Paule Lefranc (Montpellier University and CNRS) to manage the huge and complex diversity of these antigen receptors. IMGT, built on the IMGT-ONTOLOGY concepts of identification (keywords), description (labels), classification (gene and allele nomenclature) and numerotation (IMGT unique numbering), is at the origin of immunoinformatics, a science at the interface between immunogenetics and bioinformatics. IMGT/HighV-QUEST, the first web portal, and so far the only one, for the next generation sequencing (NGS) analysis of IG and TR, is the paradigm for immune repertoire standardized outputs and immunoprofiles of the adaptive immune responses. It provides the identification of the variable (V), diversity (D) and joining (J) genes and alleles, analysis of the V-(D)-J junction and complementarity determining region 3 (CDR3), and the characterization of the 'IMGT clonotype (AA)' (AA for amino acid) diversity and expression. IMGT/HighV-QUEST compares outputs of different batches, up to one million nucleotide sequences for the statistical module. These high throughput IG and TR repertoire immunoprofiles are of prime importance in vaccination, cancer, infectious diseases, autoimmunity and lymphoproliferative disorders; however, their comparative statistical analysis still remains a challenge. We present a standardized statistical procedure to analyze IMGT/HighV-QUEST outputs for the evaluation of the significance of the IMGT clonotype (AA) diversity differences in proportions, per gene of a given group, between NGS IG and TR repertoire immunoprofiles. The procedure is generic and
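The statistical question, whether a clonotype proportion differs between two repertoires, can be sketched with a classical two-proportion z-test; the gene name and counts below are hypothetical, and the paper's own procedure is a more elaborate standardized pipeline:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions x1/n1 vs x2/n2
    (e.g., clonotypes assigned to one V gene in two repertoires)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) # standard error
    z = (p1 - p2) / se
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))   # two-sided tail
    return z, p_value

# Hypothetical: a V gene seen in 1200 of 100,000 clonotypes in one
# repertoire vs 900 of 100,000 in another
z, p = two_proportion_z(1200, 100_000, 900, 100_000)
```

With repertoires this deep, even a 0.3-percentage-point shift is highly significant, which is why multiple-testing control across genes matters in practice.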
Best, R; Harrell, A; Geesey, C; Libby, B; Wijesooriya, K
2014-06-15
Purpose: The purpose of this study is to inter-compare and find statistically significant differences between flattened field fixed-beam (FB) IMRT and flattening-filter free (FFF) volumetric modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT). Methods: SBRT plans using FB IMRT and FFF VMAT were generated for fifteen SBRT lung patients using 6 MV beams. For each patient, both IMRT and VMAT plans were created for comparison. Plans were generated utilizing RTOG 0915 (peripheral, 10 patients) and RTOG 0813 (medial, 5 patients) lung protocols. Target dose, critical structure dose, and treatment time were compared and tested for statistical significance. Parameters of interest included prescription isodose surface coverage, target dose heterogeneity, high dose spillage (location and volume), low dose spillage (location and volume), lung dose spillage, and critical structure maximum- and volumetric-dose limits. Results: For all criteria, we found equivalent or higher conformality with VMAT plans as well as reduced critical structure doses. Several differences passed a Student's t-test of significance: VMAT reduced the high dose spillage, evaluated with conformality index (CI), by an average of 9.4%±15.1% (p=0.030) compared to IMRT. VMAT plans reduced the lung volume receiving 20 Gy by 16.2%±15.0% (p=0.016) compared with IMRT. For the RTOG 0915 peripheral lesions, the volumes of lung receiving 12.4 Gy and 11.6 Gy were reduced by 27.0%±13.8% and 27.5%±12.6% (for both, p<0.001) in VMAT plans. Of the 26 protocol pass/fail criteria, VMAT plans were able to achieve an average of 0.2±0.7 (p=0.026) more constraints than the IMRT plans. Conclusions: FFF VMAT has dosimetric advantages over fixed beam IMRT for lung SBRT. Significant advantages included increased dose conformity and reduced organs-at-risk doses. The overall improvements in terms of protocol pass/fail criteria were more modest and will require more patient data to establish difference
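Because each patient receives both an IMRT and a VMAT plan, the natural significance test is a paired Student's t-test across patients. A hedged sketch, using simulated conformality-index values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical conformality-index values for 15 patients, paired by
# patient; VMAT is simulated as modestly lower (better) than IMRT
rng = np.random.default_rng(7)
ci_imrt = rng.normal(1.10, 0.05, 15)               # fixed-beam IMRT plans
ci_vmat = ci_imrt - rng.normal(0.08, 0.03, 15)     # FFF VMAT plans

t_stat, p = stats.ttest_rel(ci_imrt, ci_vmat)      # paired t-test
significant = p < 0.05
```

Pairing removes between-patient variation in lesion size and location, which is what gives the study power with only fifteen patients.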
Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M
2015-03-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their wider adoption might have been hampered by methodological challenges, such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method for analyzing univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method.
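The underlying phase-comparison idea, here transposed from SPSS to Python as a simplified sketch, is to contrast symptom scores between the baseline (A) and intervention (B) phases; the scores and clinical cutoff below are invented, and the article's procedure additionally handles autocorrelation between sessions:

```python
import numpy as np
from scipy import stats

# Hypothetical weekly anxiety scores for one client (higher = worse)
baseline = np.array([42, 40, 44, 41, 43, 45, 42, 44])        # phase A
intervention = np.array([38, 35, 33, 30, 29, 27, 26, 24])    # phase B

# Simplest phase comparison: two-sample t-test between phases
t, p = stats.ttest_ind(baseline, intervention)

# Clinical significance: do the final scores fall below an assumed
# clinical cutoff (30 here)?
clinically_improved = intervention[-3:].mean() < 30
```

Statistical and clinical significance are assessed separately: a reliable score drop may still leave the client above the clinical range, or vice versa.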
NASA Astrophysics Data System (ADS)
Zhu, Lili; Wu, Xinhu; Zhao, Gaiqing; Wang, Xiaobo
2016-02-01
A new antiwear additive, Bisphenol AF bis(diphenyl phosphate) (BAFDP), was synthesized and characterized. The tribological behaviors of the additive for polyalkylene glycol (PAG) and polyurea grease (PG) application in steel/steel contacts were evaluated on an Optimol SRV-IV oscillating reciprocating friction and wear tester at elevated temperature. The results revealed that BAFDP could drastically reduce friction and wear of sliding pairs in both PAG and PG at 100 °C. The tribological properties of BAFDP are superior to the normally used zinc dialkyldithiophosphate-based additive package (ZDDP) in PAG and PG. Moreover, BAFDP as an additive for PAG and PG displays significant tribological properties in temperature-ramp tests, performing well at 50-300 °C, indicating the excellent high-temperature friction reduction and anti-wear capacity of BAFDP. XPS results showed that boundary lubrication films composed of Fe(OH)O, Fe3O4, FePO4, FeF2, FeF3, compounds containing P–O bonds, nitrogen oxides, and so forth, were formed on the worn surface, which contributed to excellent friction reduction and antiwear performance.
Jäger, Markus; Bottlender, Ronald; Strauss, Anton; Möller, Hans-Jürgen
2005-01-01
Following Kraepelin's original description of "manic-depressive insanity," the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) embodies a broad concept of affective disorders including mood-congruent and mood-incongruent psychotic features. Controversial results have been reported about the prognostic significance of psychotic symptoms in depressive disorders, challenging this broad concept of affective disorders. One hundred seventeen inpatients first hospitalized in 1980 to 1982 who retrospectively fulfilled the DSM-IV criteria for depressive disorders with mood-congruent or mood-incongruent psychotic features (n = 20), nonpsychotic depressive disorders (n = 33), or schizophrenia (n = 64) were followed up 15 years after their first hospitalization. Global functioning was recorded with the Global Assessment Scale; the clinical picture at follow-up was assessed using the Hamilton Rating Scale for Depression, the Positive and Negative Syndrome Scale, and the Scale for the Assessment of Negative Symptoms. With respect to global functioning, clinical picture, and social impairment at follow-up, depressive disorders with psychotic features were similar to those without, but markedly different from schizophrenia. However, patients with psychotic depressive disorders experienced more rehospitalizations than those with nonpsychotic ones. The findings indicating low prognostic significance of psychotic symptoms in depressive disorders are in line with the broad concept of affective disorders in DSM-IV.
Cho, Yun Hee; Cho, Kyu Ran; Park, Eun Kyung; Seo, Bo Kyoung; Woo, Ok Hee; Cho, Sung Bum; Bae, Jeoung Won
2016-01-01
Background: In preoperative assessment of breast cancer, MRI has been shown to identify more additional breast lesions than are detectable using conventional imaging techniques. The characterization of additional lesions is more important than detection for optimal surgical treatment. Additional breast lesions can present as focus, mass, or non-mass enhancement (NME) on MRI. Although the fifth edition of the Breast Imaging Reporting and Data System (BI-RADS®) includes several changes in the NME descriptors, few studies to date have evaluated NME in the preoperative assessment of breast cancer. Objectives: We investigated the diagnostic accuracy of BI-RADS descriptors in predicting malignancy for additional NME lesions detected on preoperative 3T dynamic contrast-enhanced MRI (DCE-MRI) in patients with newly diagnosed breast cancer. Patients and Methods: Between January 2008 and December 2012, 88 patients were enrolled in our study, all with NME lesions other than the index cancer on preoperative 3T DCE-MRI and all with accompanying histopathologic examination. The MRI findings were analyzed according to the BI-RADS MRI lexicon. We evaluated the size, distribution, internal enhancement pattern, and location of NME lesions relative to the index cancer (i.e., same quadrant, different quadrant, or contralateral breast). Results: On histopathologic analysis of the 88 NME lesions, 73 (83%) were malignant and 15 (17%) were benign. Lesion size did not differ significantly between malignant and benign lesions (P = 0.410). Malignancy was more frequent in linear (P = 0.005) and segmental (P = 0.011) distributions, and benignity was more frequent in focal (P = 0.004) and regional (P < 0.001) NME lesions. The highest positive predictive values (PPV) for malignancy occurred in segmental (96.8%), linear (95.1%), clustered ring (100%), and clumped (92.0%) enhancement. Asymmetry demonstrated a high positive predictive value of 85.9%. The frequency of malignancy was higher
Minor changes in the indicator used to measure fine PM, which cause only modest changes in mass concentrations, can lead to dramatic changes in the statistical relationship of fine PM mass with cardiovascular mortality. An epidemiologic study in Phoenix (Mar et al., 2000), augme...
Hunt, N C; Ghosh, K M; Blain, A P; Rushton, S P; Longstaff, L M; Deehan, D J
2015-05-01
The aim of this study was to compare the maximum laxity conferred by the cruciate-retaining (CR) and posterior-stabilised (PS) Triathlon single-radius total knee arthroplasty (TKA) for anterior drawer, varus-valgus opening and rotation in eight cadaver knees through a defined arc of flexion (0° to 110°). The null hypothesis was that the limits of laxity of CR- and PS-TKAs are not significantly different. The investigation was undertaken in eight loaded cadaver knees undergoing subjective stress testing using a measurement rig. Firstly, the native knee was tested prior to preparation for CR-TKA and subsequently for PS-TKA implantation. Surgical navigation was used to track maximal displacements/rotations at 0°, 30°, 60°, 90° and 110° of flexion. Mixed-effects modelling was used to define the behaviour of the TKAs. The laxity measured for the CR- and PS-TKAs revealed no statistically significant differences over the studied flexion arc for the two versions of TKA. Compared with the native knee, both TKAs exhibited slightly increased anterior drawer and decreased varus-valgus and internal-external rotational laxities. We believe further study is required to define the clinical states for which the additional constraint offered by a PS-TKA implant may be beneficial.
NASA Astrophysics Data System (ADS)
Temme, F. P.
1992-12-01
Realisation of the invariance properties of the p ⩽ 2 number partitional inventory components of the 20-fold spin algebra associated with [A]20 nuclear spin clusters under SU2 × L20 allows the mappings {[λ] → Γ} to be derived. In addition, recent general inner tensor product expressions under Ln, for n even (odd), also facilitate the evaluation of many higher [λ](L20; p = 3) correlative mappings onto SU3↓SO(3) × L20↓T∧A5 subduced symmetry from SU2 duality, thus providing results that determine the nature of adapted NMR bases for both dodecahedrane and its d20 analogue. The significance of this work lies in the pertinence of nuclear spin statistics to both selective MQ-NMR and to other spectroscopic aspects of cage clusters, e.g., [13C]n, n = 20, 60, fullerenes. Mappings onto Ln irrep sets of specific p ⩽ 3 number partitions arise in the combinatorial treatment of {Mi ti} Rota fields, defining scalar invariants in the context of Cayley algebra. Inclusion of the Ln group in the specific Racah chain for NMR symmetry gives rise to significant further physical insight.
NASA Astrophysics Data System (ADS)
Topal, Uğur; Aksan, Mehmet Ali
2016-05-01
Magnetite nanoparticles (MNPs) are extensively investigated for biomedical applications, particularly as contrast agents for magnetic resonance imaging and as drug delivery agents and heat mediators for cancer therapy. Tuning the magnetic properties of magnetite nanoparticles by doping with foreign atoms is of crucial importance in determining the application areas of these materials and thus attracts much interest. On the other hand, doping with foreign atoms requires high-temperature annealing, which causes a phase transition to the hematite phase above 400 °C. In this work the phase transition temperature from the magnetite to the hematite phase has been increased by 200 °C, the highest enhancement reported in the literature. This was achieved by addition of appropriate amounts of B2O3. Our experiments indicate that addition of 5.0 wt% B2O3 stabilizes the single-phase magnetite up to 600 °C.
Ding, Shipeng; Liu, Fudong; Shi, Xiaoyan; Liu, Kuo; Lian, Zhihua; Xie, Lijuan; He, Hong
2015-05-13
A novel Mo-promoted Ce-Zr mixed oxide catalyst prepared by a homogeneous precipitation method was used for the selective catalytic reduction (SCR) of NO(x) with NH3. The optimal catalyst showed high NH3-SCR activity, SO2/H2O durability, and thermal stability under the test conditions. The addition of Mo inhibited the growth of CeO2 particles, improved the redox ability, and increased the surface acidity, especially the Lewis acidity, all of which were favorable for the excellent NH3-SCR performance. It is believed that the catalyst is promising for the removal of NO(x) from diesel engine exhaust.
Tachibana, Takayoshi; Tanaka, Masatsugu; Hagihara, Maki; Kawasaki, Rika; Yamazaki, Etsuko; Koharazawa, Hideyuki; Taguchi, Jun; Tomita, Naoto; Fujimaki, Katsumichi; Sakai, Rika; Fujita, Hiroyuki; Fujisawa, Shin; Maruta, Atsuo; Ishigatsubo, Yoshiaki; Kanamori, Heiwa
2015-10-01
A multicenter retrospective study was performed to determine the significance of adding cytarabine (CA) or thiotepa (TT) to total body irradiation (TBI) and cyclophosphamide (CY). A total of 322 patients who underwent allogeneic hematopoietic cell transplantation (HCT) were distributed among three groups: TBI/CY (n = 75), TBI/CY/CA (n = 77), and TBI/CY/TT (n = 170). In the TBI/CY/TT group, 164 patients (96%) received HCT during the earlier period (2000-2005). Multivariate analysis revealed that the TBI/CY/TT group demonstrated a trend toward a poorer survival rate than the TBI/CY group [hazard ratio (HR) = 1.49, 95% confidence interval (CI) 0.99-2.24, P = 0.055], with a higher non-relapse mortality (NRM) rate (HR = 2.34, 95% CI 1.35-4.06, P = 0.002), while the TBI/CY/CA group demonstrated similar outcomes. Even in the subgroup analyses by disease type or disease risk, the outcomes with intensified conditioning regimens were not superior to those with TBI/CY. In conclusion, although the potential for significant bias has to be carefully considered, no clinical benefit of adding CA or TT to the TBI/CY regimen was demonstrated.
ERIC Educational Resources Information Center
Cicchetti, Domenic V.; Koenig, Kathy; Klin, Ami; Volkmar, Fred R.; Paul, Rhea; Sparrow, Sara
2011-01-01
The objectives of this report are: (a) to trace the theoretical roots of the concept of clinical significance, which derives from Bayesian thinking, marginal utility/diminishing returns in economics, and the "just noticeable difference" in psychophysics. These concepts were then translated into: Effect Size (ES), strength of agreement, clinical…
Cosmic statistics of statistics
NASA Astrophysics Data System (ADS)
Szapudi, István; Colombi, Stéphane; Bernardeau, Francis
1999-12-01
The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well, for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a Fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 are that
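The factorial moments that underpin this error theory have a simple counts-in-cells estimator, sketched below under the assumption of Poisson-sampled cells (for a Poisson field with mean λ, the k-th factorial moment is exactly λ^k, which makes the estimator easy to sanity-check):

```python
import numpy as np

def factorial_moment(counts, k):
    """Estimate the k-th factorial moment <N(N-1)...(N-k+1)> from
    counts-in-cells."""
    counts = np.asarray(counts, dtype=float)
    prod = np.ones_like(counts)
    for j in range(k):
        prod *= counts - j          # N, N(N-1), N(N-1)(N-2), ...
    return prod.mean()

# Poisson counts with lambda = 2 have F_2 = lambda**2 = 4 exactly
rng = np.random.default_rng(3)
cells = rng.poisson(lam=2.0, size=200000)
f2 = factorial_moment(cells, 2)
```

For a clustered (non-Poisson) galaxy field, F_2 exceeds λ², and the error theory above quantifies the scatter of such estimates in a finite catalogue.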
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
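The quoted probabilities follow from the Poisson model: the chance of at least one event over a decade is 1 − e^(−λ), where λ is the expected number of events per decade. A sketch, with decade rates chosen to be consistent with the abstract (assumed values, not the paper's exact fitted rates):

```python
import math

def prob_at_least_one(rate_per_decade):
    """P(N >= 1) for a Poisson process observed over one decade."""
    return 1.0 - math.exp(-rate_per_decade)

# Assumed decade rates: ~7 eruptions with VEI>=4, ~0.67 with VEI>=5,
# ~0.2 with VEI>=6
for vei, rate in [(4, 7.0), (5, 0.67), (6, 0.2)]:
    print(f"VEI>={vei}: {prob_at_least_one(rate):.2f}")
```

These reproduce the abstract's >99 percent, ~49 percent, and ~18 percent figures, respectively.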
Tonello, Lucio; Conway de Macario, Everly; Marino Gammazza, Antonella; Cocchi, Massimo; Gabrielli, Fabio; Zummo, Giovanni; Cappello, Francesco; Macario, Alberto J L
2015-03-01
The pathogenesis of Hashimoto's thyroiditis includes autoimmunity involving thyroid antigens, autoantibodies, and possibly cytokines. It is unclear what role Hsp60 plays, but our recent data indicate that it may contribute to pathogenesis as an autoantigen. Its role in the induction of cytokine production, pro- or anti-inflammatory, was not elucidated, except that we found that peripheral blood mononuclear cells (PBMC) from patients or from healthy controls did not respond with cytokine production upon stimulation by Hsp60 in vitro with patterns that would differentiate patients from controls with statistical significance. This "negative" outcome appeared when the data were pooled and analyzed with conventional statistical methods. We re-analyzed our data with non-conventional statistical methods based on data mining using the classification and regression tree learning algorithm and clustering methodology. The results indicate that by focusing on IFN-γ and IL-2 levels before and after Hsp60 stimulation of PBMC in each patient, it is possible to differentiate patients from controls. A major general conclusion is that when trying to identify disease markers such as levels of cytokines and Hsp60, reference to standards obtained from pooled data from many patients may be misleading. The chosen biomarker, e.g., production of IFN-γ and IL-2 by PBMC upon stimulation with Hsp60, must be assessed before and after stimulation and the results compared within each patient and analyzed with conventional and data mining statistical methods.
Suite versus composite statistics
Balsillie, J.H.; Tanner, W.F.
1999-01-01
Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
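The suite/composite distinction can be sketched numerically: suite statistics are moments of the per-sample statistics, while composite statistics pool all observations first. The three simulated samples below stand in for, e.g., granulometric size distributions (illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(5)
# Three samples of equal size, each itself a distribution of measurements
samples = [rng.normal(mu, 1.0, 500) for mu in (2.0, 3.0, 4.0)]

# Suite statistics: moments of the per-sample means
suite_means = np.array([s.mean() for s in samples])
suite_mean = suite_means.mean()
suite_std = suite_means.std(ddof=1)

# Composite statistics: moments of all observations pooled together
pooled = np.concatenate(samples)
composite_mean = pooled.mean()
composite_std = pooled.std(ddof=1)
```

With equal sample sizes the two means agree, but the composite standard deviation exceeds the suite one because it absorbs the within-sample spread as well as the between-sample spread, matching the paper's second-moment result.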
Septic tank additive impacts on microbial populations.
Pradhan, S; Hoover, M T; Clark, G H; Gumpertz, M; Wollum, A G; Cobb, C; Strock, J
2008-01-01
Environmental health specialists, other onsite wastewater professionals, scientists, and homeowners have questioned the effectiveness of septic tank additives. This paper describes an independent, third-party, field-scale research study of the effects of three liquid bacterial septic tank additives and a control (no additive) on septic tank microbial populations. Microbial populations were measured quarterly in a field study for 12 months in 48 full-size, functioning septic tanks. Bacterial populations in the 48 septic tanks were statistically analyzed with a mixed linear model. Additive effects were assessed for three septic tank maintenance levels (low, intermediate, and high). Dunnett's t-test for tank bacteria (alpha = .05) indicated that none of the treatments were significantly different, overall, from the control at the statistical level tested. In addition, the additives had no significant effects on septic tank bacterial populations at any of the septic tank maintenance levels. Additional controlled, field-based research is warranted, however, to address additional additives and experimental conditions.
Rule, Simon; Smith, Paul; Johnson, Peter W.M.; Bolam, Simon; Follows, George; Gambell, Joanne; Hillmen, Peter; Jack, Andrew; Johnson, Stephen; Kirkwood, Amy A; Kruger, Anton; Pocock, Christopher; Seymour, John F.; Toncheva, Milena; Walewski, Jan; Linch, David
2016-01-01
Mantle cell lymphoma is an incurable and generally aggressive lymphoma that is more common in elderly patients. Whilst a number of different chemotherapeutic regimens are active in this disease, there is no established gold standard therapy. Rituximab has been used widely to good effect in B-cell malignancies but there is no evidence that it improves outcomes when added to chemotherapy in this disease. We performed a randomized, open-label, multicenter study looking at the addition of rituximab to the standard chemotherapy regimen of fludarabine and cyclophosphamide in patients with newly diagnosed mantle cell lymphoma. A total of 370 patients were randomized. With a median follow up of six years, rituximab improved the median progression-free survival from 14.9 to 29.8 months (P<0.001) and overall survival from 37.0 to 44.5 months (P=0.005). This equates to absolute differences of 9.0% and 22.1% for overall and progression-free survival, respectively, at two years. Overall response rates were similar, but complete response rates were significantly higher in the rituximab arm: 52.7% vs. 39.9% (P=0.014). There was no clinically significant additional toxicity observed with the addition of rituximab. Overall, approximately 18% of patients died of non-lymphomatous causes, most commonly infections. The addition of rituximab to fludarabine and cyclophosphamide chemotherapy significantly improves outcomes in patients with mantle cell lymphoma. However, these regimens have significant late toxicity and should be used with caution. This trial has been registered (ISRCTN81133184 and clinicaltrials.gov:00641095) and is supported by the UK National Cancer Research Network. PMID:26611473
Grossling, Bernardo F.
1975-01-01
Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. The two above-mentioned models then occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projection of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then
Titanic: A Statistical Exploration.
ERIC Educational Resources Information Center
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
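The classroom activity described rests on the Pearson chi-square test of independence for categorical data. A minimal sketch with a 2x2 table of survival by sex; the counts below are illustrative placeholders, not the actual Titanic passenger data, and the degrees-of-freedom-one p-value uses the closed form P(X > x) = erfc(sqrt(x/2)):

```python
import math

# Hypothetical 2x2 table: rows = sex, columns = (survived, died).
# Counts are illustrative only, not the real Titanic passenger data.
table = [[300, 120],   # female
         [140, 700]]   # male

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-square statistic against the independence hypothesis
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
p_value = math.erfc(math.sqrt(chi2 / 2))
```

A table this lopsided gives an enormous statistic and a vanishingly small p-value, mirroring the strong association students find in the real data.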
Smith, Alwyn
1969-01-01
This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
ERIC Educational Resources Information Center
Huberty, Carl J.
An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…
Spencer, Michael
1974-01-01
Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
... cancer statistics across the world. U.S. Cancer Mortality Trends The best indicator of progress against cancer is ... the number of cancer survivors has increased. These trends show that progress is being made against the ...
NASA Astrophysics Data System (ADS)
Hermann, Claudine
Statistical physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection otherwise impossible to establish because of the enormous magnitude of Avogadro's number. Numerous systems of today's key technologies, such as semiconductors or lasers, are macroscopic quantum objects; only statistical physics allows for an understanding of their fundamentals. This graduate text therefore also focuses on particular applications, such as the properties of electrons in solids, radiation thermodynamics, and the greenhouse effect.
Significant lexical relationships
Pedersen, T.; Kayaalp, M.; Bruce, R.
1996-12-31
Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
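An exact conditional test for a 2x2 contingency table conditions on the margins and sums hypergeometric probabilities, avoiding the large-sample approximations that rare NLP events violate. Fisher's exact test is the standard test of this kind; the sketch below (illustrative bigram counts, not the paper's data or software) implements its two-sided form with `math.comb`:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the table [[a, b], [c, d]].

    Conditions on the margins and sums the hypergeometric probabilities
    of all tables as extreme as (or more extreme than) the observed one.
    """
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    denom = comb(n, c1)
    def prob(x):                      # P(first cell = x | fixed margins)
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: two words seen together vs. apart in two corpora
p = fisher_exact_2x2(8, 2, 1, 9)
```

Because every admissible table is enumerated, the p-value is exact even when expected cell counts are far too small for a chi-square approximation.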
da Silva, Diana Dias; Silva, Elisabete; Carvalho, Félix; Carmo, Helena
2014-06-01
Hepatic injury after 3,4-methylenedioxymethamphetamine (MDMA; ecstasy) intoxication is highly unpredictable and does not seem to correlate with either dosage or frequency of use. The mechanisms involved include the drug's metabolic bioactivation and the hyperthermic state of the liver, triggered by the drug's thermogenic action and exacerbated by the environmental circumstances of abuse at hot and crowded venues. We became interested in understanding the interaction between ecstasy and the metabolites generated in vivo, as users are always exposed to mixtures of parent drug and metabolites. With this purpose, Hep G2 cells were incubated with MDMA and its main human metabolites methylenedioxyamphetamine (MDA), α-methyldopamine (α-MeDA) and N-methyl-α-methyldopamine (N-Me-α-MeDA), individually and in mixture (drugs combined in proportion to their individual EC01), at normal (37 °C) and hyperthermic (40.5 °C) conditions. After 48 h, viability was assessed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay. Extensive concentration-response analysis was performed with the single drugs, and the parameters of the individual non-linear logit fits were used to predict joint effects using the well-founded models of concentration addition (CA) and independent action (IA). Experimental testing revealed that mixture effects on cell viability conformed to CA for both temperature settings. Additionally, substantial combination effects were attained even when each substance was present at concentrations that individually produced unnoticeable effects. Hyperthermic incubation dramatically increased the toxicity of the tested drug and metabolites, both individually and combined. These outcomes suggest that MDMA metabolism has hazard implications for liver cells even when metabolites are found at low concentrations, as they contribute additively to the overall toxic effect of MDMA.
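The concentration addition (Loewe additivity) model used here predicts the mixture concentration producing a given effect level x from the components' individual effect concentrations: EC_x,mix = (sum_i p_i / EC_x,i)^-1, where p_i is each component's fraction in the mixture. A minimal numeric sketch; the EC values and fractions are invented for illustration and are not the study's fitted parameters:

```python
# Concentration addition (Loewe additivity): the mixture concentration
# expected to produce effect level x, given each component's individual
# EC_x and its fraction p_i in the mixture. Values are illustrative only.

def ca_mixture_ec(fractions, ecs):
    """EC_x of the mixture under concentration addition."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ecs))

# Four components (e.g., parent drug + three metabolites), equal fractions
fractions = [0.25, 0.25, 0.25, 0.25]
ecs = [100.0, 50.0, 200.0, 80.0]      # hypothetical individual EC50s (uM)
ec50_mix = ca_mixture_ec(fractions, ecs)
```

Because every component contributes to the reciprocal sum, each can sit below its own effect threshold while the mixture still reaches the effect level, which is exactly the "combination effects at individually unnoticeable concentrations" the abstract reports.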
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
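The diagnostic-accuracy measures the review introduces all derive from a 2x2 confusion matrix of test result against disease status. A sketch with hypothetical counts (not from the article):

```python
# Diagnostic test accuracy from a 2x2 confusion matrix (counts hypothetical)
tp, fn = 90, 10    # diseased subjects:  test positive / test negative
fp, tn = 30, 170   # healthy subjects:   test positive / test negative

sensitivity = tp / (tp + fn)               # P(test+ | disease)
specificity = tn / (tn + fp)               # P(test- | no disease)
accuracy = (tp + tn) / (tp + fn + fp + tn)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
```

The likelihood ratios connect the test to conditional probability as the review describes: a positive result multiplies the pre-test odds of disease by `lr_pos`.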
Han, Yali; Liu, Jie; Sun, Meili; Zhang, Zongpu; Liu, Chuanyong; Sun, Yuping
2016-01-01
Background. There is no definitive conclusion so far on the predictive values of ERCC1 polymorphisms for clinical outcomes of platinum-based chemotherapy in non-small cell lung cancer (NSCLC). We updated this meta-analysis with an expectation to obtain some statistical advancement on this issue. Methods. Relevant studies were identified by searching MEDLINE, EMBASE databases from inception to April 2015. Primary outcomes included objective response rate (ORR), progression-free survival (PFS), and overall survival (OS). All analyses were performed using the Review Manager version 5.3 and the Stata version 12.0. Results. A total of 33 studies including 5373 patients were identified. ERCC1 C118T and C8092A could predict both ORR and OS for platinum-based chemotherapy in Asian NSCLC patients (CT + TT versus CC, ORR: OR = 0.80, 95% CI = 0.67–0.94; OS: HR = 1.24, 95% CI = 1.01–1.53) (CA + AA versus CC, ORR: OR = 0.76, 95% CI = 0.60–0.96; OS: HR = 1.37, 95% CI = 1.06–1.75). Conclusions. Current evidence strongly indicated the prospect of ERCC1 C118T and C8092A as predictive biomarkers for platinum-based chemotherapy in Asian NSCLC patients. However, the results should be interpreted with caution and large prospective studies are still required to further investigate these findings. PMID:27057082
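Meta-analytic pooling of odds ratios of the kind reported here is conventionally done on the log scale with inverse-variance weights (fixed-effect model). A hedged sketch; the per-study ORs and standard errors below are invented for illustration, not the 33 included studies:

```python
import math

# Fixed-effect inverse-variance pooling of odds ratios on the log scale.
# Study values below are illustrative, not the meta-analysis data.
ors = [0.72, 0.85, 0.78]          # per-study odds ratios
ses = [0.10, 0.15, 0.12]          # standard errors of log(OR)

weights = [1 / se**2 for se in ses]
log_pooled = sum(w * math.log(o) for w, o in zip(weights, ors)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
pooled_or = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),   # 95% confidence interval
      math.exp(log_pooled + 1.96 * se_pooled))
```

Working on the log scale keeps the sampling distribution approximately normal; the pooled OR and its interval are then exponentiated back, which is how summary estimates like "OR = 0.80, 95% CI = 0.67 to 0.94" are produced.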
Harrup, Mason K; Rollins, Harry W
2013-11-26
An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin of a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.
Rudolf Keller
2004-08-10
In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.
1986-01-01
Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831
SHARE: Statistical hadronization with resonances
NASA Astrophysics Data System (ADS)
Torrieri, G.; Steinke, S.; Broniowski, W.; Florkowski, W.; Letessier, J.; Rafelski, J.
2005-05-01
errors are independent, since the systematic error is not a random variable). Aside from χ, the program also calculates the statistical significance [2], defined as the probability that, given a "true" theory and a statistical (Gaussian) experimental error, the fitted χ assumes values at or above the considered value. If the best fit has a statistical significance well below unity, the model under consideration is very likely inappropriate. In the limit of many degrees of freedom (N), the statistical significance function depends only on χ/N, with 90% statistical significance at χ/N ≈ 1 and falling steeply for χ/N > 1. However, the degrees of freedom in fits involving ratios are generally not sufficient to reach the asymptotic limit. Hence, statistical significance depends strongly on χ and N separately. In particular, if N < 20, a fit often requires χ/N significantly less than 1 to have acceptable statistical significance. The fit routine does not always find the true lowest χ minimum. Specifically, multi-parameter fits with too few degrees of freedom generally exhibit a non-trivial structure in parameter space, with several secondary minima, saddle points, valleys, etc. To help the user perform the minimization effectively, we have added tools to compute the χ contours and profiles. In addition, our program's flexibility allows for many strategies in performing the fit. It is therefore possible, by following the techniques described in Section 3.7, to scan the parameter space and ensure that the minimum found is the true one. Further systematic deviations between the model and experiment can be recognized via the program's output, which includes a particle-by-particle comparison between experiment and theory. Additional comments: In consideration of the wide stream of new data coming out of RHIC, there is ongoing activity, with several groups performing analyses of particle yields. It is our hope that SHARE will allow to
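The significance described here is the chi-square survival function evaluated at the fitted statistic with N degrees of freedom; for even N it has the closed form exp(-x/2) times the partial sum of (x/2)^i / i! for i < N/2, which makes the separate dependence on the statistic and on N easy to demonstrate. The sketch below is an independent illustration, not the SHARE code:

```python
import math

def chi2_sf(x, dof):
    """P(X >= x) for a chi-square variable; closed form for even dof."""
    assert dof % 2 == 0 and dof > 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, dof // 2):
        term *= half / i          # (x/2)^i / i! built up incrementally
        total += term
    return math.exp(-half) * total

# The same chi2/N ratio yields very different significance at small vs. large N
p_small = chi2_sf(6.0, 4)     # chi2/N = 1.5 with N = 4
p_large = chi2_sf(60.0, 40)   # chi2/N = 1.5 with N = 40
```

With few degrees of freedom the same ratio is unremarkable, while at large N it is strongly excluded, which is the abstract's point that significance depends on χ and N separately below the asymptotic regime.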
ERIC Educational Resources Information Center
Peterson, Lisa S.
2008-01-01
Clinical significance is an important concept in research, particularly in education and the social sciences. The present article first compares clinical significance to other measures of "significance" in statistics. The major methods used to determine clinical significance are explained and the strengths and weaknesses of clinical significance…
Worry, Intolerance of Uncertainty, and Statistics Anxiety
ERIC Educational Resources Information Center
Williams, Amanda S.
2013-01-01
Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…
Intervention for Maltreating Fathers: Statistically and Clinically Significant Change
ERIC Educational Resources Information Center
Scott, Katreena L.; Lishak, Vicky
2012-01-01
Objective: Fathers are seldom the focus of efforts to address child maltreatment and little is currently known about the effectiveness of intervention for this population. To address this gap, we examined the efficacy of a community-based group treatment program for fathers who had abused or neglected their children or exposed their children to…
Candidate Assembly Statistical Evaluation
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted, it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Statistical concepts in metrology with a postscript on statistical graphics
NASA Astrophysics Data System (ADS)
Ku, Harry H.
1988-08-01
Statistical Concepts in Metrology was originally written as Chapter 2 for the Handbook of Industrial Metrology published by the American Society of Tool and Manufacturing Engineers, 1967. It was reprinted as one of 40 papers in NBS Special Publication 300, Volume 1, Precision Measurement and Calibration; Statistical Concepts and Procedures, 1969. Since then this chapter has been used as basic text in statistics in Bureau-sponsored courses and seminars, including those for Electricity, Electronics, and Analytical Chemistry. While concepts and techniques introduced in the original chapter remain valid and appropriate, some additions on recent development of graphical methods for the treatment of data would be useful. Graphical methods can be used effectively to explore information in data sets prior to the application of classical statistical procedures. For this reason additional sections on statistical graphics are added as a postscript.
Bell, Graham
2016-01-01
In this experiment, the authors were interested in testing the effect of a small-molecule inhibitor on the ratio of males and females in the offspring of their model Dipteran species. The authors report that in a wild-type population, ~50% of offspring are male. They then test the effect of treating females with the chemical, which they think might affect the male:female ratio compared with the untreated group. They claim that there is a statistically significant increase in the percentage of males produced and conclude that the drug affects sex ratios. PMID:27338560
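A claim of a significant shift away from a 50:50 sex ratio would typically rest on an exact binomial test against the null proportion 0.5. A sketch with hypothetical offspring counts (the vignette does not give the actual numbers):

```python
from math import comb

def binom_two_sided(k, n, p0=0.5):
    """Two-sided exact binomial test: sums the probabilities of all
    outcomes no more likely than the observed count k under p0."""
    def pmf(i):
        return comb(n, i) * p0**i * (1 - p0)**(n - i)
    p_obs = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_obs + 1e-12)

# Hypothetical counts: 64 males out of 100 offspring in the treated group
p = binom_two_sided(64, 100)
```

With these invented counts the two-sided p-value falls below 0.05; whether the authors' conclusion holds depends, as always, on their actual counts and on biological significance, not the p-value alone.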
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Lubricant and additive effects on spur gear fatigue life
NASA Technical Reports Server (NTRS)
Townsend, D. P.; Zaretsky, E. V.; Scibbe, H. W.
1985-01-01
Spur gear endurance tests were conducted with six lubricants using a single lot of consumable-electrode vacuum melted (CVM) AISI 9310 spur gears. The sixth lubricant was divided into four batches each of which had a different additive content. Lubricants tested with a phosphorus-type load carrying additive showed a statistically significant improvement in life over lubricants without this type of additive. The presence of sulfur type antiwear additives in the lubricant did not appear to affect the surface fatigue life of the gears. No statistical difference in life was produced with those lubricants of different base stocks but with similar viscosity, pressure-viscosity coefficients and antiwear additives. Gears tested with a 0.1 wt % sulfur and 0.1 wt % phosphorus EP additives in the lubricant had reactive films that were 200 to 400 (0.8 to 1.6 microns) thick.
Lubricant and additive effects on spur gear fatigue life
NASA Technical Reports Server (NTRS)
Townsend, D. P.; Zaretsky, E. V.; Scibbe, H. W.
1986-01-01
Spur gear endurance tests were conducted with six lubricants using a single lot of consumable-electrode vacuum melted (CVM) AISI 9310 spur gears. The sixth lubricant was divided into four batches, each of which had a different additive content. Lubricants tested with a phosphorus-type load-carrying additive showed a statistically significant improvement in life over lubricants without this type of additive. The presence of sulfur-type antiwear additives in the lubricant did not appear to affect the surface fatigue life of the gears. No statistical difference in life was produced with those lubricants of different base stocks but with similar viscosity, pressure-viscosity coefficients, and antiwear additives. Gears tested with 0.1 wt pct sulfur and 0.1 wt pct phosphorus EP additives in the lubricant had reactive films that were 200 to 400 (0.8 to 1.6 microns) thick.
Cosmetic Plastic Surgery Statistics
2014 Cosmetic Plastic Surgery Statistics Cosmetic Procedure Trends 2014 Plastic Surgery Statistics Report Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...
DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT
Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...
Statistics Anxiety among Postgraduate Students
ERIC Educational Resources Information Center
Koh, Denise; Zawi, Mohd Khairi
2014-01-01
Most postgraduate programmes that have research components require students to take at least one course in research statistics. Not all postgraduate programmes are science based; a significant number of postgraduate students from the social sciences will also be taking statistics courses as they try to complete their…
NASA Technical Reports Server (NTRS)
1995-01-01
NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1994-01-01
Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1996-01-01
This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
Impaired Statistical Learning in Developmental Dyslexia
Thiessen, Erik D.; Holt, Lori L.
2015-01-01
Purpose Developmental dyslexia (DD) is commonly thought to arise from phonological impairments. However, an emerging perspective is that a more general procedural learning deficit, not specific to phonological processing, may underlie DD. The current study examined if individuals with DD are capable of extracting statistical regularities across sequences of passively experienced speech and nonspeech sounds. Such statistical learning is believed to be domain-general, to draw upon procedural learning systems, and to relate to language outcomes. Method DD and control groups were familiarized with a continuous stream of syllables or sine-wave tones, the ordering of which was defined by high or low transitional probabilities across adjacent stimulus pairs. Participants subsequently judged two 3-stimulus test items with either high or low statistical coherence as being the most similar to the sounds heard during familiarization. Results As with control participants, the DD group was sensitive to the transitional probability structure of the familiarization materials as evidenced by above-chance performance. However, the performance of participants with DD was significantly poorer than controls across linguistic and nonlinguistic stimuli. In addition, reading-related measures were significantly correlated with statistical learning performance of both speech and nonspeech material. Conclusion Results are discussed in light of procedural learning impairments among participants with DD. PMID:25860795
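The transitional-probability structure that defines the familiarization streams in this study can be sketched in a few lines. The syllable inventory and "words" below are hypothetical stand-ins, not the study's actual materials: within-word transitions are high-probability, word-boundary transitions low.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) from adjacent pairs in a stimulus stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

random.seed(0)
words = [["pa", "bi"], ["ku", "do"], ["ti", "la"]]  # hypothetical "words"
stream = [syl for _ in range(200) for syl in random.choice(words)]

tp = transitional_probabilities(stream)
print(tp[("pa", "bi")])           # within-word transition: 1.0
print(tp.get(("bi", "ku"), 0.0))  # word-boundary transition: roughly 1/3
```

Above-chance performance on the test items amounts to distinguishing the high-probability pairs from the low-probability ones in a table like `tp`.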
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2011-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741
Nursing student attitudes toward statistics.
Mathew, Lizy; Aktan, Nadine M
2014-04-01
Nursing is guided by evidence-based practice. To understand and apply research to practice, nurses must be knowledgeable in statistics; therefore, it is crucial to promote a positive attitude toward statistics among nursing students. The purpose of this quantitative cross-sectional study was to assess differences in attitudes toward statistics among undergraduate nursing, graduate nursing, and undergraduate non-nursing students. The Survey of Attitudes Toward Statistics Scale-36 (SATS-36) was used to measure student attitudes, with higher scores denoting more positive attitudes. The convenience sample was composed of 175 students from a public university in the northeastern United States. Statistically significant relationships were found among some of the key demographic variables. Graduate nursing students had a significantly lower score on the SATS-36, compared with baccalaureate nursing and non-nursing students. Therefore, an innovative nursing curriculum that incorporates knowledge of student attitudes and key demographic variables may result in favorable outcomes.
Predict! Teaching Statistics Using Informational Statistical Inference
ERIC Educational Resources Information Center
Makar, Katie
2013-01-01
Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…
Additive Manufacturing Infrared Inspection
NASA Technical Reports Server (NTRS)
Gaddy, Darrell
2014-01-01
Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastic, ceramics, and metallics. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using metallic additive manufacturing compared with traditional techniques. Development of the metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques to inspect parts made from these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist, such as Coordinate Measurement Machines (CMM), Laser Scanners, Structured Light Scanning Systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours or must use a contact probe to inspect limited internal dimensions. This presentation will document the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using Infrared camera imaging and processing techniques.
Anaerobic sludge digestion with a biocatalytic additive
Ghosh, S.; Henry, M.P.; Fedde, P.A.
1982-01-01
The objective of this research was to evaluate the effects of a lactobacillus additive on anaerobic sludge digestion under normal, variable, and overload operating conditions. The additive was a whey fermentation product of an acid-tolerant strain of Lactobacillus acidophilus fortified with CaCO3, (NH4)2HPO4, ferrous lactate, and lactic acid. The lactobacillus additive is multifunctional in nature and provides growth factors, metabolic intermediates, and enzymes needed for substrate degradation and cellular synthesis. The experimental work consisted of several pairs of parallel mesophilic (35°C) digestion runs (control and test) conducted in five experimental phases. Baseline runs without the additive showed that the two experimental digesters had the same methane content, gas production rate (GPR), and methane yield. The effect of the additive was to increase methane yield and GPR by about 5% (which was statistically significant) during digester operation at a loading rate (LR) of 3.2 kg VS/m³-day and a hydraulic retention time (HRT) of 14 days. Data collected from the various experimental phases showed that the biochemical additive increased methane yield, gas production rate, and VS reduction, and decreased volatile acids accumulation. In addition, it enhanced digester buffer capacity and improved the fertilizer value and dewatering characteristics of the digested residue.
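A roughly 5% yield gain from paired control/test digester runs, as reported above, can be judged with a paired t statistic. The yield figures below are hypothetical, chosen only to mimic a ~5% paired difference; they are not the study's data.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical paired methane yields (m^3 CH4 / kg VS added) from parallel
# control and additive-fed digester runs.
control  = [0.310, 0.298, 0.305, 0.292, 0.301, 0.307, 0.295, 0.303]
additive = [0.325, 0.312, 0.321, 0.309, 0.315, 0.322, 0.308, 0.318]

diffs = [a - c for a, c in zip(additive, control)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic

print(round(mean(diffs) / mean(control) * 100, 1), "% mean increase")
print(round(t, 2), "paired t, df =", len(diffs) - 1)
```

With a t value well above the 5% critical value for 7 degrees of freedom (about 2.36), even a small mean difference is statistically significant when the paired runs track each other closely.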
Common misconceptions about data analysis and statistics.
Motulsky, Harvey J
2015-02-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood. PMID:25692012
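The "P-hacking" mistake in point (1), reanalyzing or adding replicates until significance appears, is easy to demonstrate by simulation. The sketch below is an illustrative simplification using a z test on pure-noise data: testing once at the planned sample size holds the false-positive rate near 5%, while peeking after each batch of added replicates inflates it.

```python
import random, math

random.seed(1)

def z_significant(xs):
    """Two-sided z test of mean 0 with known sigma = 1 at alpha = 0.05."""
    z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
    return abs(z) > 1.96

n_experiments = 10_000
false_pos_fixed = false_pos_peek = 0
for _ in range(n_experiments):
    data = [random.gauss(0, 1) for _ in range(50)]  # null is true
    # Honest analysis: one test at the planned sample size.
    false_pos_fixed += z_significant(data)
    # "Hacked" analysis: test after every 10 added replicates, stop on success.
    false_pos_peek += any(z_significant(data[:n]) for n in range(10, 51, 10))

print(false_pos_fixed / n_experiments)  # near the nominal 0.05
print(false_pos_peek / n_experiments)   # clearly inflated by peeking
```

The inflation occurs because each interim look is another chance for noise to cross the threshold, exactly the multiplicity problem the abstract warns about.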
Common misconceptions about data analysis and statistics.
Motulsky, Harvey J
2014-11-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood. PMID:25213136
Common misconceptions about data analysis and statistics.
Motulsky, Harvey J
2014-10-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, however, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1) P-hacking, which is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want; 2) overemphasis on P values rather than on the actual size of the observed effect; 3) overuse of statistical hypothesis testing, and being seduced by the word "significant"; and 4) over-reliance on standard errors, which are often misunderstood. PMID:25204545
Antecedents of students' achievement in statistics
NASA Astrophysics Data System (ADS)
Awaludin, Izyan Syazana; Razak, Ruzanna Ab; Harris, Hezlin; Selamat, Zarehan
2015-02-01
The applications of statistics in most fields have been vast. Many degree programmes at local universities require students to enroll in at least one statistics course. The standard of these courses varies across degree programmes because of students' diverse academic backgrounds, some of which are far from the field of statistics. The high failure rate in statistics courses among non-science-stream students has been a concern every year. The purpose of this research is to investigate the antecedents of students' achievement in statistics. A total of 272 students participated in the survey. Multiple linear regression was applied to examine the relationship between the factors and achievement. We found that statistics anxiety was a significant predictor of students' achievement. We also found that students' age has a significant effect on achievement: older students are more likely to achieve lower scores in statistics. Students' level of study also has a significant impact on their achievement in statistics.
[Comment on] Statistical discrimination
NASA Astrophysics Data System (ADS)
Chinn, Douglas
In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
Statistics for People Who (Think They) Hate Statistics. Third Edition
ERIC Educational Resources Information Center
Salkind, Neil J.
2007-01-01
This text teaches an often intimidating and difficult subject in a way that is informative, personable, and clear. The author takes students through various statistical procedures, beginning with correlation and graphical representation of data and ending with inferential techniques and analysis of variance. In addition, the text covers SPSS, and…
Statistical Modelling of Compound Floods
NASA Astrophysics Data System (ADS)
Bevacqua, Emanuele; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin; Manning, Colin
2016-04-01
of interest. This is based on real data for river discharge (Y_RIVER) and sea level (Y_SEA) from the River Têt in the south of France. The impact of the compound flood is the water level in the area between the river and sea stations, which we define here as h = αY_RIVER + (1 − α)Y_SEA. Here we show the sensitivity of the system to changes in the two physical parameters. Through variations in α we can study the system in one or two dimensions, which allows for the assessment of the risk associated with either of the two variables alone or with a combination of them. Varying instead the second parameter, i.e. the dependence between the variables Y_RIVER and Y_SEA, we show how an apparently weak dependence can increase the risk of flooding significantly with respect to the independent case. The model can be applied to future climate by inserting predictors into the statistical model as additional conditioning variables. By conditioning the simulation of the statistical model on predictors obtained for future projections from climate models, both the change in risk and the characteristics of compound floods in the future can be analysed.
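The sensitivity to the dependence parameter described above can be sketched with a Monte Carlo estimate of the exceedance probability of h = αY_RIVER + (1 − α)Y_SEA. The sketch assumes standard-normal margins and a simple Gaussian dependence, an illustrative simplification rather than the paper's fitted model.

```python
import random, math

random.seed(2)

def exceed_prob(alpha, rho, thresh=1.5, n=100_000):
    """Estimate P(h > thresh) for h = alpha*Y_river + (1-alpha)*Y_sea,
    with standard-normal margins and correlation rho between the two."""
    hits = 0
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        y_river = z1
        y_sea = rho * z1 + math.sqrt(1 - rho ** 2) * z2  # correlated normal
        hits += alpha * y_river + (1 - alpha) * y_sea > thresh
    return hits / n

p_indep = exceed_prob(alpha=0.5, rho=0.0)  # independent river and sea levels
p_dep   = exceed_prob(alpha=0.5, rho=0.4)  # apparently weak dependence
print(p_indep, p_dep)  # the dependent case exceeds the threshold more often
```

Even a modest correlation roughly doubles the exceedance probability here, which is the qualitative point the abstract makes about compound floods.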
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access) The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
Explorations in statistics: statistical facets of reproducibility.
Curran-Everett, Douglas
2016-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.
Introductory Statistics and Fish Management.
ERIC Educational Resources Information Center
Jardine, Dick
2002-01-01
Describes how fisheries research and management data (available on a website) have been incorporated into an Introductory Statistics course. In addition to the motivation gained from seeing the practical relevance of the course, some students have participated in the data collection and analysis for the New Hampshire Fish and Game Department. (MM)
Statistical Seismology and Induced Seismicity
NASA Astrophysics Data System (ADS)
Tiampo, K. F.; González, P. J.; Kazemian, J.
2014-12-01
While seismicity triggered or induced by natural resources production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several cases of energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering. Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and
[Big data in official statistics].
Zwick, Markus
2015-08-01
The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany. PMID:26077871
Ranald Macdonald and statistical inference.
Smith, Philip T
2009-05-01
Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for British Journal of Mathematical and Statistical Psychology and as a participant and organizer at the British Psychological Society's Mathematics, statistics and computing section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information was relevant in interpreting the results of significance tests led him to be a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing. PMID:19351454
Forstpointner, Roswitha; Dreyling, Martin; Repp, Roland; Hermann, Sandra; Hänel, Annette; Metzner, Bernd; Pott, Christiane; Hartmann, Frank; Rothmann, Frank; Rohrberg, Robert; Böck, Hans-Peter; Wandt, Hannes; Unterhalt, Michael; Hiddemann, Wolfgang
2004-11-15
In follicular lymphoma (FL) and mantle cell lymphoma (MCL) the monoclonal antibody rituximab may improve the prognosis when combined with chemotherapy. This was investigated in a prospective randomized study in patients with relapsed disease. A total of 147 patients were randomized to receive 4 courses of chemotherapy with 25 mg/m(2) fludarabine on days 1 to 3, 200 mg/m(2) cyclophosphamide on days 1 to 3, and 8 mg/m(2) mitoxantrone on day 1 (FCM), alone or combined with rituximab (375 mg/m(2); R-FCM). Of 128 evaluable patients, 62 were randomized for FCM and 66 for R-FCM. R-FCM revealed an overall response rate of 79% (33% complete remission [CR], 45% partial remission [PR]) as compared with 58% for FCM alone (13% CR, 45% PR; P = .01), with similar results in a subgroup analysis of FL (94% vs 70%) and MCL (58% vs 46%). In the total group, the R-FCM arm was significantly superior concerning progression-free survival (PFS; P = .0381) and overall survival (OS; P = .0030). In FL PFS was significantly longer in the R-FCM arm (P = .0139) whereas in MCL a significantly longer OS was observed (P = .0042). There were no differences in clinically relevant side effects in both study arms. Hence, the addition of rituximab to FCM chemotherapy significantly improves the outcome of relapsed or refractory FL and MCL.
Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze
2014-08-01
Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.
NASA Astrophysics Data System (ADS)
Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.
2014-12-01
Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, amount of damage, maximum runup height, a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami and a future tsunami threat to the U.S. northwest coast contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/
Rebollo-Mesa, Irene; Hudziak, James J.; Willemsen, Gonneke; Boomsma, Dorret I.
2012-01-01
Non-additive genetic influences on personality traits have been increasingly reported in adult populations. Less is known, however, with respect to younger samples. In this study, we examine additive and non-additive genetic contributions to the personality trait of extraversion in 1,689 Dutch twin pairs, 1,505 mothers and 1,637 fathers of the twins. The twins were on average 15.5 years old (range 12–18 years). To increase statistical power to detect non-additive genetic influences, data on extraversion were also collected in the parents and analyzed simultaneously. Genetic modeling procedures incorporating age as a potential modifier of heritability showed significant influences of additive (20–23%) and non-additive genetic factors (31–33%), in addition to unshared environment (46–48%), for adolescents and for their parents. The additive genetic component was slightly and positively related to age. No significant sex differences were found for either extraversion means or for the magnitude of the genetic and environmental influences. There was no evidence of non-random mating for extraversion in the parental generation. Results show that in addition to additive genetic influences, extraversion in adolescents is influenced by non-additive genetic factors. PMID:18240014
Developments in Statistical Education.
ERIC Educational Resources Information Center
Kapadia, Ramesh
1980-01-01
The current status of statistics education at the secondary level is reviewed, with particular attention focused on the various instructional programs in England. A description and preliminary evaluation of the Schools Council Project on Statistical Education is included. (MP)
Mathematical and statistical analysis
NASA Technical Reports Server (NTRS)
Houston, A. Glen
1988-01-01
The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.
ERIC Educational Resources Information Center
Bopp, Richard E.; Van Der Laan, Sharon J.
1985-01-01
Presents a search strategy for locating time-series or cross-sectional statistical data in published sources which was designed for undergraduate students who require 30 units of data for five separate variables in a statistical model. Instructional context and the broader applicability of the search strategy for general statistical research is…
ERIC Educational Resources Information Center
Strasser, Nora
2007-01-01
Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…
ERIC Educational Resources Information Center
Lenard, Christopher; McCarthy, Sally; Mills, Terence
2014-01-01
There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…
Statistical quality management
NASA Astrophysics Data System (ADS)
Vanderlaan, Paul
1992-10-01
Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.
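The SPC idea summarized above can be sketched numerically: Shewhart-style control limits place bounds at three standard deviations around the mean of an in-control baseline run, and later points outside those limits signal a process out of statistical control. A minimal sketch with invented measurements (none of the numbers come from the article):

```python
def control_limits(samples):
    """3-sigma Shewhart control limits from an in-control baseline run."""
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

# Invented baseline measurements of some product characteristic:
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lo, hi = control_limits(baseline)

# New measurements; anything outside the limits warrants investigation:
out_of_control = [x for x in [10.0, 10.1, 12.5] if not (lo <= x <= hi)]
```

Here only 12.5 falls outside the limits; a real SPC chart would also watch for runs and trends inside the limits rather than relying on inspection of individual points.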
Machtay; Glatstein
1998-01-01
have shown overall survivals superior to age-matched controls). It is fallacious and illogical to compare nonrandomized series of observation to those of aggressive therapy. In addition to the above problem, the use of DSS introduces another potential issue, which we will call the bias of cause-of-death interpretation. All statistical endpoints (e.g., response rates, local-regional control, freedom from brain metastases), except OS, are known to depend heavily on the methods used to define the endpoint and are often subject to significant interobserver variability. There is no reason to believe that this problem does not occasionally occur with respect to defining a death as due to the index cancer or to intercurrent disease, even though this issue has been poorly studied. In many oncologic situations, for example metastatic lung cancer, this form of bias does not exist. In some situations, such as head and neck cancer, this could be an intermediate problem (Was that lethal chest tumor a second primary or a metastasis? Would the fatal aspiration pneumonia have occurred if he still had a tongue? And what about Mr. B., described above?). In some situations, particularly relatively "good prognosis" neoplasms, this could be a substantial problem, particularly if the adjudication of whether or not a death is cancer-related is performed solely by researchers who have an "interest" in demonstrating a good DSS. What we are most concerned about with this form of bias relates to recent series on observation, such as in early prostate cancer. It is interesting to note that although only 10% of the "observed" patients die from prostate cancer, many develop distant metastases by 10 years (approximately 40% among patients with intermediate-grade tumors). Thus, the implication that prostate cancer metastases are usually not of themselves lethal is a misconception to anyone experienced in taking care of prostate cancer patients. This is inconsistent with U.S. studies of
Golombick, Terry; Diamond, Terrence H; Manoharan, Arumugam; Ramakrishna, Rajeev
2016-06-01
Hypothesis: Prior studies on patients with early B-cell lymphoid malignancies suggest that early intervention with curcumin may lead to delay in progressive disease and prolonged survival. These patients are characterized by increased susceptibility to infections. Rice bran arabinoxylan (Ribraxx) has been shown to have immunostimulatory, anti-inflammatory, and proapoptotic effects. We postulated that addition of Ribraxx to curcumin therapy may be of benefit. Study design: Monoclonal gammopathy of undetermined significance (MGUS)/smoldering multiple myeloma (SMM) or stage 0/1 chronic lymphocytic leukemia (CLL) patients who had been on oral curcumin therapy for a period of 6 months or more were administered both curcumin (as Curcuforte) and Ribraxx. Methods: Ten MGUS/SMM patients and 10 patients with stage 0/1 CLL were administered 6 g of curcumin and 2 g of Ribraxx daily. Blood samples were collected at baseline and at 2-month intervals for a period of 6 months, and various markers were monitored. Markers for MGUS/SMM patients included full blood count (FBC); paraprotein; free light chains/ratio; C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR); B2 microglobulin; and immunological markers. Markers monitored for stage 0/1 CLL were FBC, CRP and ESR, and immunological markers. Results: Of 10 MGUS/SMM patients, 5 (50%) were neutropenic at baseline, and the Curcuforte/Ribraxx combination therapy showed an increased neutrophil count, varying between 10% and 90%, among 8 of the 10 (80%) MGUS/SMM patients. An additional benefit of the combination therapy was its potent effect in reducing the raised ESR in 4 (44%) of the MGUS/SMM patients. Conclusion: Addition of Ribraxx to curcumin therapy may be of benefit to patients with early-stage B-cell lymphoid malignancies. PMID:27154182
p-value approximations for spatial scan statistics using extreme value distributions.
Jung, Inkyung; Park, Goeun
2015-02-10
Spatial scan statistics are widely applied to identify spatial clusters in geographic disease surveillance. To evaluate the statistical significance of detected clusters, Monte Carlo hypothesis testing is often used because the null distribution of spatial scan statistics is not known. A drawback of the method is that we have to increase the number of replications to obtain accurate p-values. Gumbel-based p-value approximations for spatial scan statistics have recently been proposed and evaluated for Poisson and Bernoulli models. In this study, we examine the use of a generalized extreme value distribution to approximate the null distribution of spatial scan statistics as well as the Gumbel distribution. Through simulation, p-value approximations using extreme value distributions for spatial scan statistics are assessed for multinomial and ordinal models in addition to Poisson and Bernoulli models. PMID:25345856
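The approximation the abstract describes can be sketched as follows: fit a Gumbel distribution to a modest number of Monte Carlo replicates of the maximum scan statistic (here by the method of moments) and read the approximate p-value off its upper tail. The replicates below are stand-in Gaussian maxima, not surveillance data, and the function names are our own, not the paper's:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def gumbel_fit(maxima):
    """Method-of-moments Gumbel fit: returns (location mu, scale beta)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def gumbel_pvalue(observed, maxima):
    """Approximate upper-tail p-value P(max statistic >= observed)."""
    mu, beta = gumbel_fit(maxima)
    return 1.0 - math.exp(-math.exp(-(observed - mu) / beta))

rng = random.Random(0)
# Stand-in for 999 Monte Carlo replicates of the maximum scan statistic:
replicates = [max(rng.gauss(0, 1) for _ in range(50)) for _ in range(999)]
p_extreme = gumbel_pvalue(6.0, replicates)  # far in the tail: tiny p-value
p_typical = gumbel_pvalue(2.0, replicates)  # near the bulk: large p-value
```

The payoff is resolution: with 999 replicates a raw Monte Carlo p-value can never fall below 1/1000, while the fitted tail can distinguish far smaller values.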
Projections of Education Statistics to 2007.
ERIC Educational Resources Information Center
Gerald, Debra E.; Hussar, William J.
"Projections of Education Statistics to 2007" is the 26th report in a series begun in 1964 that revises projections annually to show statistics on elementary and secondary schools and institutions of higher education at the national level. Included are projections for enrollment, graduates, classroom teachers, and expenditures. In addition, this…
Florida Library Directory with Statistics, 1998.
ERIC Educational Resources Information Center
Florida Dept. of State, Tallahassee. Div. of Library and Information Services.
This 49th annual Florida Library Directory with Statistics edition includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries.…
Exploring Correlation Coefficients with Golf Statistics
ERIC Educational Resources Information Center
Quinn, Robert J
2006-01-01
This article explores the relationships between several pairs of statistics kept on professional golfers on the PGA tour. Specifically, two measures related to the player's ability to drive the ball are compared as are two measures related to the player's ability to putt. An additional analysis is made between one statistic related to putting and…
Leadership statistics in random structures
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Krapivsky, P. L.
2004-01-01
The largest component ("the leader") in evolving random structures often exhibits universal statistical properties. This phenomenon is demonstrated analytically for two ubiquitous structures: random trees and random graphs. In both cases, lead changes are rare: the average number of lead changes grows only quadratically with the logarithm of the system size. As a function of time, the number of lead changes is self-similar. Additionally, the probability that no lead change ever occurs decays exponentially with the average number of lead changes.
Use of statistical procedures in Brazilian and international dental journals.
Ambrosano, Gláucia Maria Bovi; Reis, André Figueiredo; Giannini, Marcelo; Pereira, Antônio Carlos
2004-01-01
A descriptive survey was performed in order to assess the statistical content and quality of Brazilian and international dental journals, and compare their evolution throughout the last decades. The authors identified the reporting and accuracy of statistical techniques in 1000 papers published from 1970 to 2000 in seven dental journals: three Brazilian (Brazilian Dental Journal, Revista de Odontologia da Universidade de Sao Paulo and Revista de Odontologia da UNESP) and four international journals (Journal of the American Dental Association, Journal of Dental Research, Caries Research and Journal of Periodontology). Papers were divided into two time periods: from 1970 to 1989, and from 1990 to 2000. A slight increase in the number of articles that presented some form of statistical technique was noticed for Brazilian journals (from 61.0 to 66.7%), whereas for international journals, a significant increase was observed (65.8 to 92.6%). In addition, a decrease in the number of statistical errors was verified. The most commonly used statistical tests as well as the most frequent errors found in dental journals were assessed. Hopefully, this investigation will encourage dental educators to better plan the teaching of biostatistics, and to improve the statistical quality of submitted manuscripts.
NASA Astrophysics Data System (ADS)
Holmes, Jon L.
2000-06-01
IP-number access. Current subscriptions can be upgraded to IP-number access at little additional cost. We are pleased to be able to offer to institutions and libraries this convenient mode of access to subscriber only resources at JCE Online. JCE Online Usage Statistics We are continually amazed by the activity at JCE Online. So far, the year 2000 has shown a marked increase. Given the phenomenal overall growth of the Internet, perhaps our surprise is not warranted. However, during the months of January and February 2000, over 38,000 visitors requested over 275,000 pages. This is a monthly increase of over 33% from the October-December 1999 levels. It is good to know that people are visiting, but we would very much like to know what you would most like to see at JCE Online. Please send your suggestions to JCEOnline@chem.wisc.edu. For those who are interested, JCE Online year-to-date statistics are available. Biographical Snapshots of Famous Chemists: Mission Statement Feature Editor: Barbara Burke Chemistry Department, California State Polytechnic University-Pomona, Pomona, CA 91768 phone: 909/869-3664 fax: 909/869-4616 email: baburke@csupomona.edu The primary goal of this JCE Internet column is to provide information about chemists who have made important contributions to chemistry. For each chemist, there is a short biographical "snapshot" that provides basic information about the person's chemical work, gender, ethnicity, and cultural background. Each snapshot includes links to related websites and to a biobibliographic database. The database provides references for the individual and can be searched through key words listed at the end of each snapshot. All students, not just science majors, need to understand science as it really is: an exciting, challenging, human, and creative way of learning about our natural world. Investigating the life experiences of chemists can provide a means for students to gain a more realistic view of chemistry. In addition students
Additional renal arteries: incidence and morphometry.
Satyapal, K S; Haffejee, A A; Singh, B; Ramsaroop, L; Robbs, J V; Kalideen, J M
2001-01-01
Advances in surgical and uro-radiological techniques dictate a reappraisal and definition of renal arterial variations. This retrospective study aimed at establishing the incidence of additional renal arteries. Two subsets were analysed, viz.: a) a clinical series--130 renal angiograms performed on renal transplant donors and 32 cadaver kidneys used in renal transplantation; b) a cadaveric series--74 en-bloc morphologically normal kidney pairs. The sex and race distribution was: males 140, females 96; African 84, Indian 91, White 43 and "Coloured" 18, respectively. The incidences of first and second additional arteries were, respectively, 23.2% (R: 18.6%; L: 27.6%) and 4.5% (R: 4.7%; L: 4.4%). Additional arteries occurred more frequently on the left (L: 32.0%; R: 23.3%). The incidence bilaterally was 10.2% (first additional arteries only). The sex and race incidence (first and second additional) was: males, 28.0%, 5.1%; females, 16.4%, 3.8%; African 31.1%, 5.4%; Indian 13.5%, 4.5%; White 30.9%, 4.4%; and "Coloured" 18.5%, 0%; respectively. Significant differences in the incidence of first additional arteries were noted between sexes and races. The morphometry of additional renal arteries was as follows: lengths (cm) of first and second additional renal arteries: 4.5 and 3.8 (right), 4.9 and 3.7 (left); diameters (cm): 0.4 and 0.3 (right), 0.3 and 0.3 (left). Detailed morphometry by sex and race was also recorded. No statistically significant differences were noted. Our result for the incidence of additional renal arteries, 27.7%, compared favourably with that reported in the literature (weighted mean 28.1%). The study is unique in recording detailed morphometry of these vessels. Careful technique in the identification of this anatomical variation is important, since it impacts on renal transplantation surgery, vascular operations for renal artery stenosis, reno-vascular hypertension, Takayasu's disease, renal trauma and uro-radiological procedures.
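A significance comparison of incidence between groups, of the kind reported above, can be illustrated with a two-proportion z-test. The counts below are reconstructed approximately from the reported percentages (28.0% of 140 males vs. 16.4% of 96 females) and serve only as an illustration; the abstract does not state which test the authors actually used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for H0: the two group proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Approximate counts of first additional arteries: 39/140 males, 16/96 females
z = two_proportion_z(39, 140, 16, 96)
significant_at_05_one_sided = z > 1.645
```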
Tannery, Nancy Hrinya; Silverman, Deborah L; Epstein, Barbara A
2002-01-01
Online use statistics can provide libraries with a tool to be used when developing an online collection of resources. Statistics can provide information on overall use of a collection, individual print and electronic journal use, and collection use by specific user populations. They can also be used to determine the number of user licenses to purchase. This paper focuses on the issue of use statistics made available for one collection of online resources.
Statistical distribution sampling
NASA Technical Reports Server (NTRS)
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
Statistical prediction of cyclostationary processes
Kim, K.Y.
2000-03-15
Considered in this study is a cyclostationary generalization of an EOF-based prediction method. While linear statistical prediction methods are typically optimal in the sense that prediction error variance is minimal within the assumption of stationarity, there is some room for improved performance since many physical processes are not stationary. For instance, El Niño is known to be strongly phase locked with the seasonal cycle, which suggests nonstationarity of the El Niño statistics. Many geophysical and climatological processes may be termed cyclostationary since their statistics show strong cyclicity instead of stationarity. Therefore, developed in this study is a cyclostationary prediction method. Test results demonstrate that performance of prediction methods can be improved significantly by accounting for the cyclostationarity of underlying processes. The improvement comes from an accurate rendition of covariance structure both in space and time.
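The intuition can be illustrated with a toy example, far simpler than the paper's EOF-based method: for a series phase-locked to an annual cycle, a predictor conditioned on the calendar month beats the best stationary constant predictor. All numbers here are invented:

```python
import math
import random

rng = random.Random(3)
months = list(range(12))
cycle = [2.0 * math.sin(2 * math.pi * m / 12) for m in months]

def simulate(years):
    """(month, value) pairs: a fixed seasonal cycle plus Gaussian noise."""
    return [(m, cycle[m] + rng.gauss(0, 0.5)) for _ in range(years) for m in months]

train, test = simulate(30), simulate(10)

# Stationary predictor: one overall mean used for every month.
overall = sum(v for _, v in train) / len(train)
# Cyclostationary predictor: a separate mean per calendar month.
by_month = {m: sum(v for mm, v in train if mm == m) /
               sum(1 for mm, _ in train if mm == m) for m in months}

mse_stationary = sum((v - overall) ** 2 for _, v in test) / len(test)
mse_cyclo = sum((v - by_month[m]) ** 2 for m, v in test) / len(test)
```

The cyclostationary predictor's error sits close to the noise floor (0.25 in this construction), while the stationary predictor also pays the full variance of the seasonal cycle.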
Statistical limitations in functional neuroimaging. II. Signal detection and statistical inference.
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
The field of functional neuroimaging (FNI) methodology has developed into a mature but evolving area of knowledge and its applications have been extensive. A general problem in the analysis of FNI data is finding a signal embedded in noise. This is sometimes called signal detection. Signal detection theory focuses in general on issues relating to the optimization of conditions for separating the signal from noise. When methods from probability theory and mathematical statistics are directly applied in this procedure it is also called statistical inference. In this paper we briefly discuss some aspects of signal detection theory relevant to FNI and, in addition, some common approaches to statistical inference used in FNI. Low-pass filtering in relation to functional-anatomical variability and some effects of filtering on signal detection of interest to FNI are discussed. Also, some general aspects of hypothesis testing and statistical inference are discussed. This includes the need for characterizing the signal in data when the null hypothesis is rejected, the problem of multiple comparisons that is central to FNI data analysis, omnibus tests and some issues related to statistical power in the context of FNI. In turn, random field, scale space, non-parametric and Monte Carlo approaches are reviewed, representing the most common approaches to statistical inference used in FNI. Complementary to these issues an overview and discussion of non-inferential descriptive methods, common statistical models and the problem of model selection is given in a companion paper. In general, model selection is an important prelude to subsequent statistical inference. The emphasis in both papers is on the assumptions and inherent limitations of the methods presented. Most of the methods described here generally serve their purposes well when the inherent assumptions and limitations are taken into account. Significant differences in results between different methods are most apparent in
Statistical Mechanics of Zooplankton
Hinow, Peter; Nihongi, Ai; Strickler, J. Rudi
2015-01-01
Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar “microscopic” quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the “ecological temperature” of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean’s swimming behavior. PMID:26270537
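The proposed "ecological temperature" reduces to a simple computation: the mean squared speed over tracked velocity samples. A minimal sketch with invented 2-D velocities for two hypothetical conditions (the study itself works from observations of D. pulicaria):

```python
def ecological_temperature(velocities):
    """Mean squared speed of (vx, vy) velocity samples, per the definition above."""
    return sum(vx * vx + vy * vy for vx, vy in velocities) / len(velocities)

# Invented velocity samples (e.g. mm/s) for two hypothetical conditions:
calm = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.1), (0.05, -0.05)]
agitated = [(1.0, 0.5), (-0.8, 1.1), (0.9, -0.7), (-1.2, 0.3)]

T_calm = ecological_temperature(calm)
T_agitated = ecological_temperature(agitated)
```

A higher ecological temperature for the agitated group mirrors the paper's finding that infested, light-exposed animals swim faster and so are more conspicuous to visual predators.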
Illustrating the practice of statistics
Hamada, Christina A; Hamada, Michael S
2009-01-01
The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.
Multidimensional Visual Statistical Learning
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.
2008-01-01
Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…
Croarkin, M. Carroll
2001-01-01
For more than 50 years, the Statistical Engineering Division (SED) has been instrumental in the success of a broad spectrum of metrology projects at NBS/NIST. This paper highlights fundamental contributions of NBS/NIST statisticians to statistics and to measurement science and technology. Published methods developed by SED staff, especially during the early years, endure as cornerstones of statistics not only in metrology and standards applications, but as data-analytic resources used across all disciplines. The history of statistics at NBS/NIST began with the formation of what is now the SED. Examples from the first five decades of the SED illustrate the critical role of the division in the successful resolution of a few of the highly visible, and sometimes controversial, statistical studies of national importance. A review of the history of major early publications of the division on statistical methods, design of experiments, and error analysis and uncertainty is followed by a survey of several thematic areas. The accompanying examples illustrate the importance of SED in the history of statistics, measurements and standards: calibration and measurement assurance, interlaboratory tests, development of measurement methods, Standard Reference Materials, statistical computing, and dissemination of measurement technology. A brief look forward sketches the expanding opportunity and demand for SED statisticians created by current trends in research and development at NIST. PMID:27500023
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Reform in Statistical Education
ERIC Educational Resources Information Center
Huck, Schuyler W.
2007-01-01
Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…
Demonstrating Poisson Statistics.
ERIC Educational Resources Information Center
Vetterling, William T.
1980-01-01
Describes an apparatus that offers a very lucid demonstration of Poisson statistics as applied to electrical currents, and the manner in which such statistics account for shot noise when applied to macroscopic currents. The experiment described is intended for undergraduate physics students. (HM)
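The defining property behind such a demonstration is easy to check numerically: for Poisson-distributed event counts, the variance equals the mean, which is exactly what gives shot noise its signature. A small simulation using Knuth's sampling algorithm (the apparatus and currents belong to the article; this simulation is our own illustration):

```python
import math
import random

def poisson_sample(lam, rng):
    """One Poisson(lam) draw via Knuth's multiplication algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(1)
counts = [poisson_sample(4.0, rng) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# For Poisson statistics the sample variance should track the sample mean,
# both landing near the rate parameter 4.0.
```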
Statistical Summaries: Public Institutions.
ERIC Educational Resources Information Center
Virginia State Council of Higher Education, Richmond.
This document presents a statistical portrait of Virginia's 17 public higher education institutions. Data provided include: enrollment figures (broken down in categories such as sex, residency, full- and part-time status, residence, ethnicity, age, and level of postsecondary education); FTE figures; admissions statistics (such as number…
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…
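Power can be explored directly by simulation: draw many samples under a specified alternative and count how often the test rejects. The sketch below uses a one-sided z-test with known unit variance and invented effect sizes; it is an illustration in the spirit of the installment, not its own code:

```python
import math
import random

def estimated_power(effect, n, z_crit, trials, rng):
    """Fraction of simulated experiments whose z-statistic exceeds z_crit."""
    hits = 0
    for _ in range(trials):
        xbar = sum(rng.gauss(effect, 1.0) for _ in range(n)) / n
        z = xbar * math.sqrt(n)  # sigma is known to be 1; H0 is mu = 0
        if z > z_crit:
            hits += 1
    return hits / trials

rng = random.Random(7)
z_crit = 1.645  # one-sided alpha = 0.05
low_n = estimated_power(0.5, 10, z_crit, 2000, rng)   # smaller sample
high_n = estimated_power(0.5, 40, z_crit, 2000, rng)  # larger sample
```

Raising the sample size (or the effect size, or alpha, or lowering the variability) raises the estimated power, mirroring the factors such treatments of power commonly enumerate.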
ERIC Educational Resources Information Center
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…
Introduction to Statistical Physics
NASA Astrophysics Data System (ADS)
Casquilho, João Paulo; Ivo Cortez Teixeira, Paulo
2014-12-01
Preface; 1. Random walks; 2. Review of thermodynamics; 3. The postulates of statistical physics. Thermodynamic equilibrium; 4. Statistical thermodynamics – developments and applications; 5. The classical ideal gas; 6. The quantum ideal gas; 7. Magnetism; 8. The Ising model; 9. Liquid crystals; 10. Phase transitions and critical phenomena; 11. Irreversible processes; Appendixes; Index.
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…
ERIC Educational Resources Information Center
Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain
2004-01-01
Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…
Understanding Undergraduate Statistical Anxiety
ERIC Educational Resources Information Center
McKim, Courtney
2014-01-01
The purpose of this study was to understand undergraduate students' views of statistics. Results reveal that students with less anxiety have a higher interest in statistics and also believe in their ability to perform well in the course. Also students who have a more positive attitude about the class tend to have a higher belief in their…
Explorations in Statistics: Correlation
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of "Explorations in Statistics" explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the…
Do perfume additives termed human pheromones warrant being termed pheromones?
Winman, Anders
2004-09-30
Two studies of the effects of perfume additives, termed human pheromones by the authors, have conveyed the message that these substances can promote an increase in human sociosexual behaviour [Physiol. Behav. 75 (2003) R1; Arch. Sex. Behav. 27 (1998) R2]. The present paper presents an extended analysis of these data. It is shown that in neither study is there a statistically significant increase in any of the sociosexual behaviours for the experimental groups. In the control groups of both studies there are, however, moderate but statistically significant decreases in the corresponding behaviours. Most notably, there is no support in the data for the claim that the substances increase the attractiveness of the wearers to the other sex. It is concluded that more research using matched homogeneous groups of participants is needed.
Tsallis statistics and neurodegenerative disorders
NASA Astrophysics Data System (ADS)
Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.
2016-08-01
In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD) and Huntington's disease (HD). The time series comprise electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular, stride intervals) of ALS, PD and HD patients. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics, and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsen, qrel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD patients and healthy control subjects. The results indicate that estimates of the Tsallis q-indices could be used as possible biomarkers, along with others, for improving the classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medications and fall risk, and improving therapeutic interventions.
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization, named polygraph, is provided to help better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
Statistical methods for material characterization and qualification
Hunn, John D; Kercher, Andrew K
2005-01-01
This document describes a suite of statistical methods that can be used to infer lot parameters from the data obtained from inspection/testing of random samples taken from that lot. Some of these methods will be needed to perform the statistical acceptance tests required by the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program. Special focus has been placed on proper interpretation of acceptance criteria and unambiguous methods of reporting the statistical results. In addition, modified statistical methods are described that can provide valuable measures of quality for different lots of material. This document has been written for use as a reference and a guide for performing these statistical calculations. Examples of each method are provided. Uncertainty analysis (e.g., measurement uncertainty due to instrumental bias) is not included in this document, but should be considered when reporting statistical results.
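As an illustration of the kind of lot inference described above, the following sketch computes an exact (Clopper-Pearson style) upper confidence bound on a lot's defect fraction from inspection of a random sample, by inverting the binomial CDF. This is an assumed, generic example, not the AGR program's prescribed acceptance procedure.

```python
# Sketch: given k defects observed in a random sample of n items, find an
# exact upper confidence bound on the lot defect fraction p by locating the
# smallest p for which observing <= k defects is still a rare event.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(k, n, conf=0.95, tol=1e-9):
    """Smallest p with P(X <= k | p) <= 1 - conf, found by bisection
    (the CDF is monotonically decreasing in p)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > 1 - conf:
            lo = mid
        else:
            hi = mid
    return hi

# Example: 2 defects in a sample of 50 items
print(round(upper_bound(2, 50), 4))
```

With no defects at all (k = 0), the same routine reproduces the familiar "rule of three" bound of roughly 3/n.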
LED champing: statistically blessed?
Wang, Zhuo
2015-06-10
LED champing (smart mixing of individual LEDs to match the desired color and lumens) and color mixing strategies have been widely used to maintain the color consistency of light engines. Light engines with champed LEDs can readily achieve color consistency within a couple of MacAdam steps, even starting from widely distributed LEDs. From a statistical point of view, the distributions of the color coordinates and of the flux after champing are studied. The related statistical parameters are derived, which facilitate process improvements such as Six Sigma and are instrumental to statistical quality control for mass production. PMID:26192863
Winters, Ryan; Winters, Andrew; Amedee, Ronald G.
2010-01-01
The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381
NASA Technical Reports Server (NTRS)
Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.
2017-01-01
Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.
Colon-Berlingeri, Migdalisel; Burrowes, Patricia A
2011-01-01
Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporating interdisciplinary (math-biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We distinguished the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology.
Lessons from Inferentialism for Statistics Education
ERIC Educational Resources Information Center
Bakker, Arthur; Derry, Jan
2011-01-01
This theoretical paper relates recent interest in informal statistical inference (ISI) to the semantic theory termed inferentialism, a significant development in contemporary philosophy, which places inference at the heart of human knowing. This theory assists epistemological reflection on challenges in statistics education encountered when…
Petroleum statistics in France
De Saint Germain, H.; Lamiraux, C.
1995-08-01
Thirty-three oil companies, including Elf, Exxon, Agip and Conoco as well as Coparex, Enron, Hadson, Midland, Hunt, Canyon and Union Texas, are present in oil and gas exploration and production in France. The production of oil and gas in France amounts to some 60,000 bopd of oil and 350 MMcfpd of marketed natural gas each year, which still accounts for 3.5% and 10% of French domestic needs, respectively. To date, 166 fields have been discovered, representing a total reserve of 3 billion bbl of crude oil and 13 trillion cf of raw gas. These fields are concentrated in two major onshore sedimentary basins of Mesozoic age, the Aquitaine basin and the Paris basin. The Aquitaine basin can be subdivided into two distinct domains: the Parentis basin, where the largest field, Parentis, was discovered in 1954 and still produces about 3,700 bopd of oil, and where the Les Arbouslers field, discovered at the end of 1991, is currently producing about 10,000 bopd of oil; and the northern Pyrenees and their foreland, where the Lacq field, discovered in 1951, has produced about 7.7 tcf of gas since 1957 and is still producing 138 MMcfpd. In the Paris basin, the two large oil fields are Villeperclue, discovered in 1982 by Triton and Total, and Chaunoy, discovered in 1983 by Essorep, which are still producing about 10,000 and 15,000 bopd, respectively. The last discovery of significant size, Itteville, was made in 1990 by Elf Aquitaine and is currently producing 4,200 bopd. The poster shows statistical data related to the past 20 years of oil and gas exploration and production in France.
NASA Astrophysics Data System (ADS)
Richfield, Jon; bookfeller
2016-07-01
In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.
... facts and statistics here include brain and central nervous system tumors (including spinal cord, pituitary and pineal gland ... U.S. living with a primary brain and central nervous system tumor. This year, nearly 17,000 people will ...
NASA Astrophysics Data System (ADS)
Grégoire, G.
2016-05-01
This chapter is devoted to two objectives. The first is to answer the request, expressed by attendees of the first Astrostatistics School (Annecy, October 2013), to be provided with an elementary vademecum of statistics that would facilitate understanding of the courses given there. In this spirit we recall very basic notions, that is, definitions and properties that we think sufficient to benefit from the courses given in the Astrostatistics School. Thus we briefly give definitions and elementary properties of random variables and vectors, distributions, estimation and tests, and maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not attempt a rigorous presentation and, given the space devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: a basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.
... Plague in the United States: plague was first introduced ... per year in the United States, 1900-2012. Plague worldwide: plague epidemics have occurred in Africa, Asia, ...
Cooperative Learning in Statistics.
ERIC Educational Resources Information Center
Keeler, Carolyn M.; And Others
1994-01-01
Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)
Purposeful Statistical Investigations
ERIC Educational Resources Information Center
Day, Lorraine
2014-01-01
Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations links mathematics to students' lives and provides engaging and meaningful contexts for mathematical inquiry.
Tuberculosis Data and Statistics
... Data and Statistics (available in English and Spanish) ... United States publication, PDF [6 MB]. Interactive TB Data Tool: the Online Tuberculosis Information System (OTIS). OTIS is ...
Understanding Solar Flare Statistics
NASA Astrophysics Data System (ADS)
Wheatland, M. S.
2005-12-01
A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
T1 VSAT Fade Compensation Statistical Results
NASA Technical Reports Server (NTRS)
Johnson, Sandra K.; Acosta, Roberto; Ugweje, Oke
2000-01-01
New satellite communication systems are steadily seeking to use higher frequency bands to accommodate the requirements for additional capacity. At these higher frequencies, propagation impairments that did not significantly affect the signal at lower frequencies begin to have considerable impact. In Ka-band, the next logical commercial frequency band to be used for satellite communication, attenuation of the signal due to rain is a primary concern. An experimental satellite built by NASA, the Advanced Communications Technology Satellite (ACTS), launched in September 1993, is the first U.S. communication satellite operating in the Ka-band. In addition to higher carrier frequencies, a number of other new technologies, including on-board baseband processing, multiple beam antennas, and rain fade detection and compensation techniques, were designed into the ACTS. Verification experiments have been conducted since the launch to characterize the new technologies. The focus of this paper is to characterize the method used by the ACTS T1 Very Small Aperture Terminal (T1 VSAT) ground stations to detect the presence of fade in the communication signal and to adaptively compensate for it by the addition of burst rate reduction and forward error correction. Measured data obtained from the ACTS program were used to validate the compensation technique. A software process was developed and demonstrated to statistically characterize the increased availability achieved by the compensation techniques in terms of the bit error rate time enhancement factor. Several improvements to the ACTS technique are discussed and possible implementations for future Ka-band systems are offered.
Oakland, J.S.
1986-01-01
Addressing the increasing importance for firms to have a thorough knowledge of statistically based quality control procedures, this book presents the fundamentals of statistical process control (SPC) in a non-mathematical, practical way. It provides real-life examples and data drawn from a wide variety of industries. The foundations of good quality management and process control, and control of conformance and consistency during production are given. Offers clear guidance to those who wish to understand and implement modern SPC techniques.
Statistical Mechanics of Infinite Gravitating Systems
NASA Astrophysics Data System (ADS)
Saslaw, William C.
2008-01-01
The cosmological many-body problem was stated over 300 years ago, but its solution is quite recent and still incomplete. Imagine an infinite expanding universe essentially containing a very large number of objects moving in response to their mutual gravitational forces. What will be the spatial and velocity distributions of these objects and how will they evolve? This question fascinates on many levels. Though inherently non-linear, it turns out to be one of the few analytically solvable problems of statistical mechanics with long range forces. The partition function can be calculated. From this all the thermodynamic properties of the system can be obtained for the grand canonical ensemble. They confirm results derived independently directly from the first and second laws of thermodynamics. The behavior of infinite gravitating systems is quite different from that of their finite relatives, such as star clusters. Infinite gravitating systems have regimes of negative specific heat, an unusual type of phase transition, and a very close relation to the observed large-scale structure of our universe. This last feature provides an additional astronomical motivation, especially since the statistical mechanics may be generalized to include effects of dark matter haloes around galaxies. Previously the cosmological many-body problem has mostly been studied using the BBGKY hierarchy (not so suitable in the non-linear regime) and by direct computer integrations of the objects' orbits. The statistical mechanics agrees with and substantially extends these earlier results. Most astrophysicists had previously thought that a statistical thermodynamic approach would not be applicable because: (a) many-body gravitational systems have no rigorous equilibrium state, (b) the unshielded nature of the long-range force would cause the partition function to diverge on large scales, and (c) point masses would produce divergences on small scales. However, deeper considerations show that these are not…
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
While many scientists are familiar with fractals, fewer are familiar with the concepts of scale invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains an integrated set of problems, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873413. Based on lecture notes from a course on statistical mechanics taught by the author at MIT. Contains 65 exercises, with solutions to selected problems. Features a thorough introduction to the methods of statistical field theory. Ideal for graduate courses in statistical physics.
Statistical Physics of Particles
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on statistical mechanics taught by the author at MIT. Contains 89 exercises, with solutions to selected problems. Contains chapters on probability and interacting particles. Ideal for graduate courses in statistical mechanics.
NASA Astrophysics Data System (ADS)
Cook, Samuel A.; Fukawa-Connelly, Timothy
2016-02-01
Studies have shown that at the end of an introductory statistics course, students struggle with building-block concepts, such as mean and standard deviation, and rely on procedural understandings of those concepts. This study investigates the understanding of introductory statistics that entering freshmen of a department of mathematics and statistics (including mathematics education), students who are presumably better prepared in mathematics and statistics than the average university student, bring with them. This case study found that these students enter college with common statistical misunderstandings, gaps in knowledge, and idiosyncratic collections of correct statistical knowledge. Moreover, they hold a wide range of beliefs about their own knowledge, and some of the students who believe they have the strongest knowledge also harbor significant misconceptions. More attention to these statistical building blocks may be required in university introductory statistics courses.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
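Two of the calculation types listed, descriptive statistics and a linear-regression ANOVA, can be sketched in plain Python. This is an illustrative equivalent with made-up data, not the Excel toolset itself.

```python
# Sketch: descriptive statistics plus a least-squares fit with a regression
# ANOVA significance check (F statistic with 1 and n-2 degrees of freedom).
from statistics import mean, stdev

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]

# Descriptive statistics
print("mean(y) =", round(mean(y), 3), " stdev(y) =", round(stdev(y), 3))

# Least-squares fit y = a + b*x
n = len(x)
sxx = sum((xi - mean(x))**2 for xi in x)
sxy = sum((xi - mean(x)) * (yi - mean(y)) for xi, yi in zip(x, y))
b = sxy / sxx
a = mean(y) - b * mean(x)

# ANOVA for the regression: F = MSR / MSE
y_hat = [a + b * xi for xi in x]
ssr = sum((yh - mean(y))**2 for yh in y_hat)          # explained sum of squares
sse = sum((yi - yh)**2 for yi, yh in zip(y, y_hat))   # residual sum of squares
f_stat = ssr / (sse / (n - 2))
print("slope =", round(b, 3), " F =", round(f_stat, 1))
```

A large F relative to the F(1, n-2) critical value indicates the linear fit is statistically significant, which is the check the Linear Regression-ANOVA program performs.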
Predicting Success in Psychological Statistics Courses.
Lester, David
2016-06-01
Many students perform poorly in courses on psychological statistics, and it is useful to be able to predict which students will have difficulties. In a study of 93 undergraduates enrolled in Statistical Methods (18 men, 75 women; M age = 22.0 years, SD = 5.1), performance was significantly associated with sex (female students performed better) and proficiency in algebra in a linear regression analysis. Anxiety about statistics was not associated with course performance, indicating that basic mathematical skills are the best correlate for performance in statistics courses and can usefully be used to stream students into classes by ability. PMID:27273557
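A regression of the kind reported above can be sketched as follows. The data here are synthetic and the solver is a generic ordinary-least-squares fit via the normal equations, not the study's actual analysis.

```python
# Hypothetical sketch: regress course performance on sex and algebra
# proficiency (synthetic data, made up for illustration).
def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):                      # forward elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):          # back substitution
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, k))) / A[r][r]
    return beta

# columns: intercept, sex (1 = female), algebra score (0-10); y = course grade
X = [[1, 1, 8], [1, 1, 6], [1, 0, 5], [1, 0, 7],
     [1, 1, 9], [1, 0, 4], [1, 1, 7], [1, 0, 6]]
y = [85, 74, 62, 75, 91, 55, 80, 68]
beta = ols(X, y)
print([round(v, 2) for v in beta])
```

In this fabricated sample, both coefficients come out positive, mirroring the study's finding that female students and students with stronger algebra skills performed better.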
[Food additives and healthiness].
Heinonen, Marina
2014-01-01
Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects.
Statistical properties of Chinese phonemic networks
NASA Astrophysics Data System (ADS)
Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan
2011-04-01
The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey normal distribution and the weighted degrees obey power law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
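The network statistics mentioned (connectivity degrees and average distances) can be illustrated on a toy graph with plain dictionaries; this is a generic sketch with made-up nodes, not the paper's phoneme data.

```python
# Sketch: degree sequence and average shortest-path length of a small
# undirected graph, using breadth-first search for distances.
from collections import deque

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "d")]
adj = {}
for s, t in edges:
    adj.setdefault(s, set()).add(t)
    adj.setdefault(t, set()).add(s)

degrees = {node: len(nbrs) for node, nbrs in adj.items()}

def bfs_dist(src):
    """Shortest-path distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

pairs = [(u, v) for u in adj for v in adj if u < v]
avg = sum(bfs_dist(u)[v] for u, v in pairs) / len(pairs)
print(degrees, round(avg, 2))
```

High connectivity and a short average distance of this kind are exactly the small-world-style parameters the authors report for the phonemic networks.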
Nonstationary statistical theory for multipactor
Anza, S.; Vicente, C.; Gil, J.
2010-06-15
This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.
Statistical properties of convex clustering
Tan, Kean Ming; Witten, Daniela
2016-01-01
In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single-linkage hierarchical clustering and k-means clustering. In addition, we derive the range of the tuning parameter for convex clustering that yields a non-trivial solution. We also provide an unbiased estimator of the degrees of freedom and a finite-sample bound on the prediction error of convex clustering. We compare convex clustering to some traditional clustering methods in simulation studies.
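The role of the tuning parameter can be illustrated with a toy subgradient-descent sketch. This is illustrative only, not the paper's own algorithm or analysis; the objective below is the standard convex clustering criterion, and the values of `lam`, `steps`, and `lr` are arbitrary choices:

```python
import numpy as np

def convex_clustering(X, lam, steps=2000, lr=0.05):
    """Subgradient descent on the convex clustering objective:
    0.5 * sum_i ||x_i - u_i||^2 + lam * sum_{i<j} ||u_i - u_j||."""
    U = X.copy()
    n = len(X)
    for t in range(steps):
        grad = U - X  # gradient of the data-fit term
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = U[i] - U[j]
                    nrm = np.linalg.norm(d)
                    if nrm > 1e-12:
                        grad[i] += lam * d / nrm  # subgradient of the fusion penalty
        U -= lr / (1 + 0.01 * t) * grad  # diminishing step size
    return U

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (3, 2)), rng.normal(4, 0.1, (3, 2))])
U_small = convex_clustering(X, lam=0.0)  # no penalty: centroids stay at the data
U_large = convex_clustering(X, lam=5.0)  # heavy penalty: centroids fuse
```

With `lam = 0` the minimizer is exactly `U = X`; as `lam` grows, centroids fuse until, beyond a finite value, they all collapse to the grand mean — which is why only a bounded range of the tuning parameter yields a non-trivial solution.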
NASA Pocket Statistics: 1997 Edition
NASA Technical Reports Server (NTRS)
1997-01-01
POCKET STATISTICS is published by the National Aeronautics and Space Administration (NASA). Each edition includes administrative and organizational information; summaries of spaceflight activity, including the NASA Major Launch Record; and aeronautics, space transportation, procurement, financial, and workforce data. The NASA Major Launch Record includes all launches of Scout-class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free-flying payloads are not involved. All satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
ERIC Educational Resources Information Center
Green, Jennifer L.; Blankenship, Erin E.
2013-01-01
We developed an introductory statistics course for pre-service elementary teachers. In this paper, we describe the goals and structure of the course, as well as the assessments we implemented. Additionally, we use example course work to demonstrate pre-service teachers' progress both in learning statistics and as novice teachers. Overall, the…
Improving MEME via a two-tiered significance analysis
Tanaka, Emi; Bailey, Timothy L.; Keich, Uri
2014-01-01
Motivation: With over 9000 unique users recorded in the first half of 2013, MEME is one of the most popular motif-finding tools available. Reliable estimates of the statistical significance of motifs can greatly increase the usefulness of any motif finder. By analogy, it is difficult to imagine evaluating a BLAST result without its accompanying E-value. Currently MEME evaluates its EM-generated candidate motifs using an extension of BLAST’s E-value to the motif-finding context. Although we previously indicated the drawbacks of MEME’s current significance evaluation, we did not offer a practical substitute suited for its needs, especially because MEME also relies on the E-value internally to rank competing candidate motifs. Results: Here we offer a two-tiered significance analysis that can replace the E-value in selecting the best candidate motif and in evaluating its overall statistical significance. We show that our new approach could substantially improve MEME’s motif-finding performance and would also provide the user with a reliable significance analysis. In addition, for large input sets, our new approach is in fact faster than the currently implemented E-value analysis. Contact: uri.keich@sydney.edu.au or emi.tanaka@sydney.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24665130
SANABRIA, FEDERICO; KILLEEN, PETER R.
2008-01-01
Despite being under challenge for the past 50 years, null hypothesis significance testing (NHST) remains dominant in the scientific field for want of viable alternatives. NHST, along with its significance level p, is inadequate for most of the uses to which it is put, a flaw that is of particular interest to educational practitioners who too often must use it to sanctify their research. In this article, we review the failure of NHST and propose prep, the probability of replicating an effect, as a more useful statistic for evaluating research and aiding practical decision making. PMID:19122766
Statistical Physics of Fracture
Alava, Mikko; Nukala, Phani K; Zapperi, Stefano
2006-05-01
Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture, in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
Helping Alleviate Statistical Anxiety with Computer Aided Statistical Classes
ERIC Educational Resources Information Center
Stickels, John W.; Dobbs, Rhonda R.
2007-01-01
This study, Helping Alleviate Statistical Anxiety with Computer Aided Statistics Classes, investigated whether undergraduate students' anxiety about statistics changed when statistics is taught using computers compared to the traditional method. Two groups of students were questioned concerning their anxiety about statistics. One group was taught…
Statistical learning and selective inference
Taylor, Jonathan; Tibshirani, Robert J.
2015-01-01
We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis. PMID:26100887
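The "higher bar" can be seen in a small simulation (a generic illustration, not the paper's own procedure): after scanning m null z-statistics and testing only the largest, the naive p-value rejects far too often, while a selection-adjusted p-value restores the nominal error rate. The Šidák-style adjustment below is one standard correction for the maximum of independent tests, chosen here for simplicity:

```python
import numpy as np
from math import erf, sqrt

def normal_sf(z):
    """Survival function of the standard normal distribution."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

rng = np.random.default_rng(1)
m, trials, alpha = 20, 5000, 0.05
naive_rejections = adjusted_rejections = 0
for _ in range(trials):
    z_max = rng.standard_normal(m).max()  # cherry-pick the strongest association
    p = normal_sf(z_max)                  # p-value that ignores the selection
    naive_rejections += p < alpha
    adjusted_rejections += 1.0 - (1.0 - p) ** m < alpha  # p-value of the maximum
naive_rate = naive_rejections / trials        # far above the nominal 5%
adjusted_rate = adjusted_rejections / trials  # close to 5%
```

Even though every statistic is pure noise, the naive procedure "finds" an effect in well over half the trials, while the adjusted procedure holds the 5% level.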
Statistical mechanics of economics I
NASA Astrophysics Data System (ADS)
Kusmartsev, F. V.
2011-02-01
We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct ensembles and study their statistical inference. This yields a probabilistic description of the market and gives capital, money, income, wealth, and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws that govern the correlations between the amount of money, debt, product, prices, and the number of retailers. We applied the relations found to a study of the evolution of the US economy between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by a Bose-Einstein distribution whose parameters differ from year to year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates the past crises and predicts the future one.
Clinical significance of chimerism.
Abuelo, Dianne
2009-05-15
Twins have been previously classified as either monozygotic or dizygotic. In recent years, fascinating, non-traditional mechanisms of twinning have been uncovered. We define chimerism versus mosaicism, touch on chimerism in the animal world, and explain timing of chimerism in humans. In addition, we discuss when to suspect chimerism in patients, and how to proceed with diagnostic evaluation and confirmation.
Statistical mechanics of complex networks
NASA Astrophysics Data System (ADS)
Albert, Reka Zsuzsanna
2001-07-01
The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing, for example, crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally, complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in
XMM-Newton publication statistics
NASA Astrophysics Data System (ADS)
Ness, J.-U.; Parmar, A. N.; Valencic, L. A.; Smith, R.; Loiseau, N.; Salama, A.; Ehle, M.; Schartel, N.
2014-02-01
We assessed the scientific productivity of XMM-Newton by examining XMM-Newton publications and data usage statistics. We analyse 3272 refereed papers, published until the end of 2012, that directly use XMM-Newton data. The SAO/NASA Astrophysics Data System (ADS) was used to provide additional information on each paper including the number of citations. For each paper, the XMM-Newton observation identifiers and instruments used to provide the scientific results were determined. The identifiers were used to access the XMM-Newton Science Archive (XSA) to provide detailed information on the observations themselves and on the original proposals. The information obtained from these sources was then combined to allow the scientific productivity of the mission to be assessed. Since around three years after the launch of XMM-Newton there have been around 300 refereed papers per year that directly use XMM-Newton data. After more than 13 years in operation, this rate shows no evidence that it is decreasing. Since 2002, around 100 scientists per year become lead authors for the first time on a refereed paper which directly uses XMM-Newton data. Each refereed XMM-Newton paper receives around four citations per year in the first few years with a long-term citation rate of three citations per year, more than five years after publication. About half of the articles citing XMM-Newton articles are not primarily X-ray observational papers. The distribution of elapsed time between observations taken under the Guest Observer programme and first article peaks at 2 years with a possible second peak at 3.25 years. Observations taken under the Target of Opportunity programme are published significantly faster, after one year on average. The fraction of science time taken until the end of 2009 that has been used in at least one article is ~90%. Most observations were used more than once, yielding on average a factor of two in usage on available observing time per year. About 20% of
NASA Astrophysics Data System (ADS)
Inomata, Akira
1997-03-01
To understand possible physical consequences of quantum deformation, we investigate the statistical behavior of a quon gas. The quon is an object that obeys the minimally deformed commutator (or q-mutator): aa† − qa†a = 1, with −1 ≤ q ≤ 1. Although q = 1 and q = −1 appear to correspond to boson and fermion statistics, respectively, it is not easy to create a gas which unifies the boson gas and the fermion gas. We present a model that is able to interpolate between the two limits. The quon gas shows Bose-Einstein condensation near the boson limit in two dimensions.
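The q-mutator aa† − qa†a = 1 can be checked numerically with truncated matrix representations, using the standard ladder construction a|n⟩ = √([n]_q) |n−1⟩ with the q-bracket [n]_q = (1 − qⁿ)/(1 − q). This is a generic illustration, not the paper's model; the truncation size N is an arbitrary choice:

```python
import numpy as np

def quon_ops(N, q):
    """N x N truncated matrices for the quon annihilation operator a,
    acting as a|n> = sqrt([n]_q) |n-1>, with [n]_q = 1 + q + ... + q**(n-1)."""
    bracket = lambda n: float(n) if q == 1 else (1 - q**n) / (1 - q)
    a = np.zeros((N, N))
    for n in range(1, N):
        a[n - 1, n] = np.sqrt(bracket(n))
    return a, a.T

# q = -1, 0, 0.5, 1: fermion-like, "free", intermediate, boson-like
results = {}
for q in (-1.0, 0.0, 0.5, 1.0):
    a, adag = quon_ops(8, q)
    # a a† - q a†a should equal the identity, except where truncation
    # cuts the ladder off (the last diagonal entry)
    results[q] = a @ adag - q * (adag @ a)
```

For every q in [−1, 1] the relation holds exactly on the interior of the truncated space, which is why a single family of operators can interpolate between the boson and fermion limits.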
NASA Astrophysics Data System (ADS)
2014-02-01
When promoting the value of their research or procuring funding, researchers often need to explain the significance of their work to the community -- something that can be just as tricky as the research itself.
Statistical aspects of solar flares
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1987-01-01
A survey of the statistical properties of 850 H alpha solar flares during 1975 is presented. Comparison of the results found here with those reported elsewhere for different epochs is accomplished. Distributions of rise time, decay time, and duration are given, as are the mean, mode, median, and 90th percentile values. Proportions by selected groupings are also determined. For flares in general, mean values for rise time, decay time, and duration are 5.2 ± 0.4 min and 18.1 ± 1.1 min, respectively. Subflares, accounting for nearly 90 percent of the flares, had mean values lower than those found for flares of H alpha importance greater than 1, and the differences are statistically significant. Likewise, flares of bright and normal relative brightness have mean values of decay time and duration that are significantly longer than those computed for faint flares, and mass-motion related flares are significantly longer than non-mass-motion related flares. Seventy-three percent of the mass-motion related flares are categorized as being a two-ribbon flare and/or being accompanied by a high-speed dark filament. Slow rise time flares (rise time greater than 5 min) have a mean value for duration that is significantly longer than that computed for fast rise time flares, and long-lived duration flares (duration greater than 18 min) have a mean value for rise time that is significantly longer than that computed for short-lived duration flares, suggesting a positive linear relationship between rise time and duration for flares. Monthly occurrence rates for flares in general and by group are found to be linearly related in a positive sense to monthly sunspot number. Statistical testing reveals the association between sunspot number and numbers of flares to be significant at the 95 percent level of confidence, and the t statistic for slope is significant at greater than the 99 percent level of confidence. Dependent upon the specific fit, between 58 percent and 94 percent of
Statistical insight: a review.
Vardell, Emily; Garcia-Barcena, Yanira
2012-01-01
Statistical Insight is a database that offers the ability to search across multiple sources of data, including the federal government, private organizations, research centers, and international intergovernmental organizations in one search. Two sample searches on the same topic, a basic and an advanced, were conducted to evaluate the database.
Pilot Class Testing: Statistics.
ERIC Educational Resources Information Center
Washington Univ., Seattle. Washington Foreign Language Program.
Statistics derived from test score data from the pilot classes participating in the Washington Foreign Language Program are presented in tables in this report. An index accompanies the tables, itemizing the classes by level (FLES, middle, and high school), grade test, language skill, and school. MLA-Coop test performances for each class were…
Statistical Reasoning over Lunch
ERIC Educational Resources Information Center
Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.
2011-01-01
Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…
Selected Outdoor Recreation Statistics.
ERIC Educational Resources Information Center
Bureau of Outdoor Recreation (Dept. of Interior), Washington, DC.
In this recreational information report, 96 tables are compiled from Bureau of Outdoor Recreation programs and surveys, other governmental agencies, and private sources. Eight sections comprise the document: (1) The Bureau of Outdoor Recreation, (2) Federal Assistance to Recreation, (3) Recreation Surveys for Planning, (4) Selected Statistics of…
ASURV: Astronomical SURVival Statistics
NASA Astrophysics Data System (ADS)
Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.
2014-06-01
ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.
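ASURV itself is FORTRAN 77, but the Kaplan-Meier product-limit estimator it implements is easy to sketch. The version below is an independent Python illustration for right-censored data, not a port of ASURV:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier survival estimate for right-censored data.
    times: event/censoring times; observed: 1 if the event occurred,
    0 if the observation was censored at that time."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(observed)[order]
    n = len(t)
    surv, s, at_risk = [], 1.0, n
    i = 0
    while i < n:
        j, deaths = i, 0
        while j < n and t[j] == t[i]:  # group tied times
            deaths += d[j]
            j += 1
        if deaths:
            s *= 1 - deaths / at_risk  # product-limit update
            surv.append((t[i], s))
        at_risk -= j - i               # censored cases leave the risk set
        i = j
    return surv
```

Usage: `kaplan_meier([1, 2, 3], [1, 0, 1])` treats the observation at time 2 as censored, so it reduces the risk set without stepping the survival curve down.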
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
Spitball Scatterplots in Statistics
ERIC Educational Resources Information Center
Wagaman, John C.
2012-01-01
This paper describes an active learning idea that I have used in my applied statistics class as a first lesson in correlation and regression. Students propel spitballs from various standing distances from the target and use the recorded data to determine if the spitball accuracy is associated with standing distance and review the algebra of lines…
Geopositional Statistical Methods
NASA Technical Reports Server (NTRS)
Ross, Kenton
2006-01-01
RMSE-based methods distort circular error estimates (up to 50% overestimation). The empirical approach is the only statistically unbiased estimator offered. The Ager modification to the Shultz approach is nearly unbiased, but cumbersome. All methods hover around 20% uncertainty (at 95% confidence) for low geopositional bias error estimates. This requires careful consideration in the assessment of higher-accuracy products.
ERIC Educational Resources Information Center
Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah
2004-01-01
In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…
Education Statistics Quarterly, 2003.
ERIC Educational Resources Information Center
Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Analogies for Understanding Statistics
ERIC Educational Resources Information Center
Hocquette, Jean-Francois
2004-01-01
This article describes a simple way to explain the limitations of statistics to scientists and students to avoid the publication of misleading conclusions. Biologists examine their results extremely critically and carefully choose the appropriate analytic methods depending on their scientific objectives. However, no such close attention is usually…
Polyimide processing additives
NASA Technical Reports Server (NTRS)
Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)
1992-01-01
A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.
Polyimide processing additives
NASA Technical Reports Server (NTRS)
Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)
1993-01-01
A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.
Additional Types of Neuropathy
Charcot's Joint, also called neuropathic arthropathy, ... can stop bone destruction and aid healing. Cranial neuropathy affects the 12 pairs of nerves ...
Food Additives and Hyperkinesis
ERIC Educational Resources Information Center
Wender, Ester H.
1977-01-01
The hypothesis that food additives are causally associated with hyperkinesis and learning disabilities in children is reviewed, and available data are summarized. Available from: American Medical Association 535 North Dearborn Street Chicago, Illinois 60610. (JG)
Lundby, W.
1993-06-29
A method is described of controlling, reducing, or eliminating ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases, comprising the addition of iodine or compounds of iodine to hydrocarbon-based fuels prior to or during combustion, in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel, by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.
NASA Technical Reports Server (NTRS)
Black, D. C.
1986-01-01
The significance of brown dwarfs for resolving some major problems in astronomy is discussed. The importance of brown dwarfs for models of star formation by fragmentation of molecular clouds and for obtaining independent measurements of the ages of stars in binary systems is addressed. The relationship of brown dwarfs to planets is considered.
Statistics, Uncertainty, and Transmitted Variation
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
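The idea of transmitted variation can be made concrete with a Monte Carlo sketch. The response function `f` and the settings below are invented for illustration (the presentation's own examples are not reproduced): variation of width σ in the input passes through f and emerges with standard deviation approximately |f′(μ)|σ, the first-order "delta method" prediction.

```python
import numpy as np

def f(x):
    """Example response function (an assumption for this sketch)."""
    return np.exp(0.5 * x)

mu, sigma = 1.0, 0.1
rng = np.random.default_rng(2)
x = rng.normal(mu, sigma, 200_000)        # variation present in the input
mc_sd = f(x).std()                        # variation transmitted to the output
delta_sd = abs(0.5 * np.exp(0.5 * mu)) * sigma  # first-order prediction |f'(mu)|*sigma
```

For small input variation the two agree closely; for larger σ or strongly curved f, the Monte Carlo estimate captures higher-order transmission effects that the linearized prediction misses.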
NASA Astrophysics Data System (ADS)
Maccone, C.
In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log- normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. This distance is denoted by new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
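The two key claims — that N is approximately lognormal and that its mean equals the plain product of the factor means — are easy to check by simulation. The seven factor ranges below are arbitrary stand-ins for illustration, not Drake's or Maccone's actual estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = 200_000
# Seven positive factors with arbitrary (here: uniform) distributions.
bounds = [(1, 10), (0.2, 1), (0.5, 5), (0.1, 1), (0.01, 0.2), (0.01, 0.3), (100, 10_000)]
factors = [rng.uniform(lo, hi, samples) for lo, hi in bounds]
N = np.prod(factors, axis=0)                                   # the random "Drake number"
mean_product = np.prod([(lo + hi) / 2 for lo, hi in bounds])   # product of the factor means

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

# log N is a sum of independent terms, so the CLT pushes it toward a Gaussian:
skew_log = skewness(np.log(N))  # small in magnitude
skew_N = skewness(N)            # large: N itself is strongly right-skewed
```

By independence, E[N] is exactly the product of the seven means, recovering the ordinary Drake value, while the distribution of N around that mean is heavily right-skewed, as a lognormal should be.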
Nock, Richard; Nielsen, Frank
2004-11-01
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained.
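A minimal sketch of the idea — visiting neighbouring regions in order of increasing contrast and merging when the difference of means is within a statistical fluctuation bound — can be written for a 1-D signal. This is a simplified illustration, not the paper's exact predicate or its image version; `Q` and `g` are placeholder parameters (roughly, a coarseness control and the gray-level range):

```python
import numpy as np

def merge_regions(signal, Q=32.0, g=256.0):
    """Greedy 1-D region merging in increasing order of neighbour contrast,
    with a simplified statistical predicate inspired by region merging."""
    n = len(signal)
    parent = list(range(n))
    total = [float(v) for v in signal]
    size = [1] * n

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # process adjacent pairs from smallest to largest contrast
    pairs = sorted(range(n - 1), key=lambda i: abs(signal[i + 1] - signal[i]))
    for i in pairs:
        a, b = find(i), find(i + 1)
        if a == b:
            continue
        ma, mb = total[a] / size[a], total[b] / size[b]
        # merge when the mean difference is within the fluctuation bound,
        # which tightens as the regions grow
        thresh = g * np.sqrt((1 / size[a] + 1 / size[b]) / (2 * Q))
        if abs(ma - mb) <= thresh:
            parent[b] = a
            total[a] += total[b]
            size[a] += size[b]
    return [find(i) for i in range(n)]
```

On a noisy two-level signal, the low-contrast pairs merge first, the bound shrinks as regions grow, and the large jump between the two segments survives as the single boundary.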
Modeling cosmic void statistics
NASA Astrophysics Data System (ADS)
Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.
2016-10-01
Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density- and velocity-profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM n-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed that aim to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independently of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating the sensitivity and specificity of a forecasting method independently. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
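A toy version of the validation problem: score a binary alarm sequence against a binary event sequence, then compare with the analytic expectation for a random predictor. The scoring rules below are a simplified illustration, not the authors' framework.

```python
import numpy as np

def alarm_performance(alarms, events, horizon):
    """Sensitivity/specificity of an alarm sequence (illustrative sketch).

    An event counts as predicted if any alarm fired within `horizon` bins
    before it; an alarm with no event in the following `horizon` bins
    counts as a false alarm. Definitions of these rates vary in the
    literature; this is one simple convention."""
    alarms = np.asarray(alarms, bool)
    events = np.asarray(events, bool)

    hits = sum(alarms[max(0, t - horizon):t].any() for t in np.flatnonzero(events))
    n_events = int(events.sum())
    sensitivity = hits / n_events if n_events else float("nan")

    false_alarms = sum(not events[t + 1:t + 1 + horizon].any()
                       for t in np.flatnonzero(alarms))
    n_alarms = int(alarms.sum())
    specificity = 1.0 - false_alarms / n_alarms if n_alarms else float("nan")
    return sensitivity, specificity

# A random predictor firing each bin with probability p has expected
# sensitivity of roughly 1 - (1 - p)**horizon, independent of the events;
# a real method must beat that benchmark to claim predictive power.
alarms = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
events = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
sens, spec = alarm_performance(alarms, events, horizon=2)
print(sens, spec)   # both alarms precede events here, so 1.0 1.0
```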
Tougher Addition Polyimides Containing Siloxane
NASA Technical Reports Server (NTRS)
St. Clair, T. L.; Maudgal, S.
1986-01-01
Laminates show increased impact resistances and other desirable mechanical properties. Bismaleamic acid extended by reaction of diaminosiloxane with maleic anhydride in 1:1 molar ratio, followed by reaction with half this molar ratio of aromatic dianhydride. Bismaleamic acid also extended by reaction of diaminosiloxane with maleic anhydride in 1:2 molar ratio, followed by reaction with half this molar ratio of aromatic diamine (Michael-addition reaction). Impact resistances improved over those of unmodified bismaleimide, showing significant increase in toughness. Aromatic addition polyimides developed as both matrix and adhesive resins for applications on future aircraft and spacecraft.
Journey Through Statistical Mechanics
NASA Astrophysics Data System (ADS)
Yang, C. N.
2013-05-01
My first involvement with statistical mechanics and the many body problem was when I was a student at The National Southwest Associated University in Kunming during the war. At that time Professor Wang Zhu-Xi had just come back from Cambridge, England, where he was a student of Fowler, and his thesis was on phase transitions, a hot topic at that time, and still a very hot topic today...
Statistical Methods in Cosmology
NASA Astrophysics Data System (ADS)
Verde, L.
2010-03-01
The advent of large data-sets in cosmology has meant that in the past 10 or 20 years our knowledge and understanding of the Universe have changed not only quantitatively but also, and most importantly, qualitatively. Cosmologists rely on data in which a host of useful information is enclosed, but encoded in a non-trivial way. The challenges in extracting this information must be overcome to make the most of a large experimental effort. Even after having converged to a standard cosmological model (the LCDM model) we should keep in mind that this model is described by 10 or more physical parameters, and if we want to study deviations from it, the number of parameters is even larger. Dealing with such a high-dimensional parameter space and finding parameter constraints is a challenge in itself. Cosmologists want to be able to compare and combine different data sets both for testing for possible disagreements (which could indicate new physics) and for improving parameter determinations. Finally, cosmologists in many cases want to find out, before actually doing the experiment, how much one would be able to learn from it. For all these reasons, sophisticated statistical techniques are being employed in cosmology, and it has become crucial to know some statistical background to understand recent literature in the field. I will introduce some statistical tools that any cosmologist should know about in order to be able to understand recently published results from the analysis of cosmological data sets. I will not present a complete and rigorous introduction to statistics as there are several good books which are reported in the references. The reader should refer to those.
NASA Astrophysics Data System (ADS)
Talkner, Peter
2003-07-01
The statistical properties of the transitions of a discrete Markov process are investigated in terms of entrance times. A simple formula for their density is given and used to measure the synchronization of a process with a periodic driving force. For the McNamara-Wiesenfeld model of stochastic resonance we find parameter regions in which the transition frequency of the process is locked with the frequency of the external driving.
1979 DOE statistical symposium
Gardiner, D.A.; Truett, T.
1980-09-01
The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.
Hockey sticks, principal components, and spurious significance
NASA Astrophysics Data System (ADS)
McIntyre, Stephen; McKitrick, Ross
2005-02-01
The "hockey stick" shaped temperature reconstruction of Mann et al. (1998, 1999) has been widely applied. However, it has not previously been noted in print that, prior to their principal components (PCs) analysis on tree ring networks, they carried out an unusual data transformation which strongly affects the resulting PCs. Their method, when tested on persistent red noise, nearly always produces a hockey stick shaped first principal component (PC1) and overstates the first eigenvalue. In the controversial 15th century period, the MBH98 method effectively selects only one species (bristlecone pine) into the critical North American PC1, making it implausible to describe it as the "dominant pattern of variance". Through Monte Carlo analysis, we show that MBH98 benchmarks for significance of the Reduction of Error (RE) statistic are substantially understated and, using a range of cross-validation statistics, we show that the MBH98 15th century reconstruction lacks statistical significance.
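The RE statistic at issue can be stated compactly. A minimal sketch follows (the contested significance benchmarks require Monte Carlo trials on noise series, which are omitted here):

```python
import numpy as np

def reduction_of_error(obs, recon, calibration_mean):
    """Reduction of Error (RE) verification statistic.

    RE = 1 - SSE(reconstruction) / SSE(calibration-period mean used as a
    constant 'no-skill' prediction). RE > 0 is conventionally read as
    skill, but the proper significance threshold must come from Monte
    Carlo trials on noise series, which is the point at issue in MM05."""
    obs = np.asarray(obs, float)
    recon = np.asarray(recon, float)
    sse = np.sum((obs - recon) ** 2)
    sse_ref = np.sum((obs - calibration_mean) ** 2)
    return 1.0 - sse / sse_ref

obs = np.array([0.1, 0.3, -0.2, 0.4, 0.0])
re_perfect = reduction_of_error(obs, obs, calibration_mean=0.12)   # RE = 1
re_null = reduction_of_error(obs, np.full(5, 0.12), 0.12)          # RE = 0
print(re_perfect, re_null)
```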
Latest statistics on cardiovascular disease in Australia.
Waters, Anne-Marie; Trinh, Lany; Chau, Theresa; Bourchier, Michael; Moon, Lynelle
2013-06-01
The results presented herein summarize the most up-to-date cardiovascular statistics available at this time in Australia. The analysis presented here is based on and extends results published in two Australian Institute of Health and Welfare (AIHW) reports, namely Cardiovascular disease: Australian facts 2011 and the cardiovascular disease (CVD) section of Australia's Health 2012. Despite significant improvements in the cardiovascular health of Australians in recent decades, CVD continues to impose a heavy burden on Australians in terms of illness, disability and premature death. Direct health care expenditure for CVD exceeds that for any other disease group. The most recent national data have been analysed to describe patterns and trends in CVD hospitalization and death rates, with additional analysis by Indigenous status, remoteness and socioeconomic group. The incidence of and case-fatality from major coronary events has also been examined. Although CVD death rates have declined steadily in Australia since the late 1960s, CVD still accounts for a larger proportion of deaths (33% in 2009) than any other disease group. Worryingly, the rate at which the coronary heart disease death rate has been falling in recent years has slowed in younger (35-54 years) age groups. Between 1998-99 and 2009-10, the overall rate of hospitalizations for CVD fell by 13%, with declines observed for most major CVDs. In conclusion, CVD disease remains a significant health problem in Australia despite decreasing death and hospitalization rates. PMID:23517328
Guta, Madalin; Butucea, Cristina
2010-10-15
The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C^d)^{⊗r} with r
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Phenylethynyl Containing Reactive Additives
NASA Technical Reports Server (NTRS)
Connell, John W. (Inventor); Smith, Joseph G., Jr. (Inventor); Hergenrother, Paul M. (Inventor)
2002-01-01
Phenylethynyl containing reactive additives were prepared from aromatic diamines containing phenylethynyl groups and various ratios of phthalic anhydride and 4-phenylethynylphthalic anhydride, in glacial acetic acid to form the imide in one step or in N-methyl-2-pyrrolidinone to form the amide acid intermediate. The reactive additives were mixed in various amounts (10% to 90%) with oligomers containing either terminal or pendent phenylethynyl groups (or both) to reduce the melt viscosity and thereby enhance processability. Upon thermal cure, the additives react and become chemically incorporated into the matrix, effecting an increase in crosslink density relative to that of the host resin. This increase in crosslink density has advantageous consequences for the cured resin properties, such as higher glass transition temperature and higher modulus as compared to those of the host resin.
ERIC Educational Resources Information Center
Chan, Shiau Wei; Ismail, Zaleha
2014-01-01
The focus of assessment in statistics has gradually shifted from traditional assessment towards alternative assessment where more attention has been paid to the core statistical concepts such as center, variability, and distribution. In spite of this, there are comparatively few assessments that combine the significant three types of statistical…
Environmental restoration and statistics: Issues and needs
Gilbert, R.O.
1991-10-01
Statisticians have a vital role to play in environmental restoration (ER) activities. One facet of that role is to point out where additional work is needed to develop statistical sampling plans and data analyses that meet the needs of ER. This paper is an attempt to show where statistics fits into the ER process. The statistician, as a member of the ER planning team, works collaboratively with the team to develop the site characterization sampling design, so that data of the quality and quantity required by the specified data quality objectives (DQOs) are obtained. At the same time, the statistician works with the rest of the planning team to design and implement, when appropriate, the observational approach to streamline the ER process and reduce costs. The statistician will also provide the expertise needed to select or develop appropriate tools for statistical analysis that are suited to problems common to waste-site data. These data problems include highly heterogeneous waste forms, large variability in concentrations over space, correlated data, data that do not have a normal (Gaussian) distribution, and measurements below detection limits. Other problems include environmental transport and risk models that yield highly uncertain predictions, and the need to effectively communicate highly technical information, such as sampling plans, site characterization data, statistical analysis results, and risk estimates, to the public. Even though some statistical analysis methods are available "off the shelf" for use in ER, these problems require the development of additional statistical tools, as discussed in this paper. 29 refs.
Factors related to student performance in statistics courses in Lebanon
NASA Astrophysics Data System (ADS)
Naccache, Hiba Salim
The purpose of the present study was to identify factors that may contribute to business students in Lebanese universities having difficulty in introductory and advanced statistics courses. Two statistics courses are required for business majors at Lebanese universities, and students are not obliged to enroll in any math courses before taking them. Drawing on recent educational research, this dissertation attempted to identify the relationships among (1) students’ scores on Lebanese university math admissions tests; (2) students’ scores on a test of very basic mathematical concepts; (3) students’ scores on the Survey of Attitudes Toward Statistics (SATS); (4) course performance as measured by students’ final scores in the course; and (5) their scores on the final exam. Data were collected from 561 students enrolled in multiple sections of two courses: 307 students in the introductory statistics course and 260 in the advanced statistics course, across seven campuses in Lebanon over one semester. The multiple regression results revealed four significant relationships at the introductory level: students’ scores on the math quiz with (1) their final exam scores and (2) their final averages, and the Cognitive subscale of the SATS with (3) their final exam scores and (4) their final averages. These four significant relationships were also found at the advanced level, where two further significant relationships emerged between students’ final averages and the Effort (5) and Affect (6) subscales. No relationship was found between students’ scores on the admissions math tests and either their final exam scores or their final averages, in both the introductory and advanced courses. Although these results were consistent across course formats and instructors, they may encourage Lebanese universities
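The kind of multiple-regression analysis described can be sketched with ordinary least squares on synthetic data. The variable names, effect sizes, and sample below are invented for illustration and merely mimic the reported pattern (quiz and SATS scores predictive, admission scores not):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical cohort, not the study's data

math_quiz = rng.normal(60, 12, n)          # basic-math quiz score
sats_cognitive = rng.normal(4.5, 1.0, n)   # SATS Cognitive subscale
admission = rng.normal(70, 10, n)          # admission math test

# Synthetic outcome: depends on the quiz and SATS scores but NOT on the
# admission test, mimicking the relationships reported above.
final_exam = 10 + 0.6 * math_quiz + 3.0 * sats_cognitive + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), math_quiz, sats_cognitive, admission])
beta, *_ = np.linalg.lstsq(X, final_exam, rcond=None)
print(np.round(beta, 2))   # the admission coefficient comes out near zero
```

In a real analysis one would report standard errors and p-values for each coefficient (e.g. via statsmodels) rather than point estimates alone.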
Statistical considerations in design of spacelab experiments
NASA Technical Reports Server (NTRS)
Robinson, J.
1978-01-01
After making an analysis of experimental error sources, statistical models were developed for the design and analysis of potential Space Shuttle experiments. Guidelines for statistical significance and/or confidence limits of expected results were also included. The models were then tested out on the following proposed Space Shuttle biomedical experiments: (1) bone density by computer tomography; (2) basal metabolism; and (3) total body water. Analysis of those results and therefore of the models proved inconclusive due to the lack of previous research data and statistical values. However, the models were seen as possible guides to making some predictions and decisions.
Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas
2008-01-01
Technology-based instruction represents a recent pedagogical paradigm rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools for communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present results on the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction, and choice of technology to complete assignments. A learning styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual
Multifunctional fuel additives
Baillargeon, D.J.; Cardis, A.B.; Heck, D.B.
1991-03-26
This paper discusses a composition comprising a major amount of a liquid hydrocarbyl fuel and a minor low-temperature flow properties improving amount of an additive product of the reaction of a suitable diol and product of a benzophenone tetracarboxylic dianhydride and a long-chain hydrocarbyl aminoalcohol.
Technology Transfer Automated Retrieval System (TEKTRAN)
Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...
Axiomatic nonextensive statistics at NICA energies
NASA Astrophysics Data System (ADS)
Nasser Tawfik, Abdel
2016-08-01
We discuss the possibility of implementing axiomatic nonextensive statistics, in which the phase-space volume is conjectured to determine the (non)extensive entropy, for particle production at NICA energies. Both Boltzmann-Gibbs and Tsallis statistics are very special cases of this generic (non)extensivity. We conclude that lattice thermodynamics is ab initio extensive and additive, so that nonextensive approaches, including Tsallis statistics, are categorically incompatible with it, while particle production, for instance the particle ratios at various center-of-mass energies, is likely a nonextensive process but certainly not of the Tsallis type. The resulting freeze-out parameters, the temperature and the chemical potentials, are approximately compatible with the ones deduced from Boltzmann-Gibbs statistics.
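For reference, the two entropies contrasted above can be compared directly. This sketch implements the standard Tsallis form (not the paper's generic axiomatic entropy) and checks that the q → 1 limit recovers the additive Boltzmann-Gibbs/Shannon entropy:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    The q -> 1 limit recovers the additive Boltzmann-Gibbs/Shannon
    entropy -sum_i p_i ln p_i; for q != 1 the entropy is nonadditive."""
    p = np.asarray(p, float)
    p = p[p > 0]                       # zero-probability states contribute nothing
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))  # Boltzmann-Gibbs case
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.full(8, 1 / 8)                  # uniform distribution over 8 states
print(tsallis_entropy(p, 1.0))         # ln 8 ~ 2.079 (extensive BG value)
print(tsallis_entropy(p, 2.0))         # (1 - 8*(1/8)**2) / 1 = 0.875
print(tsallis_entropy(p, 1.000001))    # approaches the BG value as q -> 1
```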
Meier, Ann; Lam, David
2012-01-01
Census microdata are ideal for developing statistical literacy of university students. Access, particularly to internationally comparable microdata, has been a significant obstacle. The IPUMS-International project offers a uniform solution to providing access for policy analysts, researchers, and students to integrated microdata and metadata, while protecting statistical confidentiality. Eighty-five official statistical agencies have endorsed IPUMS-I dissemination principles and entrusted microdata for 249 censuses to the project. From June 2010, 159 integrated samples, representing 55 countries and totaling over 325 million person records, are available at no cost to researchers and their students. The database is being expanded with the addition of samples for 5–10 countries per year as well as samples for the 2010 round of censuses. This paper illustrates two approaches to using IPUMS-I census microdata in the university curriculum to promote statistical literacy among undergraduates. PMID:25279022
Anthropological significance of phenylketonuria.
Saugstad, L F
1975-01-01
The highest incidence rates of phenylketonuria (PKU) have been observed in Ireland and Scotland. Parents heterozygous for PKU in Norway differ significantly from the general population in the Rhesus, Kell and PGM systems. The parents investigated showed an excess of Rh negative, Kell plus and PGM type 1 individuals, which makes them similar to the present populations in Ireland and Scotland. It is postulated that the heterozygotes for PKU in Norway are descended from a completely assimilated sub-population of Celtic origin, who came or were brought here 1000 years ago. Bronze objects of Western European (Scottish, Irish) origin, found in Viking graves widely distributed in Norway, have been taken as evidence of Vikings returning with loot (including a number of Celts) from Western Viking settlements. The continuity of residence since the Viking age in most habitable parts of Norway, and what seems to be a nearly complete regional relationship between the sites where Viking graves contain western imported objects and the birthplaces of grandparents of PKUs identified in Norway, lend further support to the hypothesis that the heterozygotes for PKU in Norway are descended from a completely assimilated subpopulation. The remarkable resemblance between Iceland and Ireland, in respect of several genetic markers (including the Rhesus, PGM and Kell systems), is considered to be an expression of a similar proportion of people of Celtic origin in each of the two countries. Their identical, high incidence rates of PKU are regarded as further evidence of this. The significant decline in the incidence of PKU when one passes from Ireland, Scotland and Iceland, to Denmark and on to Norway and Sweden, is therefore explained as being related to a reduction in the proportion of inhabitants of Celtic extraction in the respective populations.
Censored data treatment using additional information in intelligent medical systems
NASA Astrophysics Data System (ADS)
Zenkova, Z. N.
2015-11-01
Statistical procedures are an important part of modern intelligent medical systems. They are used for processing, mining and analysis of different types of data about patients and their diseases, and they help inform decisions regarding diagnosis, treatment, medication or surgery. In many cases the data are censored or incomplete, and it is well known that censoring considerably reduces the efficiency of statistical procedures. In this paper the author gives a brief review of approaches that improve such procedures using additional information, and describes a modified estimator of an unknown cumulative distribution function that incorporates additional information in the form of an exactly known quantile. The additional information is used by projecting a classical estimator onto a set of estimators with certain properties. The Kaplan-Meier estimator is considered as the estimator of the unknown cumulative distribution function, and the properties of the modified estimator are investigated by simulation for the case of single right censoring.
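As background for the modification described above, here is a compact Kaplan-Meier implementation (the plain product-limit estimator only; the paper's projection onto a known quantile is not reproduced):

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier (product-limit) estimate of the survival function
    under right censoring: S(t) = prod over event times t_i <= t of
    (1 - d_i / n_i), with d_i events and n_i subjects at risk at t_i."""
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)   # False marks a censored time
    order = np.argsort(times)
    times, observed = times[order], observed[order]

    at_risk = len(times)
    s, surv_t, surv_s = 1.0, [], []
    for t in np.unique(times):
        d = np.sum((times == t) & observed)   # events at time t
        if d:
            s *= 1.0 - d / at_risk
            surv_t.append(t)
            surv_s.append(s)
        at_risk -= np.sum(times == t)         # events and censored leave the risk set
    return np.array(surv_t), np.array(surv_s)

# Subject at t=3 is right-censored (observed=0)
t, s = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
print(dict(zip(t.tolist(), np.round(s, 3).tolist())))
# {1.0: 0.8, 2.0: 0.6, 4.0: 0.3, 5.0: 0.0}
```

Note how the censored observation at t = 3 leaves the survival curve unchanged but shrinks the risk set, so the drop at t = 4 is larger than it would be without censoring.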
Statistical design for microwave systems
NASA Technical Reports Server (NTRS)
Cooke, Roland; Purviance, John
1991-01-01
This paper presents an introduction to statistical system design. Basic ideas needed to understand statistical design and a method for implementing it are presented. The nonlinear characteristics of the system amplifiers and mixers are accounted for in the given examples. Specifications of group delay, signal-to-noise ratio, and output power are considered in these statistical designs.
Experimental Mathematics and Computational Statistics
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
Who Needs Statistics?
You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.
Statistical physics and ecology
NASA Astrophysics Data System (ADS)
Volkov, Igor
This work addresses the applications of the methods of statistical physics to problems in population ecology. A theoretical framework based on stochastic Markov processes for the unified neutral theory of biodiversity is presented, and an analytical solution for the relative species abundance distribution both in the large meta-community and in the small local community is obtained. It is shown that the framework of the current neutral theory in ecology can be easily generalized to incorporate symmetric density dependence. An analytically tractable model is studied that provides an accurate description of beta-diversity and exhibits novel scaling behavior that leads to links between ecological measures such as relative species abundance and the species-area relationship. We develop a simple framework that incorporates the Janzen-Connell, dispersal and immigration effects and leads to a description of the distribution of relative species abundance, the equilibrium species richness, beta-diversity and the species-area relationship, in good accord with data. It is also shown that an ecosystem can be mapped into an unconventional statistical ensemble and is quite generally tuned in the vicinity of a phase transition where bio-diversity and the use of resources are optimized. We also perform a detailed study of the unconventional statistical ensemble, in which, unlike in physics, the total number of particles and the energy are not fixed but bounded. We show that the temperature and the chemical potential play a dual role: they determine the average energy and the population of the levels in the system, and at the same time they act as an imbalance between the energy and population ceilings and the corresponding average values. Different types of statistics (Boltzmann, Bose-Einstein, Fermi-Dirac and one corresponding to the description of a simple ecosystem) are considered. In all cases, we show that the systems may undergo a first or a second order
International petroleum statistics report
1995-10-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial
Cobo, Erik; Selva-O'Callagham, Albert; Ribera, Josep-Maria; Cardellach, Francesc; Dominguez, Ruth; Vilardell, Miquel
2007-01-01
Background Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of either adding a statistical peer reviewer or suggesting the use of checklists such as CONSORT or STARD to clinical reviewers or both. Methodology and Principal Findings Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with “no statistical expert” and “no checklist” as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers provide an 80% power to test a 55% standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and final post peer review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 that were lost to follow-up showed no differences in initial quality to the followed-up papers. Hence, 115 were included in the main analysis, with 16 rejected for publication after peer review. 21 (18.3%) of the 115 included papers were interventions, 46 (40.0%) were longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6–24
Vinyl capped addition polyimides
NASA Technical Reports Server (NTRS)
Vannucci, Raymond D. (Inventor); Malarik, Diane C. (Inventor); Delvigs, Peter (Inventor)
1991-01-01
Polyimide resins (PMR) are generally useful where high strength and temperature capabilities are required (at temperatures up to about 700 F). Polyimide resins are particularly useful in applications such as jet engine compressor components, for example, blades, vanes, air seals, air splitters, and engine casing parts. Aromatic vinyl capped addition polyimides are obtained by reacting a diamine, an ester of tetracarboxylic acid, and an aromatic vinyl compound. Low void materials with improved oxidative stability when exposed to 700 F air may be fabricated as fiber reinforced high molecular weight capped polyimide composites. The aromatic vinyl capped polyimides are provided with a more aromatic nature and are more thermally stable than highly aliphatic, norbornenyl-type end-capped polyimides employed in PMR resins. The substitution of aromatic vinyl end-caps for norbornenyl end-caps in addition polyimides results in polymers with improved oxidative stability.
Tackifier for addition polyimides
NASA Technical Reports Server (NTRS)
Butler, J. M.; St.clair, T. L.
1980-01-01
A modification of the addition polyimide LaRC-160 was prepared to improve tack and drape and increase prepreg out-time. The essentially solventless, high-viscosity laminating resin is synthesized from low-cost liquid monomers. The modified version takes advantage of a reactive liquid plasticizer which is used in place of solvent and helps solve a major problem: maintaining good prepreg tack and drape, i.e., the ability of the prepreg to adhere to adjacent plies and conform to a desired shape during the lay-up process. This alternative solventless approach allows both longer life of the polymer prepreg and the processing of low-void laminates. The approach appears to be applicable to all addition polyimide systems.
Electrophilic addition of astatine
Norseev, Yu.V.; Vasaros, L.; Nhan, D.D.; Huan, N.K.
1988-03-01
It has been shown for the first time that astatine is capable of undergoing addition reactions with unsaturated hydrocarbons. A new compound of astatine, ethylene astatohydrin, has been obtained, and its retention indices on squalane, Apiezon, and tricresyl phosphate have been determined. The influence of various factors on the formation of ethylene astatohydrin has been studied. On the basis of the results obtained, it is concluded that the univalent cation of astatine in an acidic medium is protonated hypoastatous acid.
Projections of Education Statistics to 2009.
ERIC Educational Resources Information Center
Gerald, Debra E.; Hussar, William J.
This report includes statistics on elementary and secondary schools and institutions of higher education at the national level. Included are projections for enrollment, graduates, classroom teachers, and expenditures to the year 2008. In addition, the report includes projections of enrollment in public elementary and secondary schools and high…
Florida Library Directory with Statistics, 1997.
ERIC Educational Resources Information Center
Florida Dept. of State, Tallahassee. Div. of Library and Information Services.
This 48th annual edition includes listings for over 1,000 libraries of all types in Florida, with contact names, phone numbers, addresses, and e-mail and web addresses. In addition, there is a section of library statistics, showing data on the use, resources, and financial condition of Florida's libraries. The first section consists of listings…
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
Fragile entanglement statistics
NASA Astrophysics Data System (ADS)
Brody, Dorje C.; Hughston, Lane P.; Meier, David M.
2015-10-01
If X and Y are independent, Y and Z are independent, and so are X and Z, one might be tempted to conclude that X, Y, and Z are independent. But it has long been known in classical probability theory that, intuitive as it may seem, this is not true in general. In quantum mechanics one can ask whether analogous statistics can emerge for configurations of particles in certain types of entangled states. The explicit construction of such states, along with the specification of suitable sets of observables that have the purported statistical properties, is not entirely straightforward. We show that an example of such a configuration arises in the case of an N-particle GHZ state, and we are able to identify a family of observables with the property that the associated measurement outcomes are independent for any choice of 2,3,\\ldots ,N-1 of the particles, even though the measurement outcomes for all N particles are not independent. Although such states are highly entangled, the entanglement turns out to be ‘fragile’, i.e. the associated density matrix has the property that if one traces out the freedom associated with even a single particle, the resulting reduced density matrix is separable.
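The classical-probability fact the abstract opens with has a standard textbook counterexample (not taken from the paper itself): two fair coin flips X and Y together with their XOR are pairwise independent but not mutually independent. A minimal check:

```python
from itertools import product

# Classical counterexample: X, Y fair coin flips, Z = X XOR Y.
# Each pair of variables is independent, yet the triple is not.
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]  # equally likely

def prob(pred):
    return sum(1 for o in outcomes if pred(o)) / len(outcomes)

# Pairwise independence: P(A=a, B=b) = P(A=a) P(B=b) for every pair of coordinates.
for i, j in [(0, 1), (1, 2), (0, 2)]:
    for a, b in product([0, 1], repeat=2):
        joint = prob(lambda o: o[i] == a and o[j] == b)
        assert joint == prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)

# Mutual independence fails: P(X=0, Y=0, Z=0) is 1/4, but the product of marginals is 1/8.
print(prob(lambda o: o == (0, 0, 0)))  # 0.25, not 0.125
```

The GHZ construction in the paper is the quantum analogue: outcomes independent for any proper subset of particles, dependent only for the full set.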
Statistical clumped isotope signatures.
Röckmann, T; Popa, M E; Krol, M C; Hofmann, M E G
2016-08-18
High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules.
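The sign of the effect described above can be verified with a back-of-the-envelope calculation (the heavy-atom fractions below are made up for illustration, not taken from the paper): if the two combining atoms come from pools with heavy fractions a and b, the true doubly-substituted abundance is ab, while the stochastic reference built from the bulk composition is ((a+b)/2)^2, which by the AM-GM inequality is at least ab.

```python
# Sketch (not the paper's code) of statistical anti-clumping: two
# indistinguishable atoms drawn from isotopically different pools A and B.
a, b = 0.0020, 0.0010          # hypothetical heavy-isotope fractions of the two pools

p_heavy_pair = a * b            # true abundance of the doubly-substituted molecule
bulk = (a + b) / 2              # bulk composition seen by the conventional reference
stochastic_ref = bulk ** 2      # reference assumes a single well-mixed pool

delta = p_heavy_pair / stochastic_ref - 1   # clumped anomaly relative to the reference
print(f"anomaly = {1000 * delta:.1f} per mil")   # negative whenever a != b
```

For these values the apparent anomaly is about -111 per mil, purely from the heterogeneity of the source pools, with no thermodynamic clumping at all; the anomaly vanishes only when a = b.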
International petroleum statistics report
1997-05-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.
Sufficient Statistics: an Example
NASA Technical Reports Server (NTRS)
Quirein, J.
1973-01-01
The feature selection problem resulting from the transformation x = Bz is considered, where B is a k x n matrix of rank k with k <= n. Such a transformation reduces the dimension of each observation vector z and, in general, entails a loss of information. In terms of the divergence, this loss is expressed by the fact that the average divergence D_B computed using the variable x is less than or equal to the average divergence D computed using the variable z. If D_B = D, then B is said to be a sufficient statistic for the average divergence. If B is a sufficient statistic for the average divergence, it can be shown that the probability of misclassification computed using the variable x (of dimension k <= n) equals the probability of misclassification computed using the variable z. Also included is what is believed to be a new proof of the well-known fact that D >= D_B. Using the techniques needed to prove this fact, it is shown that the Bhattacharyya distance as measured by the variable x is less than or equal to the Bhattacharyya distance as measured by the variable z.
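The inequality for the Bhattacharyya distance can be illustrated numerically with two made-up Gaussian classes (these are not the report's data): when the classes differ only in one coordinate, a B that keeps that coordinate preserves the distance, while a B that drops it destroys all separability.

```python
import numpy as np

def bhattacharyya(m1, m2, S1, S2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    S = (S1 + S2) / 2
    d = m1 - m2
    quad = d @ np.linalg.solve(S, d) / 8
    logdet = 0.5 * np.log(np.linalg.det(S) /
                          np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return quad + logdet

def project(B, mu, S):
    """Distribution of x = Bz when z is Gaussian with mean mu, covariance S."""
    return B @ mu, B @ S @ B.T

# Two classes that differ only in the first coordinate, unit covariances.
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
I2 = np.eye(2)

full = bhattacharyya(mu1, mu2, I2, I2)   # distance computed from the full vector z
results = []
for B in (np.array([[1.0, 0.0]]),        # keeps the discriminating coordinate
          np.array([[0.0, 1.0]])):       # drops it
    m1, S1 = project(B, mu1, I2)
    m2, S2 = project(B, mu2, I2)
    results.append(bhattacharyya(m1, m2, S1, S2))

print(full, results)  # 0.5 and [0.5, 0.0]: the first B is "sufficient", the second is not
```

Neither projection exceeds the full-space distance, matching the abstract's claim that the distance measured from x is at most the distance measured from z.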
Fungi producing significant mycotoxins.
2012-01-01
Mycotoxins are secondary metabolites of microfungi that are known to cause sickness or death in humans or animals. Although many such toxic metabolites are known, it is generally agreed that only a few are significant in causing disease: aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, zearalenone, and ergot alkaloids. These toxins are produced by just a few species from the common genera Aspergillus, Penicillium, Fusarium, and Claviceps. All Aspergillus and Penicillium species either are commensals, growing in crops without obvious signs of pathogenicity, or invade crops after harvest and produce toxins during drying and storage. In contrast, the important Fusarium and Claviceps species infect crops before harvest. The most important Aspergillus species, occurring in warmer climates, are A. flavus and A. parasiticus, which produce aflatoxins in maize, groundnuts, tree nuts, and, less frequently, other commodities. The main ochratoxin A producers, A. ochraceus and A. carbonarius, commonly occur in grapes, dried vine fruits, wine, and coffee. Penicillium verrucosum also produces ochratoxin A but occurs only in cool temperate climates, where it infects small grains. F. verticillioides is ubiquitous in maize, with an endophytic nature, and produces fumonisins, which are generally more prevalent when crops are under drought stress or suffer excessive insect damage. It has recently been shown that Aspergillus niger also produces fumonisins, and several commodities may be affected. F. graminearum, which is the major producer of deoxynivalenol and zearalenone, is pathogenic on maize, wheat, and barley and produces these toxins whenever it infects these grains before harvest. Also included is a short section on Claviceps purpurea, which produces sclerotia among the seeds in grasses, including wheat, barley, and triticale. The main thrust of the chapter contains information on the identification of these fungi and their morphological characteristics, as well as factors
ERIC Educational Resources Information Center
Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael
2011-01-01
Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…
Nonlinear Statistical Modeling of Speech
NASA Astrophysics Data System (ADS)
Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.
2009-12-01
Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and
Should College Algebra be a Prerequisite for Taking Psychology Statistics?
ERIC Educational Resources Information Center
Sibulkin, Amy E.; Butler, J. S.
2008-01-01
In order to consider whether a course in college algebra should be a prerequisite for taking psychology statistics, we recorded students' grades in elementary psychology statistics and in college algebra at a 4-year university. Students who earned credit in algebra prior to enrolling in statistics for the first time had a significantly higher mean…
A Statistics Curriculum for the Undergraduate Chemistry Major
ERIC Educational Resources Information Center
Schlotter, Nicholas E.
2013-01-01
Our ability to statistically analyze data has grown significantly with the maturing of computer hardware and software. However, the evolution of our statistics capabilities has taken place without a corresponding evolution in the curriculum for the undergraduate chemistry major. Most faculty understand the need for a statistical educational…
A Tablet-PC Software Application for Statistics Classes
ERIC Educational Resources Information Center
Probst, Alexandre C.
2014-01-01
A significant deficiency in the area of introductory statistics education exists: Student performance on standardized assessments after a full semester statistics course is poor and students report a very low desire to learn statistics. Research on the current generation of students indicates an affinity for technology and for multitasking.…
"t" for Two: Using Mnemonics to Teach Statistics
ERIC Educational Resources Information Center
Stalder, Daniel R.; Olson, Elizabeth A.
2011-01-01
This article provides a list of statistical mnemonics for instructor use. This article also reports on the potential for such mnemonics to help students learn, enjoy, and become less apprehensive about statistics. Undergraduates from two sections of a psychology statistics course rated 8 of 11 mnemonics as significantly memorable and helpful in…
A statistical development of entropy for the introductory physics course
NASA Astrophysics Data System (ADS)
Schoepf, David C.
2002-02-01
Many introductory physics texts introduce the statistical basis for the definition of entropy in addition to the Clausius definition, ΔS=q/T. We use a model based on equally spaced energy levels to present a way that the statistical definition of entropy can be developed at the introductory level. In addition to motivating the statistical definition of entropy, we also develop statistical arguments to answer the following questions: (i) Why does a system approach a state of maximum number of microstates? (ii) What is the equilibrium distribution of particles? (iii) What is the statistical basis of temperature? (iv) What is the statistical basis for the direction of spontaneous energy transfer? Finally, a correspondence between the statistical and the classical Clausius definitions of entropy is made.
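Question (i) above can be made concrete with a tiny multiplicity calculation in the spirit of the equally-spaced-levels model (the system sizes below are illustrative, not from the article): for two Einstein-like solids sharing q quanta, the number of microstates is W(N, q) = C(q + N - 1, q), and the energy split that maximizes the total count is the one proportional to system size.

```python
from math import comb, log

def W(N, q):
    """Microstate count: ways to place q identical quanta on N oscillators."""
    return comb(q + N - 1, q)

N_A, N_B, q_total = 300, 200, 100        # toy sizes, chosen for illustration
counts = [(qA, W(N_A, qA) * W(N_B, q_total - qA)) for qA in range(q_total + 1)]
q_star, W_max = max(counts, key=lambda t: t[1])

print(q_star)      # 60: energy divides in proportion to system size (300:200)
S = log(W_max)     # entropy of the most probable split, in units of k_B
```

The overwhelming concentration of microstates near q_star = 60 is exactly the statistical argument for why the system approaches, and then stays near, the maximum-multiplicity macrostate.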
Innovative trend significance test and applications
NASA Astrophysics Data System (ADS)
Şen, Zekai
2015-11-01
Hydro-climatological time series might embed characteristics of past changes concerning climate variability in terms of shifts, cyclic fluctuations, and more significantly in the form of trends. Identification of such features from the available records is one of the prime tasks of hydrologists, climatologists, applied statisticians, or experts in related topics. Although there are different trend identification and significance tests in the literature, they require restrictive assumptions, which may not be existent in the structure of hydro-climatological time series. In this paper, a method is suggested with statistical significance test for trend identification in an innovative manner. This method has non-parametric basis without any restrictive assumption, and its application is rather simple with the concept of sub-series comparisons that are extracted from the main time series. The method provides privilege for selection of sub-temporal half periods for the comparison and, finally, generates trend on objective and quantitative manners. The necessary statistical equations are derived for innovative trend identification and statistical significance test application. The application of the proposed methodology is suggested for three time series from different parts of the world including Southern New Jersey annual temperature, Danube River annual discharge, and Tigris River Diyarbakir meteorology station annual total rainfall records. Each record has significant trend with increasing type in the New Jersey case, whereas in other two cases, decreasing trends exist.
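The sub-series comparison at the heart of the method can be sketched in a few lines (the record below is synthetic, and the paper's significance limits and confidence bands are not reproduced here; the trend indicator shown is one common form, 2(mean of second half - mean of first half)/n):

```python
# Minimal sketch of the innovative-trend sub-series comparison (synthetic data).
record = [10 + 0.3 * t + ((7 * t) % 5 - 2) for t in range(40)]   # trend + wiggle

half = len(record) // 2
first = sorted(record[:half])     # sorted first half of the record
second = sorted(record[half:])    # sorted second half

# Pair the sorted halves; points above the 1:1 line indicate an increasing trend.
above = sum(y2 > y1 for y1, y2 in zip(first, second))
slope = 2 * (sum(second) / half - sum(first) / half) / len(record)

print(above, len(first), slope)   # all pairs above the line; indicator recovers 0.3
```

For this record every pair plots above the 1:1 line and the indicator recovers the imposed trend of 0.3 per step; in practice the indicator would be compared against the significance limits derived in the paper.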
NASA Astrophysics Data System (ADS)
Talkner, Peter
2003-03-01
The statistical properties of discrete Markov processes are investigated in terms of entrance times. Simple relations are given for their density and higher order distributions. These quantities are used for introducing a generalized Rice phase and for characterizing the synchronization of a process with an external driving force. For the McNamara Wiesenfeld model of stochastic resonance parameter regions (spanned by the noise strength, driving frequency and strength) are identified in which the process is locked with the frequency of the external driving and in which the diffusion of the Rice phase becomes minimal. At the same time the Fano factor of the number of entrances per period of the driving force has a minimum.
Dienes, J.K.
1983-01-01
An alternative to the use of plasticity theory to characterize the inelastic behavior of solids is to represent the flaws by statistical methods. We have taken such an approach to study fragmentation because it offers a number of advantages. Foremost among these is that, by considering the effects of flaws, it becomes possible to address the underlying physics directly. For example, we have been able to explain why rocks exhibit large strain-rate effects (a consequence of the finite growth rate of cracks), why a spherical explosive imbedded in oil shale produces a cavity with a nearly square section (opening of bedding cracks) and why propellants may detonate following low-speed impact (a consequence of frictional hot spots).
Conditional statistical model building
NASA Astrophysics Data System (ADS)
Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus
2008-03-01
We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly, and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of Corpus Callosum. One fifth of the data set was used as a training set, which was non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters with the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The dice measures for the registration without prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
Statistical design controversy
Evans, L.S.; Hendrey, G.R.; Thompson, K.H.
1985-02-01
This article responds to criticisms that the earlier article by Evans, Hendrey, and Thompson was biased because of omissions and misrepresentations. The authors contend that their conclusion that experimental designs having only one plot per treatment ''were, from the outset, not capable of differentiating between treatment effects and field-position effects'' remains valid and is supported by decades of agronomic research. Irving, Troiano, and McCune treated the article as a review of all studies of acidic rain effects on soybeans; it was not. The article was written out of concern over comparisons being made among studies that purport to evaluate effects of acid deposition on field-grown crops, comparisons that implicitly assume all of the studies are of equal scientific value. They are not. Only experimental approaches that are well focused and designed with appropriate agronomic and statistical procedures should be used for credible regional and national assessments of crop inventories. 12 references.
Rossell, David
2016-01-01
Big Data brings unprecedented power to address scientific, economic and societal issues, but also amplifies the possibility of certain pitfalls. These include using purely data-driven approaches that disregard understanding the phenomenon under study, aiming at a dynamically moving target, ignoring critical data collection issues, summarizing or preprocessing the data inadequately and mistaking noise for signal. We review some success stories and illustrate how statistical principles can help obtain more reliable information from data. We also touch upon current challenges that require active methodological research, such as strategies for efficient computation, integration of heterogeneous data, extending the underlying theory to increasingly complex questions and, perhaps most importantly, training a new generation of scientists to develop and deploy these strategies. PMID:27722040
Statistical physics ""Beyond equilibrium
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
Statistically determined nickel cadmium performance relationships
NASA Technical Reports Server (NTRS)
Gross, Sidney
1987-01-01
A statistical analysis was performed on sealed nickel cadmium cell manufacturing data and cell matching data. The cells subjected to the analysis were 30 Ah sealed Ni/Cd cells, made by General Electric. A total of 213 data parameters was investigated, including such information as plate thickness, amount of electrolyte added, weight of active material, positive and negative capacity, and charge-discharge behavior. Statistical analyses were made to determine possible correlations between test events. The data show many departures from normal distribution. Product consistency from one lot to another is an important attribute for aerospace applications. It is clear from these examples that there are some significant differences between lots. Statistical analyses are seen to be an excellent way to spot those differences. Also, it is now proven beyond doubt that battery testing is one of the leading causes of statistics.
Wide Wide World of Statistics: International Statistics on the Internet.
ERIC Educational Resources Information Center
Foudy, Geraldine
2000-01-01
Explains how to find statistics on the Internet, especially international statistics. Discusses advantages over print sources, including convenience, currency of information, cost effectiveness, and value-added formatting; sources of international statistics; United Nations agencies; search engines and power searching; and evaluating sources. (LRW)
Understanding Statistics and Statistics Education: A Chinese Perspective
ERIC Educational Resources Information Center
Shi, Ning-Zhong; He, Xuming; Tao, Jian
2009-01-01
In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…
Statistical Literacy: Developing a Youth and Adult Education Statistical Project
ERIC Educational Resources Information Center
Conti, Keli Cristina; Lucchesi de Carvalho, Dione
2014-01-01
This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…
Foundations of Statistical Mechanics in Space Plasmas
NASA Astrophysics Data System (ADS)
Livadiotis, G.
2014-12-01
Systems at thermal equilibrium have their distribution function of particle velocities stabilized into a Maxwell distribution, which is connected with the classical framework of Boltzmann-Gibbs (BG) statistical mechanics. However, Maxwell distributions are rare in space plasmas; the vast majority of these plasmas reside at stationary states out of thermal equilibrium, which are described by kappa distributions. Kappa distributions do not embody BG statistics, but instead, they are connected with the generalized statistical framework of non-extensive statistical mechanics that offers a solid theoretical basis for describing particle systems like collisionless space plasmas. Through the statistical formulation of kappa distributions, basic thermodynamic variables like the temperature, thermal pressure, and entropy become physically meaningful and determinable, similarly to their classical BG description at thermal equilibrium. In addition, useful formulations of kappa distributions were developed in order to describe multi-particle distributions, and particle systems with a non-zero potential energy. Finally, the variety of kappa distribution formulations and the proven tools of non-extensive statistical mechanics have been successfully applied to numerous space plasmas throughout the heliosphere, from the inner heliosphere (e.g., the solar wind and planetary magnetospheres) to the outer heliosphere (e.g., the inner heliosheath) and beyond.
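The relationship between the kappa distribution and its Maxwellian limit can be checked numerically. The sketch below uses one common one-dimensional form of the kappa velocity distribution (conventions differ across the literature, so the normalization here is an assumption, not the abstract's own formula); it exhibits the enhanced suprathermal tail at low kappa and the Maxwellian limit as kappa grows.

```python
from math import lgamma, exp, pi, sqrt

def f_kappa(v, kappa, theta=1.0):
    """One common 1-D kappa velocity distribution (conventions vary in the literature)."""
    norm = exp(lgamma(kappa) - lgamma(kappa - 0.5)) / (sqrt(pi * kappa) * theta)
    return norm * (1 + v * v / (kappa * theta * theta)) ** (-kappa)

def f_maxwell(v, theta=1.0):
    """1-D Maxwellian with thermal speed theta."""
    return exp(-v * v / (theta * theta)) / (sqrt(pi) * theta)

# Strongly enhanced suprathermal tail at low kappa ...
print(f_kappa(3.0, 2) / f_maxwell(3.0))          # ratio much greater than 1
# ... and convergence to the Maxwellian as kappa grows large.
print(abs(f_kappa(1.0, 1e6) - f_maxwell(1.0)))   # nearly 0
```

This is the quantitative sense in which thermal equilibrium (the Maxwellian) is recovered as the special kappa-to-infinity member of the family of stationary states.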
NASA Technical Reports Server (NTRS)
1999-01-01
Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.
NASA Technical Reports Server (NTRS)
Kalvinskas, J. J.; Mueller, W. A.; Ingham, J. D. (Inventor)
1980-01-01
The additive is for a raw sewage treatment process of the type in which settling tanks are used to settle out the suspended matter in the raw sewage and to permit adsorption of the dissolved contaminants in the sewage water. The sludge that settles to the bottom of the settling tank is extracted, pyrolyzed, and activated to form activated carbon and ash, which is mixed with the sewage prior to its introduction into the settling tank. The sludge does not provide all of the activated carbon and ash required for adequate treatment of the raw sewage, so carbon must be added to the process; instead of expensive commercial carbon, coal is used as the carbon supplement.
Perspectives on Additive Manufacturing
NASA Astrophysics Data System (ADS)
Bourell, David L.
2016-07-01
Additive manufacturing (AM) has skyrocketed in visibility commercially and in the public sector. This article describes the development of this field from early layered manufacturing approaches of photosculpture, topography, and material deposition. Certain precursors to modern AM processes are also briefly described. The growth of the field over the last 30 years is presented. Included is the standard delineation of AM technologies into seven broad categories. The economics of AM part generation is considered, and the impacts of the economics on application sectors are described. On the basis of current trends, the future outlook will include a convergence of AM fabricators, mass-produced AM fabricators, enabling of topology optimization designs, and specialization in the AM legal arena. Long-term developments with huge impact are organ printing and volume-based printing.
NASA Astrophysics Data System (ADS)
Agrawal, Jyoti; Frampton, Paul H.; Jack Ng, Y.; Nishino, Hitoshi; Yasuda, Osamu
1991-03-01
An extension of the standard model is proposed. The gauge group is SU(2)_X ⊗ SU(3)_C ⊗ SU(2)_S ⊗ U(1)_Q, where all gauge symmetries are unbroken. The colour and electric charge are combined with SU(2)_S, which becomes strongly coupled at approximately 500 GeV and binds preons to form fermionic and vector bound states. The usual quarks and leptons are singlets under SU(2)_X, but additional fermions, called sarks, transform under it and the electroweak group. The present model explains why no more than three light quark-lepton families can exist. Neutral sark baryons, called narks, are candidates for the cosmological dark matter, having the characteristics expected of WIMPs. Further phenomenological implications of sarks are analyzed, including electron-positron annihilation, Z^0 decay, flavor-changing neutral currents, baryon-number non-conservation, sarkonium, and the neutron electric dipole moment.
Statistical Modeling for Radiation Hardness Assurance
NASA Technical Reports Server (NTRS)
Ladbury, Raymond L.
2014-01-01
We cover the models and statistics associated with single-event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what a model allows us to say about the device under test (DUT). In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are the concepts of Bayesian statistics, data fitting, and bounding rates.
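As one concrete instance of the rate bounding mentioned above: when a part shows zero single-event upsets over a test fluence, a classical one-sided upper confidence bound on the upset rate follows directly from the Poisson model. A minimal sketch of that standard textbook calculation, with a hypothetical fluence (not necessarily the procedure presented in the course):

```python
import math

def rate_upper_bound_zero_events(fluence, confidence=0.95):
    """One-sided upper confidence bound on a Poisson event rate when
    zero events are observed over the given fluence: solve
    exp(-rate * fluence) = 1 - confidence for the rate."""
    return -math.log(1.0 - confidence) / fluence

# Hypothetical test: no upsets observed over a fluence of 1e7 ions/cm^2
bound = rate_upper_bound_zero_events(1e7)
print(f"95% upper bound: {bound:.3e} upsets per ion/cm^2")
```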
The Statistical Consulting Center for Astronomy (SCCA)
NASA Technical Reports Server (NTRS)
Akritas, Michael
2001-01-01
The process by which raw astronomical data acquisition is transformed into scientifically meaningful results and interpretation typically involves many statistical steps. Traditional astronomy limits itself to a narrow range of old and familiar statistical methods: means and standard deviations; least-squares methods like χ² minimization; and simple nonparametric procedures such as the Kolmogorov-Smirnov test. These tools are often inadequate for the complex problems and datasets under investigation, and recent years have witnessed an increased usage of maximum-likelihood, survival analysis, multivariate analysis, wavelet, and advanced time-series methods. The Statistical Consulting Center for Astronomy (SCCA) assisted astronomers with the use of sophisticated tools and matched these tools with specific problems. The SCCA operated with two professors of statistics and a professor of astronomy working together. Questions were received by e-mail and were discussed in detail with the questioner. Summaries of those questions and answers leading to new approaches were posted on the Web (www.state.psu.edu/ mga/SCCA). In addition to serving individual astronomers, the SCCA established a Web site for general use that provides hypertext links to selected on-line public-domain statistical software and services. The StatCodes site (www.astro.psu.edu/statcodes) provides over 200 links in the areas of: Bayesian statistics; censored and truncated data; correlation and regression; density estimation and smoothing; general statistics packages and information; image analysis; interactive Web tools; multivariate analysis; multivariate clustering and classification; nonparametric analysis; software written by astronomers; spatial statistics; statistical distributions; time series analysis; and visualization tools. StatCodes has received a remarkably high and constant hit rate of 250 hits/week (over 10,000/year) since its inception in mid-1997. It is of interest to
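The Kolmogorov-Smirnov test mentioned above compares an empirical distribution against a model CDF via the maximum discrepancy D = sup |F_n(x) - F(x)|. A minimal sketch on synthetic data (the sample sizes and "magnitude" framing are illustrative, not from SCCA materials):

```python
import math
import numpy as np

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    f = np.array([cdf(v) for v in x])
    d_plus = np.max(np.arange(1, n + 1) / n - f)   # empirical CDF above model
    d_minus = np.max(f - np.arange(0, n) / n)      # empirical CDF below model
    return max(d_plus, d_minus)

def std_normal_cdf(v):
    return 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))

rng = np.random.default_rng(1)
d_good = ks_statistic(rng.normal(size=500), std_normal_cdf)       # data match model
d_bad = ks_statistic(rng.uniform(-1, 1, size=500), std_normal_cdf)  # data do not
assert d_good < 0.1 < d_bad   # the mismatched model shows a much larger D
```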
Heart Disease and Stroke Statistics
Muscular Dystrophy: Data and Statistics
Statistical methods in physical mapping
Nelson, D.O.
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis, and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.
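A classic back-of-the-envelope progress measure for random-clone mapping of the kind reviewed in Chapter 2 is the Lander-Waterman expected coverage, 1 - e^(-c), where c = NL/G is the redundancy. A minimal sketch with hypothetical clone and genome sizes (the dissertation's directed-strategy models are considerably more elaborate):

```python
import math

def expected_coverage(n_clones, clone_len, genome_len):
    """Lander-Waterman expected fraction of the genome covered by
    randomly placed clones, 1 - exp(-c), with redundancy c = N*L/G."""
    c = n_clones * clone_len / genome_len
    return 1.0 - math.exp(-c)

# Hypothetical project: 5,000 clones of 40 kb over a 40 Mb region (c = 5)
print(round(expected_coverage(5_000, 40_000, 40_000_000), 4))  # 0.9933
```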
Statistical assessment of biosimilar products.
Chow, Shein-Chung; Liu, Jen-Pei
2010-01-01
Biological products or medicines are therapeutic agents that are produced using a living system or organism. Access to these life-saving biological products is limited because of their high costs. Patents on the early biological products will expire in the next few years. This allows other biopharmaceutical/biotech companies to manufacture generic versions of the biological products, which are referred to as follow-on biological products by the U.S. Food and Drug Administration (FDA) or as biosimilar medicinal products by the European Medicines Agency (EMEA) of the European Union (EU). Competition from cost-effective follow-on biological products with equivalent efficacy and safety can cut down costs and hence increase patients' access to the much-needed biological pharmaceuticals. Unlike conventional small-molecule pharmaceuticals, the complexity and heterogeneity of the molecular structure, complicated manufacturing process, different analytical methods, and possibility of severe immunogenicity reactions make evaluation of equivalence (similarity) between biosimilar products and their corresponding innovator product a great challenge for both the scientific community and regulatory agencies. In this paper, we provide an overview of the current regulatory requirements for approval of biosimilar products. A review of current criteria for evaluation of bioequivalence for traditional chemical generic products is provided. A detailed description of the differences between biosimilar and chemical generic products is given with respect to size and structure, immunogenicity, product quality attributes, and manufacturing processes. In addition, statistical considerations including design criteria, fundamental biosimilar assumptions, and statistical methods are proposed. The possibility of using genomic data in the evaluation of biosimilar products is also explored. PMID:20077246
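Bioequivalence and biosimilarity assessments of the kind reviewed here typically rest on two one-sided tests (TOST) against a pre-specified equivalence margin. A minimal large-sample sketch, with hypothetical difference, standard error, and margin (a real submission would use t quantiles and the agency's specific criteria):

```python
def tost_equivalence(diff, se, margin):
    """Two one-sided tests (TOST): conclude equivalence if the observed
    mean difference is significantly above -margin AND significantly
    below +margin. Large-sample (normal) version at one-sided alpha = 0.05."""
    z_crit = 1.6448536269514722   # one-sided 5% normal critical value
    reject_low = (diff + margin) / se > z_crit    # H0: diff <= -margin
    reject_high = (margin - diff) / se > z_crit   # H0: diff >= +margin
    return reject_low and reject_high

# Observed difference 0.02 (SE 0.05) against an equivalence margin of 0.20:
assert tost_equivalence(0.02, 0.05, 0.20) is True
# The same difference from a noisier study cannot establish equivalence:
assert tost_equivalence(0.02, 0.15, 0.20) is False
```

Note the asymmetry with ordinary significance testing: a noisy study fails to show equivalence rather than defaulting to it.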
Thoughts About Theories and Statistics.
Fawcett, Jacqueline
2015-07-01
The purpose of this essay is to share my ideas about the connection between theories and statistics. The essay content reflects my concerns about some researchers' and readers' apparent lack of clarity about what constitutes appropriate statistical testing and conclusions about the empirical adequacy of theories. The reciprocal relation between theories and statistics is emphasized and the conclusion is that statistics without direction from theory is no more than a hobby.
Individualized additional instruction for calculus
NASA Astrophysics Data System (ADS)
Takata, Ken
2010-10-01
College students enrolling in the calculus sequence have a wide variance in their preparation and abilities, yet they are usually taught from the same lecture. We describe an alternative pedagogical model, Individualized Additional Instruction (IAI), which assesses each student frequently and prescribes further instruction and homework based on the student's performance. Our study compares two calculus classes, one taught with mandatory remedial IAI and the other without. The class with mandatory remedial IAI did significantly better on comprehensive multiple-choice exams, participated more frequently in classroom discussion, and showed greater interest in theorem-proving and other advanced topics.
The Mozart Effect: Additional Data.
Hughes, John R.
2002-04-01
After the review of the Mozart effect was published in this journal (Hughes JR. Epilepsy Behav 2001;2:369-417), additional data from the music of Haydn and Liszt have been analyzed that may account for the decrease in seizure activity originally reported during Mozart music. Even with these added data, Mozart's music continued to score significantly higher than the selections from the other six composers in one of the important characteristics of this music, namely, the repetition of the melody. However, Haydn's values were second highest among Mozart, J. S. Bach, Wagner, Beethoven, Chopin, and Liszt.
Castle, Toen; Sussman, Daniel M.; Tanis, Michael; Kamien, Randall D.
2016-01-01
Kirigami uses bending, folding, cutting, and pasting to create complex three-dimensional (3D) structures from a flat sheet. In the case of lattice kirigami, this cutting and rejoining introduces defects into an underlying 2D lattice in the form of points of nonzero Gaussian curvature. A set of simple rules was previously used to generate a wide variety of stepped structures; we now pare back these rules to their minimum. This allows us to describe a set of techniques that unify a wide variety of cut-and-paste actions under the rubric of lattice kirigami, including adding new material and rejoining material across arbitrary cuts in the sheet. We also explore the use of more complex lattices and the different structures that consequently arise. Regardless of the choice of lattice, creating complex structures may require multiple overlapping kirigami cuts, where subsequent cuts are not performed on a locally flat lattice. Our additive kirigami method describes such cuts, providing a simple methodology and a set of techniques to build a huge variety of complex 3D shapes. PMID:27679822
Springer Handbook of Engineering Statistics
NASA Astrophysics Data System (ADS)
Pham, Hoang
The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.
Statistical log analysis made practical
Mitchell, W.K.; Nelson, R.J.
1991-06-01
This paper discusses the advantages of a statistical approach to log analysis. Statistical techniques use inverse methods to calculate formation parameters. The use of statistical techniques has been limited, however, by the complexity of the mathematics and lengthy computer time required to minimize traditionally used nonlinear equations.
Invention Activities Support Statistical Reasoning
ERIC Educational Resources Information Center
Smith, Carmen Petrick; Kenlan, Kris
2016-01-01
Students' experiences with statistics and data analysis in middle school are often limited to little more than making and interpreting graphs. Although students may develop fluency in statistical procedures and vocabulary, they frequently lack the skills necessary to apply statistical reasoning in situations other than clear-cut textbook examples.…
Explorations in Statistics: the Bootstrap
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
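The resampling idea this installment explores can be sketched in a few lines: draw many resamples with replacement from the observed sample, recompute the statistic each time, and read variability off the resulting distribution. A minimal sketch (the sample size, exponential population, and resample count are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=100)   # the one observed sample

# Bootstrap: resample with replacement, recomputing the statistic each time
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])

se_boot = boot_means.std(ddof=1)             # bootstrap standard error
ci = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
print(f"bootstrap SE ~ {se_boot:.3f}, 95% CI ~ [{ci[0]:.2f}, {ci[1]:.2f}]")
```

For the sample mean the bootstrap SE should land near the textbook s/√n; the payoff of the method is that the same recipe works for statistics with no closed-form standard error.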
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Statistics Anxiety and Instructor Immediacy
ERIC Educational Resources Information Center
Williams, Amanda S.
2010-01-01
The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…
Statistics: It's in the Numbers!
ERIC Educational Resources Information Center
Deal, Mary M.; Deal, Walter F., III
2007-01-01
Mathematics and statistics play important roles in peoples' lives today. A day hardly passes that they are not bombarded with many different kinds of statistics. As consumers they see statistical information as they surf the web, watch television, listen to their satellite radios, or even read the nutrition facts panel on a cereal box in the…
Statistics of indistinguishable particles.
Wittig, Curt
2009-07-01
The wave function of a system containing identical particles takes into account the relationship between a particle's intrinsic spin and its statistical property. Specifically, the exchange of two identical particles having odd-half-integer spin results in the wave function changing sign, whereas the exchange of two identical particles having integer spin is accompanied by no such sign change. This is embodied in a term (-1)^(2s), which has the value +1 for integer s (bosons) and -1 for odd-half-integer s (fermions), where s is the particle spin. All of this is well known. In the nonrelativistic limit, a detailed consideration of the exchange of two identical particles shows that exchange is accompanied by a 2π reorientation that yields the (-1)^(2s) term. The same bookkeeping is applicable to the relativistic case described by the proper orthochronous Lorentz group, because any proper orthochronous Lorentz transformation can be expressed as the product of spatial rotations and a boost along the direction of motion. PMID:19552474
International petroleum statistics report
1996-05-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1984 through 1994.
International petroleum statistics report
1995-11-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
International petroleum statistics report
1995-07-27
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
Topics in statistical mechanics
Elser, V.
1984-05-01
This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.
Statistical mechanics of nucleosomes
NASA Astrophysics Data System (ADS)
Chereji, Razvan V.
Eukaryotic cells contain long DNA molecules (about two meters for a human cell) which are tightly packed inside the micrometric nuclei. Nucleosomes are the basic packaging unit of the DNA which allows this millionfold compactification. A longstanding puzzle is to understand the principles which allow cells both to organize their genomes into chromatin fibers in the crowded space of their nuclei and to keep the DNA accessible to many factors and enzymes. With the nucleosomes covering about three quarters of the DNA, their positions are essential because these influence which genes can be regulated by the transcription factors and which cannot. We study physical models which predict the genome-wide organization of the nucleosomes and also the relevant energies which dictate this organization. In the last five years, the study of chromatin has seen many important advances. In particular, in the field of nucleosome positioning, new techniques for identifying nucleosomes and the competing DNA-binding factors have appeared, such as chemical mapping with hydroxyl radicals and ChIP-exo; the resolution of nucleosome maps has increased with the use of paired-end sequencing; and the price of sequencing an entire genome has decreased. We present a rigorous statistical mechanics model which is able to explain the recent experimental results by taking into account nucleosome unwrapping, competition between different DNA-binding proteins, the interaction between histones and DNA, and the interaction between neighboring histones. We show a series of predictions of our new model, all in agreement with the experimental observations.
International petroleum statistics report
1997-07-01
The International Petroleum Statistics Report is a monthly publication that provides current international data. The report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent 12 months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1996; OECD stocks from 1973 through 1996; and OECD trade from 1986 through 1996.
International petroleum statistics report
1996-10-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.
A statistical mechanical problem?
Costa, Tommaso; Ferraro, Mario
2014-01-01
The problem of deriving the processes of perception and cognition, or the modes of behavior, from states of the brain appears to be unsolvable in view of the huge numbers of elements involved. However, neural activities are not random, nor independent, but constrained to form spatio-temporal patterns, and thanks to these restrictions, which in turn are due to connections among neurons, the problem can at least be approached. The situation is similar to what happens in large physical ensembles, where global behaviors are derived from microscopic properties. Despite the obvious differences between neural and physical systems, a statistical mechanics approach is almost inescapable, since the dynamics of the brain as a whole are clearly determined by the outputs of single neurons. In this paper it will be shown how, starting from very simple systems, connectivity engenders levels of increasing complexity in the functions of the brain depending on specific constraints. Correspondingly, levels of explanation must take into account the fundamental role of constraints and assign to each level proper model structures and variables that, on one hand, emerge from outputs of the lower levels, and yet are specific, in that they ignore irrelevant details. PMID:25228891
2008-01-01
There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunities for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study. PMID:18765754
Bhhatarai, Barun; Garg, Rajni; Gramatica, Paola
2010-07-12
Two parallel approaches for quantitative structure-activity relationships (QSAR) are predominant in the literature, one guided by mechanistic methods (including read-across) and another by the use of statistical methods. To bridge the gap between these two approaches and to verify their main differences, a comparative study of mechanistically relevant and statistically relevant QSAR models was performed on a case study of 158 cycloalkyl-pyranones, biologically active as inhibitors (Ki) of HIV protease. Firstly, Multiple Linear Regression (MLR) based models were developed starting from a limited pool of molecular descriptors that have widely proven mechanistic interpretation. Then robust and predictive MLR models were developed on the same set using two different statistical approaches that are unbiased in their choice of input descriptors. Development of the Statistical I models was guided by stepwise addition of descriptors, while Genetic Algorithm based selection of descriptors was used for Statistical II. Internal validation, the standard error of the estimate, and Fisher's significance test were performed for both statistical models. In addition, external validation was performed for the Statistical II model, and the Applicability Domain was verified as normally practiced in this approach. The relationships between the activity and the important descriptors selected in all the models were analyzed and compared. It is concluded that, despite the different type and number of input descriptors, and the applied descriptor selection tools or the algorithms used for developing the final model, the mechanistic and statistical approaches are comparable to each other in terms of quality and also in mechanistic interpretability of the modelling descriptors. Agreement can be observed between these two approaches, and the better result could be a consensus prediction from both models.
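Stepwise addition of descriptors, as used for the Statistical I models, can be sketched as a greedy forward search that repeatedly adds whichever descriptor most improves an OLS fit. A minimal sketch on synthetic data (the descriptor matrix, the residual-sum-of-squares criterion, and the fixed model size are illustrative; the authors' actual procedure may differ in criterion and stopping rule):

```python
import numpy as np

def forward_stepwise(X, y, k):
    """Greedy forward selection: repeatedly add the descriptor column that
    most reduces the residual sum of squares of an OLS fit with intercept."""
    chosen, remaining = [], list(range(X.shape[1]))

    def rss(cols):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(np.sum((y - A @ beta) ** 2))

    for _ in range(k):
        best = min(remaining, key=lambda j: rss(chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(158, 10))   # 158 compounds, 10 hypothetical descriptors
y = 2.0 * X[:, 4] - 1.0 * X[:, 7] + 0.1 * rng.normal(size=158)
assert set(forward_stepwise(X, y, 2)) == {4, 7}   # the true descriptors
```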
Self-Contained Statistical Analysis of Gene Sets
Cannon, Judy L.; Ricoy, Ulises M.; Johnson, Christopher
2016-01-01
Microarrays are a powerful tool for studying differential gene expression. However, lists of many differentially expressed genes are often generated, and unraveling meaningful biological processes from the lists can be challenging. For this reason, investigators have sought to quantify the statistical probability of compiled gene sets rather than individual genes. The gene sets typically are organized around a biological theme or pathway. We compute correlations between different gene set tests and elect to use Fisher’s self-contained method for gene set analysis. We improve Fisher’s differential expression analysis of a gene set by limiting the p-value of an individual gene within the gene set to prevent a small percentage of genes from determining the statistical significance of the entire set. In addition, we also compute dependencies among genes within the set to determine which genes are statistically linked. The method is applied to T-ALL (T-lineage Acute Lymphoblastic Leukemia) to identify differentially expressed gene sets between T-ALL and normal patients and T-ALL and AML (Acute Myeloid Leukemia) patients. PMID:27711232
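The capped Fisher statistic described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the floor value and the closed-form chi-square survival function (valid for even degrees of freedom) are assumptions of this sketch.

```python
import math

def chi2_sf_even(x, df):
    """Chi-square survival function for even df, via the closed form
    exp(-x/2) * sum_{j < df/2} (x/2)^j / j!."""
    assert df % 2 == 0, "closed form requires even df"
    h = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, df // 2):
        term *= h / j
        total += term
    return math.exp(-h) * total

def fisher_combined(pvals, floor=1e-4):
    """Fisher's method on a gene set; individual p-values are clipped
    from below so no single gene can dominate the set-level statistic
    (the floor value here is a hypothetical choice)."""
    clipped = [max(p, floor) for p in pvals]
    stat = -2.0 * sum(math.log(p) for p in clipped)
    return stat, chi2_sf_even(stat, 2 * len(clipped))
```

With a single p-value and a negligible floor, the combined p-value reduces to the input p-value, which is a handy sanity check; raising the floor weakens the influence of an extremely small individual p-value, as intended.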
Statistical analysis of single-trial Granger causality spectra.
Brovelli, Andrea
2012-01-01
Granger causality analysis is becoming central for the analysis of interactions between neural populations and oscillatory networks. However, it is currently unclear whether single-trial estimates of Granger causality spectra can be used reliably to assess directional influence. We addressed this issue by combining single-trial Granger causality spectra with statistical inference based on general linear models. The approach was assessed on synthetic and neurophysiological data. Synthetic bivariate data was generated using two autoregressive processes with unidirectional coupling. We simulated two hypothetical experimental conditions: the first mimicked a constant and unidirectional coupling, whereas the second modelled a linear increase in coupling across trials. The statistical analysis of single-trial Granger causality spectra, based on t-tests and linear regression, successfully recovered the underlying pattern of directional influence. In addition, we characterised the minimum number of trials and coupling strengths required for significant detection of directionality. Finally, we demonstrated the relevance for neurophysiology by analysing two local field potentials (LFPs) simultaneously recorded from the prefrontal and premotor cortices of a macaque monkey performing a conditional visuomotor task. Our results suggest that the combination of single-trial Granger causality spectra and statistical inference provides a valuable tool for the analysis of large-scale cortical networks and brain connectivity.
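The synthetic benchmark in this abstract (two autoregressive processes with unidirectional coupling) is easy to reproduce. The sketch below is a toy version under assumed parameters (AR coefficients of 0.5, unit-variance noise), recovering the coupling coefficient by ordinary least squares rather than by full Granger causality spectra.

```python
import random

def simulate(n, coupling, seed=1):
    """Bivariate AR(1) with unidirectional coupling x -> y
    (hypothetical parameters, for illustration only)."""
    rng = random.Random(seed)
    x, y = [0.0], [0.0]
    for _ in range(n):
        new_x = 0.5 * x[-1] + rng.gauss(0, 1)
        new_y = 0.5 * y[-1] + coupling * x[-1] + rng.gauss(0, 1)
        x.append(new_x)
        y.append(new_y)
    return x, y

def ols2(z, u, v):
    """Fit z ~ a*u + c*v (zero-mean series, no intercept) by solving
    the 2x2 normal equations."""
    suu = sum(ui * ui for ui in u)
    svv = sum(vi * vi for vi in v)
    suv = sum(ui * vi for ui, vi in zip(u, v))
    szu = sum(zi * ui for zi, ui in zip(z, u))
    szv = sum(zi * vi for zi, vi in zip(z, v))
    det = suu * svv - suv * suv
    return (szu * svv - szv * suv) / det, (szv * suu - szu * suv) / det

x, y = simulate(2000, coupling=0.5)
# Regress y_t on y_{t-1} and x_{t-1}; c_hat estimates the directed coupling.
a_hat, c_hat = ols2(y[1:], y[:-1], x[:-1])
```

A t-test on such coupling estimates across trials, as in the paper, would then assess whether the directional influence is statistically reliable.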
The sensitivity analysis of the economic and economic statistical designs of the synthetic X̄ chart
NASA Astrophysics Data System (ADS)
Yeong, Wai Chung; Khoo, Michael Boon Chong; Chong, Jia Kit; Lim, Shun Jinn; Teoh, Wei Lin
2014-12-01
The economic and economic statistical designs allow the practitioner to implement the control chart in an economically optimal manner. For the economic design, the optimal chart parameters are obtained to minimize the cost, while for the economic statistical design, additional constraints in terms of the average run lengths are imposed. However, these designs involve the estimation of quite a number of input parameters, some of which are difficult to estimate accurately. Thus, a sensitivity analysis is required in order to identify which parameters need to be estimated accurately, and which require just a rough estimate. This study focuses on the significance of 11 input parameters toward the optimal cost and average run lengths of the synthetic X̄ chart. The significant input parameters are identified through a two-level fractional factorial design, which allows interaction effects to be identified. An analysis of variance is performed to obtain the P-values using the Minitab software. The significant input parameters and interactions on the optimal cost and average run lengths are identified based on a 5% significance level. The results of this study show that input parameters which are significant for the economic design may not be significant for the economic statistical design, and vice versa. This study also shows that there are quite a number of significant interaction effects, which may mask the significance of the main effects.
Exact significance test for Markov order
NASA Astrophysics Data System (ADS)
Pethel, S. D.; Hahs, D. W.
2014-02-01
We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.
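For the special case of testing order n = 0, surrogate generation reduces to randomly permuting the sequence: a permutation preserves the zeroth-order symbol counts exactly, so each shuffle is a uniform draw from the null set, and the resulting p-value is exact. The paper's algorithm generalizes this to arbitrary n; the statistic below is a chi-square-style choice assumed for this sketch.

```python
import random
from collections import Counter

def transition_stat(seq):
    """Chi-square-style statistic comparing observed transition counts
    to the counts expected if the sequence were order 0 (i.i.d.)."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    freq = Counter(seq)
    total = len(seq)
    stat = 0.0
    for (a, b), obs in pairs.items():
        exp = firsts[a] * freq[b] / total
        stat += (obs - exp) ** 2 / exp
    return stat

def surrogate_test(seq, n_surr=199, seed=0):
    """Exact surrogate test of the null 'order 0': permutations are
    uniform surrogates; the p-value counts surrogates whose statistic
    meets or exceeds the observed one."""
    rng = random.Random(seed)
    obs = transition_stat(seq)
    s = list(seq)
    ge = 0
    for _ in range(n_surr):
        rng.shuffle(s)
        if transition_stat(s) >= obs:
            ge += 1
    return (ge + 1) / (n_surr + 1)
```

On a strongly first-order sequence such as a strict alternation of two symbols, the observed statistic far exceeds anything a shuffle produces, so the test rejects order 0 at the smallest attainable p-value.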
Expression and prognostic significance of unique ULBPs in pancreatic cancer
Chen, Jiong; Zhu, Xing-Xing; Xu, Hong; Fang, Heng-Zhong; Zhao, Jin-Qian
2016-01-01
Background: Pancreatic cancer is one of the most lethal cancers worldwide, due to the lack of efficient therapy and difficulty in early diagnosis. ULBPs have been shown to behave as important protectors with prognostic significance in various cancers. Materials and methods: Immunohistochemistry and enzyme-linked immunosorbent assays were used to explore the expression of ULBPs in cancer tissue and in serum, while survival analysis was used to evaluate the subsequent clinical value of ULBPs. Results: Statistics showed that high expression of membrane ULBP1 was a good biomarker of overall survival (18 months vs 13 months), and a high level of soluble ULBP2 was deemed an independent poor indicator for both overall survival (P<0.001) and disease-free survival (P<0.001). Conclusion: ULBP1 provides additional information for early diagnosis, and soluble ULBP2 can be used as a novel tumor marker to evaluate the risk of pancreatic cancer patients. PMID:27621649
Multivariate statistical analysis of environmental monitoring data
Ross, D.L.
1997-11-01
EPA requires statistical procedures to determine whether soil or ground water adjacent to or below waste units is contaminated. These statistical procedures are often based on comparisons between two sets of data: one representing background conditions, and one representing site conditions. Since the statistical requirements were originally promulgated in the 1980s, EPA has made several improvements and modifications. Problems remain, however. One problem is that the regulations do not require a minimum probability that contaminated sites will be correctly identified. Another is that the effect of testing several correlated constituents on the probable outcome of the statistical tests has not been quantified. Results from computer simulations to determine power functions for realistic monitoring situations are presented here. Power functions for two different statistical procedures, Student's t-test and the multivariate Hotelling's T² test, are compared. The comparisons indicate that the multivariate test is often more powerful when the tests are applied with significance levels chosen to control the probability of falsely identifying clean sites as contaminated. This program could also be used to verify that statistical procedures achieve some minimum power standard at a regulated waste unit.
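A simulation in the spirit of the comparison above can be sketched as follows. All specifics here are assumptions for illustration, not the report's actual program: variances are treated as known (so the univariate test is a z-test with Bonferroni correction) and Hotelling's T² is approximated by its chi-square limit with known covariance.

```python
import math
import random

def power_sim(shift, rho, n=30, reps=2000, seed=7):
    """Monte Carlo power of (a) two per-constituent two-sided z-tests with
    Bonferroni correction and (b) a chi-square approximation to
    Hotelling's T^2, for bivariate normal data with unit variances,
    correlation rho, and a mean shift in the first constituent only.
    Critical values: 2.2414 (z at alpha/2 = 0.025), 5.9915 (chi2, df = 2)."""
    rng = random.Random(seed)
    z_crit, chi2_crit = 2.2414, 5.9915
    hits_uni = hits_multi = 0
    for _ in range(reps):
        m1 = m2 = 0.0
        for _ in range(n):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            m1 += shift + z1
            m2 += rho * z1 + math.sqrt(1 - rho * rho) * z2
        m1 /= n
        m2 /= n
        se = 1 / math.sqrt(n)
        if abs(m1) / se > z_crit or abs(m2) / se > z_crit:
            hits_uni += 1
        # T^2 with known covariance [[1, rho], [rho, 1]]
        t2 = n * (m1 * m1 - 2 * rho * m1 * m2 + m2 * m2) / (1 - rho * rho)
        if t2 > chi2_crit:
            hits_multi += 1
    return hits_uni / reps, hits_multi / reps
```

With strongly correlated constituents and a shift confined to one of them, the multivariate test exploits the correlation and typically achieves higher power, which is the qualitative finding the abstract reports.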
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
Statistical Analysis Experiment for Freshman Chemistry Lab.
ERIC Educational Resources Information Center
Salzsieder, John C.
1995-01-01
Describes a laboratory experiment dissolving zinc from galvanized nails in which data can be gathered very quickly for statistical analysis. The data have sufficient significant figures and the experiment yields a nice distribution of random errors. Freshman students can gain an appreciation of the relationships between random error, number of…
The Academic Pecking Order: A Statistical Expose.
ERIC Educational Resources Information Center
Ciampa, Bartholomew J.
This study was designed to provide statistical analysis of certain curricular characteristics that could be used as a projective device to be considered prior to the implementation of any further changes of curricular or philosophical significance. The population of the study comprised all students at Nasson College in the classes of 1968 through…
Ideal statistically quasi Cauchy sequences
NASA Astrophysics Data System (ADS)
Savas, Ekrem; Cakalli, Huseyin
2016-08-01
An ideal I is a family of subsets of ℕ, the set of positive integers, which is closed under taking finite unions and subsets of its elements. A sequence (x_k) of real numbers is said to be S(I)-statistically convergent to a real number L if, for each ε > 0 and for each δ > 0, the set {n ∈ ℕ : (1/n)|{k ≤ n : |x_k - L| ≥ ε}| ≥ δ} belongs to I. We introduce S(I)-statistically ward compactness of a subset of ℝ, the set of real numbers, and S(I)-statistically ward continuity of a real function, in the senses that a subset E of ℝ is S(I)-statistically ward compact if any sequence of points in E has an S(I)-statistically quasi-Cauchy subsequence, and a real function is S(I)-statistically ward continuous if it preserves S(I)-statistically quasi-Cauchy sequences, where a sequence (x_k) is said to be S(I)-statistically quasi-Cauchy when (Δx_k) is S(I)-statistically convergent to 0. We obtain results related to S(I)-statistically ward continuity, S(I)-statistically ward compactness, N_θ-ward continuity, and slowly oscillating continuity.
Determining significant material properties: A discovery approach
NASA Technical Reports Server (NTRS)
Karplus, Alan K.
1992-01-01
The following is a laboratory experiment designed to further understanding of materials science. The experiment itself can be informative for persons of any age past elementary school, and even for some in elementary school. The preparation of the plastic samples is readily accomplished by persons with reasonable dexterity in the cutting of paper designs. The completion of the statistical Design of Experiments, which uses Yates' Method, requires basic math (addition and subtraction). Interpretive work requires plotting of data and making observations. Knowledge of statistical methods would be helpful. The purpose of this experiment is to acquaint students with the seven classes of recyclable plastics, and to provide hands-on learning about the response of these plastics to mechanical tensile loading.
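Yates' Method itself is just repeated pairwise sums and differences over the responses of a 2^k factorial taken in standard order. A minimal sketch (the experiment's actual data are not reproduced here):

```python
def yates(responses):
    """Yates' algorithm for a full 2^k factorial with responses in
    standard order ((1), a, b, ab, c, ...). Returns the grand total
    followed by the factorial-effect contrasts (A, B, AB, C, ...);
    dividing a contrast by 2**(k-1) gives the effect estimate."""
    col = list(responses)
    n = len(col)
    k = n.bit_length() - 1
    assert 1 << k == n, "number of responses must be a power of two"
    for _ in range(k):
        # One pass: pairwise sums, then pairwise differences.
        col = ([col[i] + col[i + 1] for i in range(0, n, 2)]
               + [col[i + 1] - col[i] for i in range(0, n, 2)])
    return col
```

For example, responses (1)=1, a=2, b=3, ab=4 yield [10, 2, 4, 0]: a grand total of 10, main-effect contrasts of 2 (A) and 4 (B), and no AB interaction.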
Basic statistics in cell biology.
Vaux, David L
2014-01-01
The physicist Ernest Rutherford said, "If your experiment needs statistics, you ought to have done a better experiment." Although this aphorism remains true for much of today's research in cell biology, a basic understanding of statistics can be useful to cell biologists to help in monitoring the conduct of their experiments, in interpreting the results, in presenting them in publications, and when critically evaluating research by others. However, training in statistics is often focused on the sophisticated needs of clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, rather than the practical needs of cell biologists, whose experiments often provide evidence that is not statistical in nature. This review describes some of the basic statistical principles that may be of use to experimental biologists, but it does not cover the sophisticated statistics needed for papers that contain evidence of no other kind.
Gaussian statistics for palaeomagnetic vectors
Love, J.J.; Constable, C.G.
2003-01-01
formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are at almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico is consistent with the widely held suspicion that directional data are more accurate than intensity data.
Technology Transfer Automated Retrieval System (TEKTRAN)
Geophysical methods can provide important information in agricultural settings, and the use of these techniques is becoming more and more widespread. Magnetometry, self-potential, and seismic are three geophysical methods, all of which have the potential for substantial future use in agriculture, ...
Alwin, Jennifer Louise
1999-08-01
The effect of process parameters and chemical additives on the specific cake resistance of zinc hydroxide precipitates was investigated. The ability of a slurry to be filtered is dependent upon the particle habit of the solid, and the particle habit is influenced by certain process variables. The process variables studied include neutralization temperature, agitation type, and the alkalinity source used for neutralization. Several commercially available chemical additives advertised to aid in solid/liquid separation were also examined in conjunction with hydroxide precipitation. A statistical analysis revealed that the neutralization temperature and the source of alkalinity were statistically significant in influencing the specific cake resistance of zinc hydroxide precipitates in this study. The type of agitation did not significantly affect the specific cake resistance of zinc hydroxide precipitates. The use of chemical additives in conjunction with hydroxide precipitation had a favorable effect on filterability. The morphology of the hydroxide precipitates was analyzed using scanning electron microscopy.
Analysis and modeling of resistive switching statistics
NASA Astrophysics Data System (ADS)
Long, Shibing; Cagli, Carlo; Ielmini, Daniele; Liu, Ming; Suñé, Jordi
2012-04-01
The resistive random access memory (RRAM), based on the reversible switching between different resistance states, is a promising candidate for next-generation nonvolatile memories. One of the most important challenges to foster the practical application of RRAM is the control of the statistical variation of switching parameters to gain low variability and high reliability. In this work, starting from the well-known percolation model of dielectric breakdown (BD), we establish a framework of analysis and modeling of the resistive switching statistics in RRAM devices, which are based on the formation and disconnection of a conducting filament (CF). One key aspect of our proposal is the relation between the CF resistance and the switching statistics. Hence, establishing the correlation between SET and RESET switching variables and the initial resistance of the device in the OFF and ON states, respectively, is a fundamental issue. Our modeling approach to the switching statistics is fully analytical and contains two main elements: (i) a geometrical cell-based description of the CF and (ii) a deterministic model for the switching dynamics. Both ingredients might be slightly different for the SET and RESET processes, for the type of switching (bipolar or unipolar), and for the kind of considered resistive structure (oxide-based, conductive bridge, etc.). However, the basic structure of our approach is thought to be useful for all the cases and should provide a framework for the physics-based understanding of the switching mechanisms and the associated statistics, for the trustful estimation of RRAM performance, and for the successful forecast of reliability. As a first application example, we start by considering the case of the RESET statistics of NiO-based RRAM structures. In particular, we statistically analyze the RESET transitions of a statistically significant number of switching cycles of Pt/NiO/W devices. In the RESET transition, the ON-state resistance (RON) is a
Statistics without Tears: Complex Statistics with Simple Arithmetic
ERIC Educational Resources Information Center
Smith, Brian
2011-01-01
One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.
1976-01-01
A necessary and sufficient condition is developed such that there exists a continuous linear sufficient statistic T for a dominated collection of totally finite measures defined on the Borel field generated by the open sets of a Banach space X. In particular, corollary necessary and sufficient conditions are given so that there exists a rank K linear sufficient statistic T for any finite collection of probability measures having n-variate normal densities. In this case a simple calculation, involving only the population means and covariances, determines the smallest integer K for which there exists a rank K linear sufficient statistic T (as well as an associated statistic T itself).
A Statistical investigation of sloshing parameters for multiphase offshore separators
NASA Astrophysics Data System (ADS)
Mahmud, Md; Khan, Rafiqul; Xu, Qiang
Liquid sloshing in multiphase offshore separators has been the subject of intense investigation for the last several decades, both by experiment and by simulation. Many scientists have worked to minimize sloshing impacts, and others have developed new methods to describe sloshing patterns. In addition, complex mathematical models have been developed to characterize the sloshing phenomenon. However, a comprehensive statistical study of the input parameters and output results is yet to be done. In this study, statistical approaches will be considered to determine the significant parameters for liquid sloshing. Factor analysis and principal component analysis techniques are considered to identify the significant parameters for liquid sloshing. Numerical experiments are carried out using the Computational Fluid Dynamics (CFD) technique in the ANSYS Fluent software. The input parameters considered here are liquid depth/length ratio, acceleration, and wave frequencies and amplitudes in various sea-state conditions. The measured variables include hydrodynamic force, pressure, moments, turbulent kinetic energy, and height of interfaces. Mathematical correlations may be developed from the data analysis.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
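As an example of the large-scale simultaneous testing mentioned above, the Benjamini-Hochberg step-up procedure selects significantly differentially expressed genes while controlling the false discovery rate. This is a standard textbook sketch, not code from the paper:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns the indices of
    hypotheses rejected at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # Find the largest rank whose ordered p-value sits under its BH line.
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])
```

For example, with fifteen p-values of which the four smallest fall below their step-up thresholds k·q/m, the procedure rejects exactly those four hypotheses.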
A spatial scan statistic for multinomial data
Jung, Inkyung; Kulldorff, Martin; Richard, Otukei John
2014-01-01
As a geographical cluster detection analysis tool, the spatial scan statistic has been developed for different types of data such as Bernoulli, Poisson, ordinal, exponential and normal. Another interesting data type is multinomial. For example, one may want to find clusters where the disease-type distribution is statistically significantly different from the rest of the study region when there are different types of disease. In this paper, we propose a spatial scan statistic for such data, which is useful for geographical cluster detection analysis for categorical data without any intrinsic order information. The proposed method is applied to meningitis data consisting of five different disease categories to identify areas with distinct disease-type patterns in two counties in the U.K. The performance of the method is evaluated through a simulation study. PMID:20680984
Statistical process control in nursing research.
Polit, Denise F; Chaboyer, Wendy
2012-02-01
In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. PMID:22095634
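A minimal individuals control chart illustrates the idea: limits are computed from a baseline period, then new observations are checked for special cause variation. The moving-range estimate of sigma (with the conventional d2 = 1.128 constant) is a common choice assumed here, not a detail taken from the article:

```python
def control_limits(baseline):
    """Shewhart individuals chart: centre line and 3-sigma limits, with
    sigma estimated from the average moving range (MR-bar / 1.128)."""
    n = len(baseline)
    mean = sum(baseline) / n
    mrbar = sum(abs(a - b) for a, b in zip(baseline, baseline[1:])) / (n - 1)
    sigma = mrbar / 1.128  # d2 constant for moving ranges of size 2
    return mean - 3 * sigma, mean, mean + 3 * sigma

def special_cause(points, lcl, ucl):
    """Indices of points outside the control limits, i.e. candidates for
    special cause variation rather than common cause fluctuation."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]
```

Post-intervention data plotted against baseline limits then show whether the innovation produced a statistically meaningful shift rather than ordinary noise.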
Statistical Approaches to Functional Neuroimaging Data
DuBois Bowman, F; Guo, Ying; Derado, Gordana
2007-01-01
Synopsis: The field of statistics makes valuable contributions to functional neuroimaging research by establishing procedures for the design and conduct of neuroimaging experiments and by providing tools for objectively quantifying and measuring the strength of scientific evidence provided by the data. Two common functional neuroimaging research objectives are detecting brain regions that reveal task-related alterations in measured brain activity (activations) and identifying highly correlated brain regions that exhibit similar patterns of activity over time (functional connectivity). In this article, we highlight various statistical procedures for analyzing data from activation studies and from functional connectivity studies, focusing on functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) data. We also discuss emerging statistical methods for prediction using fMRI and PET data, which stand to increase the translational significance of functional neuroimaging data to clinical practice. PMID:17983962
ERIC Educational Resources Information Center
La Spata, Michelle G.; Carter, Christopher W.; Johnson, Wendi L.; McGill, Ryan J.
2016-01-01
The present study examined the utility of video self-modeling (VSM) for reducing externalizing behaviors (e.g., aggression, conduct problems, hyperactivity, and impulsivity) observed within the classroom environment. After identification of relevant target behaviors, VSM interventions were developed for first and second grade students (N = 4),…
Chekanov, S.; Levchenko, B. B.; High Energy Physics; Skobeltsyn Inst. of Nuclear Physics
2007-01-01
An empirical principle for the construction of a linear relationship between the total angular momentum and squared-mass of baryons is proposed. In order to examine linearity of the trajectories, a rigorous least-squares regression analysis was performed. Unlike the standard Regge-Chew-Frautschi approach, the constructed trajectories do not have nonlinear behavior. A similar regularity may exist for lowest-mass mesons. The linear baryonic trajectories are well described by a semiclassical picture based on a spinning relativistic string with tension. The obtained numerical solution of this model was used to extract the (di)quark masses.
Constructing the Exact Significance Level for a Person-Fit Statistic.
ERIC Educational Resources Information Center
Liou, Michelle; Chang, Chih-Hsin
1992-01-01
An extension is proposed for the network algorithm introduced by C.R. Mehta and N.R. Patel to construct exact tail probabilities for testing the general hypothesis that item responses are distributed according to the Rasch model. A simulation study indicates the efficiency of the algorithm. (SLD)
ERIC Educational Resources Information Center
Buchanan, Taylor L.; Lohse, Keith R.
2016-01-01
We surveyed researchers in the health and exercise sciences to explore different areas and magnitudes of bias in researchers' decision making. Participants were presented with scenarios (testing a central hypothesis with p = 0.06 or p = 0.04) in a random order and surveyed about what they would do in each scenario. Participants showed significant…
Deriving statistical significance maps for support vector regression using medical imaging data.
Gaonkar, Bilwaj; Sotiras, Aristeidis; Davatzikos, Christos
2013-01-01
Regression analysis involves predicting a continuous variable using imaging data. The Support Vector Regression (SVR) algorithm has previously been used in addressing regression analysis in neuroimaging. However, identifying the regions of the image that the SVR uses to model the dependence of a target variable remains an open problem. It is an important issue when one wants to biologically interpret the meaning of a pattern that predicts the variable(s) of interest, and therefore to understand normal or pathological processes. One possible approach to the identification of these regions is the use of permutation testing. Permutation testing involves 1) generation of a large set of 'null SVR models' using randomly permuted sets of target variables, and 2) comparison of the SVR model trained using the original labels to the set of null models. These permutation tests often require prohibitively long computational time. Recent work in support vector classification shows that it is possible to analytically approximate the results of permutation testing in medical image analysis. We propose an analogous approach to approximate permutation testing based analysis for support vector regression with medical imaging data. In this paper we present 1) the theory behind our approximation, and 2) experimental results using two real datasets.
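The permutation scheme in step 1 can be illustrated with a scalar statistic. In this sketch a plain correlation stands in for the SVR model weights, and the permutation count and seed are arbitrary choices, not values from the paper:

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def permutation_p(x, y, n_perm=199, seed=3):
    """Permutation test: shuffle the target variable to build the null
    distribution of the statistic, analogous to the 'null SVR models'
    the paper approximates analytically."""
    rng = random.Random(seed)
    obs = abs(pearson_r(x, y))
    yy = list(y)
    ge = 0
    for _ in range(n_perm):
        rng.shuffle(yy)
        if abs(pearson_r(x, yy)) >= obs:
            ge += 1
    return (ge + 1) / (n_perm + 1)
```

The analytic approximation the authors propose replaces the explicit shuffle loop, which is exactly the step that becomes prohibitive when each null model is a full SVR fit on image data.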
Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies
ERIC Educational Resources Information Center
Capraro, Robert M.
2004-01-01
With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…
ERIC Educational Resources Information Center
Bothe, Anne K.; Richardson, Jessica D.
2011-01-01
Purpose: To discuss constructs and methods related to assessing the magnitude and the meaning of clinical outcomes, with a focus on applications in speech-language pathology. Method: Professionals in medicine, allied health, psychology, education, and many other fields have long been concerned with issues referred to variously as practical…
ERIC Educational Resources Information Center
Thompson, Bruce
After presenting a general linear model as a framework for discussion, this paper reviews five methodology errors that occur in educational research: (1) the use of stepwise methods; (2) the failure to consider in result interpretation the context specificity of analytic weights (e.g., regression beta weights, factor pattern coefficients,…
Response of Dissolved Organic Matter to Warming and Nitrogen Addition
NASA Astrophysics Data System (ADS)
Choi, J. H.; Nguyen, H.
2014-12-01
Dissolved Organic Matter (DOM) is a ubiquitous mixture of soluble organic components. Since DOM is produced from the terrestrial leachate of various soil types, soil may influence the chemistry and biology of freshwater through the input of leachate and run-off. Increased temperature from climate change could dramatically change the DOM characteristics of soils through an enhanced decomposition rate and losses of carbon from soil organic matter. In addition, an increase in N-deposition affects DOM leaching from soils by changing the carbon cycling and the decomposition rate of soil decay. In this study, we conducted growth chamber experiments using two types of soil (wetland and forest) under conditions of temperature increase and N-deposition in order to investigate how warming and nitrogen addition influence the characteristics of the DOM leaching from different soil types. This leachate controls the quantity and quality of DOM in surface water systems. After 10 months of incubation, the dissolved organic carbon (DOC) concentrations decreased for almost all samples, in the range of 7.6 to 87.3% (ANOVA, p<0.05). The specific UV absorption (SUVA) values also decreased for almost all samples after the first 3 months and then increased gradually afterward, in the range of 3.3 to 108.4%. Both time and the interaction between time and temperature had statistically significant effects on the SUVA values (MANOVA, p<0.05). The humification index (HIX) showed significant increasing trends with incubation time and temperature for almost all the samples (ANOVA, p<0.05). Larger decreases in the DOC values and increases in HIX were observed at higher temperatures, whereas the opposite trend was observed for samples with N-addition. The PARAFAC results showed that three fluorescence components, terrestrial humic (C1), microbial humic-like (C2), and protein-like (C3), constituted the fluorescence matrices of the soil samples. During the experiment, labile DOM from the soils was
Statistics of high-level scene context
Greene, Michelle R.
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by
Efforts to improve international migration statistics: a historical perspective.
Kraly, E P; Gnanasekaran, K S
1987-01-01
During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a 3-pronged effort from the international statistical community. 1st, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. 2nd, the countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. 3rd, the call for statistical research in this area requires more efforts by the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities. PMID:12280924
Prognostic Significance of POLE Proofreading Mutations in Endometrial Cancer
Church, David N.; Stelloo, Ellen; Nout, Remi A.; Valtcheva, Nadejda; Depreeuw, Jeroen; ter Haar, Natalja; Noske, Aurelia; Amant, Frederic; Wild, Peter J.; Lambrechts, Diether; Jürgenliemk-Schulz, Ina M.; Jobsen, Jan J.; Smit, Vincent T. H. B. M.; Creutzberg, Carien L.; Bosse, Tjalling
2015-01-01
Background: Current risk stratification in endometrial cancer (EC) results in frequent over- and underuse of adjuvant therapy, and may be improved by novel biomarkers. We examined whether POLE proofreading mutations, recently reported in about 7% of ECs, predict prognosis. Methods: We performed targeted POLE sequencing in ECs from the PORTEC-1 and -2 trials (n = 788), and analyzed clinical outcome according to POLE status. We combined these results with those from three additional series (n = 628) by meta-analysis to generate multivariable-adjusted, pooled hazard ratios (HRs) for recurrence-free survival (RFS) and cancer-specific survival (CSS) of POLE-mutant ECs. All statistical tests were two-sided. Results: POLE mutations were detected in 48 of 788 (6.1%) ECs from PORTEC-1 and -2 and were associated with high tumor grade (P < .001). Women with POLE-mutant ECs had fewer recurrences (6.2% vs 14.1%) and EC deaths (2.3% vs 9.7%), though, in the total PORTEC cohort, differences in RFS and CSS were not statistically significant (multivariable-adjusted HR = 0.43, 95% CI = 0.13 to 1.37, P = .15; HR = 0.19, 95% CI = 0.03 to 1.44, P = .11, respectively). However, of 109 grade 3 tumors, 0 of 15 POLE-mutant ECs recurred, compared with 29 of 94 (30.9%) POLE wild-type cancers; this was reflected in statistically significantly greater RFS (multivariable-adjusted HR = 0.11, 95% CI = 0.001 to 0.84, P = .03). In the additional series, there were no EC-related events in any of 33 POLE-mutant ECs, resulting in a multivariable-adjusted, pooled HR of 0.33 for RFS (95% CI = 0.12 to 0.91, P = .03) and 0.26 for CSS (95% CI = 0.06 to 1.08, P = .06). Conclusion: POLE proofreading mutations predict favorable EC prognosis, independently of other clinicopathological variables, with the greatest effect seen in high-grade tumors. This novel biomarker may help to reduce overtreatment in EC. PMID:25505230
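The pooled hazard ratios reported here come from an inverse-variance meta-analysis of per-study estimates. As a hedged illustration of that pooling step (the study HRs and confidence intervals below are hypothetical, not the PORTEC or validation-series values, and the function name is invented), a fixed-effect meta-analysis on the log-HR scale can be sketched as:

```python
import math

def pooled_hazard_ratio(hrs, cis, z=1.96):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs: per-study hazard ratios; cis: matching (lower, upper) 95% CIs.
    Each log-HR's standard error is recovered from its CI width, studies
    are weighted by inverse variance, and the pooled estimate is
    transformed back to the HR scale.
    """
    log_hrs = [math.log(hr) for hr in hrs]
    # SE of a log-HR from its 95% CI: (log(upper) - log(lower)) / (2 * 1.96)
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log = sum(w * l for w, l in zip(weights, log_hrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - z * pooled_se),
            math.exp(pooled_log + z * pooled_se))

# Hypothetical per-study HRs and 95% CIs (illustrative values only):
hr, lo, hi = pooled_hazard_ratio([0.40, 0.30], [(0.15, 1.10), (0.08, 1.20)])
```

The pooled HR lands between the study estimates, with a narrower CI than either study alone; a random-effects model would widen the interval when between-study heterogeneity is present.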
Statistical label fusion with hierarchical performance models
Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.
2014-01-01
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809
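As a rough illustration of the rater-performance idea that statistical fusion builds on (this is the flat, non-hierarchical baseline the paper generalizes; the labels, confusion matrices, and function name are invented for the sketch), a single voxel's label can be fused by weighting each rater's vote with an assumed-known confusion matrix:

```python
def fuse_labels(votes, confusions, labels, prior=None):
    """Fuse one voxel's labels from several raters.

    votes[r] is rater r's observed label; confusions[r][true][observed]
    is the (assumed known) probability that rater r reports `observed`
    when the truth is `true`. Returns the maximum-posterior label and
    the normalized posterior over all labels.
    """
    prior = prior or {l: 1.0 / len(labels) for l in labels}
    post = {}
    for true in labels:
        p = prior[true]
        for r, obs in enumerate(votes):
            p *= confusions[r][true][obs]  # independent-rater likelihood
        post[true] = p
    total = sum(post.values())
    posterior = {l: p / total for l, p in post.items()}
    return max(posterior, key=posterior.get), posterior

# Two reliable raters and one sloppy one vote on a gray/white-matter voxel:
reliable = {"gm": {"gm": 0.9, "wm": 0.1}, "wm": {"gm": 0.1, "wm": 0.9}}
sloppy = {"gm": {"gm": 0.6, "wm": 0.4}, "wm": {"gm": 0.4, "wm": 0.6}}
label, posterior = fuse_labels(["gm", "gm", "wm"],
                               [reliable, reliable, sloppy], ["gm", "wm"])
```

The hierarchical formulation in the paper replaces the single flat confusion matrix per rater with performance models at multiple anatomical tiers; in practice the confusion matrices are also estimated jointly with the segmentation (as in STAPLE) rather than assumed known.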
Clastogenic effects of food additive citric acid in human peripheral lymphocytes
Ünal, Fatma; Yüzbaşıoğlu, Deniz; Aksoy, Hüseyin
2008-01-01
Clastogenic properties of the food additive citric acid, commonly used as an antioxidant, were analysed in human peripheral blood lymphocytes. Citric acid induced a significant increase of chromosomal aberrations (CAs) at all the concentrations and treatment periods tested. Citric acid significantly decreased mitotic index (MI) at 100 and 200 μg ml−1 concentrations at 24 h, and in all concentrations at 48 h. However, it did not decrease the replication index (RI) significantly. Citric acid also significantly increased sister chromatid exchanges (SCEs) at 100 and 200 μg ml−1 concentrations at 24 h, and in all concentrations at 48 h. This chemical significantly increased the micronuclei frequency (MN) compared to the negative control. It also decreased the cytokinesis-block proliferation index (CBPI), but this result was not statistically significant. PMID:19002851
Statistical Issues in TBI Clinical Studies
Rapp, Paul E.; Cellucci, Christopher J.; Keyser, David O.; Gilpin, Adele M. K.; Darmon, David M.
2013-01-01
The identification and longitudinal assessment of traumatic brain injury present several challenges. Because these injuries can have subtle effects, efforts to find quantitative physiological measures that can be used to characterize traumatic brain injury are receiving increased attention. The results of this research must be considered with care. Six reasons for cautious assessment are outlined in this paper. None of the issues raised here are new. They are standard elements in the technical literature that describes the mathematical analysis of clinical data. The purpose of this paper is to draw attention to these issues because they need to be considered when clinicians evaluate the usefulness of this research. In some instances these points are demonstrated by simulation studies of diagnostic processes. We take as an additional objective the explicit presentation of the mathematical methods used to reach these conclusions. This material is in the appendices. The following points are made: (1) A statistically significant separation of a clinical population from a control population does not ensure a successful diagnostic procedure. (2) Adding more variables to a diagnostic discrimination can, in some instances, actually reduce classification accuracy. (3) A high sensitivity and specificity in a TBI versus control population classification does not ensure diagnostic success when the method is applied in a more general neuropsychiatric population. (4) Evaluation of treatment effectiveness must recognize that high variability is a pronounced characteristic of an injured central nervous system and that results can be confounded by either disease progression or spontaneous recovery. A large pre-treatment versus post-treatment effect size does not, of itself, establish a successful treatment. (5) A procedure for discriminating between treatment responders and non-responders requires, minimally, a two-phase investigation. This procedure must include a mechanism to
Neutron Characterization for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Watkins, Thomas; Bilheux, Hassina; An, Ke; Payzant, Andrew; DeHoff, Ryan; Duty, Chad; Peter, William; Blue, Craig; Brice, Craig A.
2013-01-01
Oak Ridge National Laboratory (ORNL) is leveraging decades of experience in neutron characterization of advanced materials together with resources such as the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) shown in Fig. 1 to solve challenging problems in additive manufacturing (AM). Additive manufacturing, or three-dimensional (3-D) printing, is a rapidly maturing technology wherein components are built by selectively adding feedstock material at locations specified by a computer model. The majority of these technologies use thermally driven phase change mechanisms to convert the feedstock into functioning material. As the molten material cools and solidifies, the component is subjected to significant thermal gradients, generating significant internal stresses throughout the part (Fig. 2). As layers are added, inherent residual stresses cause warping and distortions that lead to geometrical differences between the final part and the original computer generated design. This effect also limits geometries that can be fabricated using AM, such as thin-walled, high-aspect- ratio, and overhanging structures. Distortion may be minimized by intelligent toolpath planning or strategic placement of support structures, but these approaches are not well understood and often "Edisonian" in nature. Residual stresses can also impact component performance during operation. For example, in a thermally cycled environment such as a high-pressure turbine engine, residual stresses can cause components to distort unpredictably. Different thermal treatments on as-fabricated AM components have been used to minimize residual stress, but components still retain a nonhomogeneous stress state and/or demonstrate a relaxation-derived geometric distortion. Industry, federal laboratory, and university collaboration is needed to address these challenges and enable the U.S. to compete in the global market. Work is currently being conducted on AM technologies at the ORNL
Research Design and Statistical Design.
ERIC Educational Resources Information Center
Szymanski, Edna Mora
1993-01-01
Presents fourth editorial in series, this one describing research design and explaining its relationship to statistical design. Research design, validity, and research approaches are examined, quantitative research designs and hypothesis testing are described, and control and statistical designs are discussed. Concludes with section on the art of…
Explorations in Statistics: Confidence Intervals
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
Book Trade Research and Statistics.
ERIC Educational Resources Information Center
Bosch, Stephen; Ink, Gary; Lofquist, William S.
1998-01-01
Provides data on prices of U.S. and foreign materials; book title output and average prices, 1996 final and 1997 preliminary figures; book sales statistics, 1997--AAP preliminary estimates; U.S. trade in books, 1997; international book title output, 1990-95; book review media statistics; and number of book outlets in the U.S. and Canada. (PEN)
Representational Versatility in Learning Statistics
ERIC Educational Resources Information Center
Graham, Alan T.; Thomas, Michael O. J.
2005-01-01
Statistical data can be represented in a number of qualitatively different ways, the choice depending on the following three conditions: the concepts to be investigated; the nature of the data; and the purpose for which they were collected. This paper begins by setting out frameworks that describe the nature of statistical thinking in schools, and…
Motivating Play Using Statistical Reasoning
ERIC Educational Resources Information Center
Cross Francis, Dionne I.; Hudson, Rick A.; Lee, Mi Yeon; Rapacki, Lauren; Vesperman, Crystal Marie
2014-01-01
Statistical literacy is essential in everyone's personal lives as consumers, citizens, and professionals. To make informed life and professional decisions, students are required to read, understand, and interpret vast amounts of information, much of which is quantitative. To develop statistical literacy so students are able to make sense of…
Statistical Methods in Psychology Journals.
ERIC Educational Resources Information Center
Wilkinson, Leland
1999-01-01
Proposes guidelines for revising the American Psychological Association (APA) publication manual or other APA materials to clarify the application of statistics in research reports. The guidelines are intended to induce authors and editors to recognize the thoughtless application of statistical methods. Contains 54 references. (SLD)
Computing contingency statistics in parallel.
Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre
2010-09-01
Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
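The map-reduce pattern the abstract describes — each processor tallies a local contingency table, tables merge by cell-wise addition, and derived statistics are computed from the merged result — can be sketched as follows (a minimal serial emulation; the shard data and function names are illustrative, not the paper's implementation):

```python
from collections import Counter
from functools import reduce

def local_table(pairs):
    """Map step: each processor tallies its own (x, y) observations."""
    return Counter(pairs)

def merge(t1, t2):
    """Reduce step: contingency tables merge by cell-wise addition."""
    return t1 + t2

def derived_stats(table):
    """Joint and marginal probabilities from the merged table."""
    n = sum(table.values())
    joint = {cell: c / n for cell, c in table.items()}
    px, py = Counter(), Counter()
    for (x, y), c in table.items():
        px[x] += c / n
        py[y] += c / n
    return joint, dict(px), dict(py)

# Two "processors", each holding a shard of the data:
shard1 = [("a", 0), ("a", 1), ("b", 0)]
shard2 = [("a", 0), ("b", 1), ("b", 1)]
table = reduce(merge, [local_table(shard1), local_table(shard2)])
joint, px, py = derived_stats(table)
```

The communication cost the paper analyzes corresponds to the size of each local `Counter` shipped in the reduce step: unlike fixed-size moment accumulators, it grows with the number of distinct cells, which is why quasi-diffuse inputs hurt scalability.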
Education Statistics Quarterly, Spring 2001.
ERIC Educational Resources Information Center
Education Statistics Quarterly, 2001
2001-01-01
The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue also…
SOCR: Statistics Online Computational Resource
ERIC Educational Resources Information Center
Dinov, Ivo D.
2006-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an…
Book Trade Research and Statistics.
ERIC Educational Resources Information Center
Bosch, Stephen; Ink, Gary; Greco, Albert N.
1999-01-01
Presents: "Prices of United States and Foreign Published Materials"; "Book Title Output and Average Prices"; "Book Sales Statistics, 1998"; "United States Book Exports and Imports: 1998"; "International Book Title Output: 1990-96"; "Number of Book Outlets in the United States and Canada"; and "Book Review Media Statistics". (AEF)
Book Trade Research and Statistics.
ERIC Educational Resources Information Center
Sullivan, Sharon G.; Ink, Gary; Grabois, Andrew; Barr, Catherine
2001-01-01
Includes six articles that discuss research and statistics relating to the book trade. Topics include prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and books and other media reviewed. (LRW)
Book Trade Research and Statistics.
ERIC Educational Resources Information Center
Alexander, Adrian W.; And Others
1994-01-01
The six articles in this section examine prices of U.S. and foreign materials; book title output and average prices; book sales statistics; U.S. book exports and imports; number of book outlets in the United States and Canada; and book review media statistics. (LRW)
Education Statistics Quarterly, Fall 2000.
ERIC Educational Resources Information Center
Dillow, Sally, Ed.
2000-01-01
The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released during a 3-month period. Each issue also contains a message from…
Students' Attitudes toward Statistics (STATS).
ERIC Educational Resources Information Center
Sutarso, Toto
The purposes of this study were to develop an instrument to measure students' attitude toward statistics (STATS), and to define the underlying dimensions that comprise the STATS. The instrument consists of 24 items. The sample included 79 male and 97 female students from the statistics classes at the College of Education and the College of…
Statistical Factors in Complexation Reactions.
ERIC Educational Resources Information Center
Chung, Chung-Sun
1985-01-01
Four cases which illustrate statistical factors in complexation reactions (where two of the reactants are monodentate ligands) are presented. Included are tables showing statistical factors for the reactions of: (1) square-planar complexes; (2) tetrahedral complexes; and (3) octahedral complexes. (JN)
Design of order statistics filters using feedforward neural networks
NASA Astrophysics Data System (ADS)
Maslennikova, Yu. S.; Bochkarev, V. V.
2016-08-01
In recent years significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics. The widely used median filter is the best known order statistic filter. A generalized form of these filters can be derived from Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order statistics filters using artificial neural networks. Optimal Lloyd's statistics are used to select the initial weights for the neural network. The adaptive properties of neural networks provide opportunities to optimize order statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
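A minimal sketch of the filter class discussed here — an L-filter whose output is a weighted sum of the sorted samples in each window, with the median filter as the special case of a single unit weight at the middle rank — is shown below. The neural-network weight optimization is omitted, and the function name and simplistic edge handling are assumptions of the sketch:

```python
def order_statistic_filter(signal, window, weights):
    """Apply an L-filter: a weighted sum of the sorted samples per window.

    weights[j] multiplies the j-th order statistic of each window.
    [0, 1, 0] with window 3 gives a median filter; uniform weights give
    a moving average.
    """
    k = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - k), min(len(signal), i + k + 1)
        ordered = sorted(signal[lo:hi])
        # Crude edge handling: truncate and renormalize the weights
        w = weights[:len(ordered)]
        s = sum(w)
        out.append(sum(wi * xi for wi, xi in zip(w, ordered)) / s)
    return out

noisy = [1.0, 1.0, 9.0, 1.0, 1.0]  # impulsive spike at index 2
median = order_statistic_filter(noisy, 3, [0.0, 1.0, 0.0])
```

The median weighting removes the impulsive spike entirely, which a linear moving average cannot do; the paper's neural-network approach would adapt the weight vector itself to the data's (possibly asymmetric) distribution.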
Students' attitudes towards learning statistics
NASA Astrophysics Data System (ADS)
Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah
2015-05-01
A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception in learning a statistics course, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs have been identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire that was adopted and adapted from the reliable instrument Survey of Attitudes towards Statistics (SATS©). The study was conducted among engineering undergraduate students at a university on the East Coast of Malaysia. The respondents consisted of students who were taking the applied statistics course from different faculties. The results are analysed by descriptive analysis and contribute to a descriptive understanding of students' attitudes towards the teaching and learning process of statistics.
Probability, Information and Statistical Physics
NASA Astrophysics Data System (ADS)
Kuzemsky, A. L.
2016-03-01
In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the inter-relations between theories. The basic aim is tutorial, i.e., to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information, and statistical description with regard to basic notions of statistical mechanics of complex systems. It also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the aim of making these ideas easier to understand and apply.
Statistical Thermodynamics and Microscale Thermophysics
NASA Astrophysics Data System (ADS)
Carey, Van P.
1999-08-01
Many exciting new developments in microscale engineering are based on the application of traditional principles of statistical thermodynamics. In this text Van Carey offers a modern view of thermodynamics, interweaving classical and statistical thermodynamic principles and applying them to current engineering systems. He begins with coverage of microscale energy storage mechanisms from a quantum mechanics perspective and then develops the fundamental elements of classical and statistical thermodynamics. Subsequent chapters discuss applications of equilibrium statistical thermodynamics to solid, liquid, and gas phase systems. The remainder of the book is devoted to nonequilibrium thermodynamics of transport phenomena and to nonequilibrium effects and noncontinuum behavior at the microscale. Although the text emphasizes mathematical development, Carey includes many examples and exercises to illustrate how the theoretical concepts are applied to systems of scientific and engineering interest. In the process he offers a fresh view of statistical thermodynamics for advanced undergraduate and graduate students, as well as practitioners, in mechanical, chemical, and materials engineering.
Statistical security for Social Security.
Soneji, Samir; King, Gary
2012-08-01
The financial viability of Social Security, the single largest U.S. government program, depends on accurate forecasts of the solvency of its intergenerational trust fund. We begin by detailing information necessary for replicating the Social Security Administration's (SSA's) forecasting procedures, which until now has been unavailable in the public domain. We then offer a way to improve the quality of these procedures via age- and sex-specific mortality forecasts. The most recent SSA mortality forecasts were based on the best available technology at the time, which was a combination of linear extrapolation and qualitative judgments. Unfortunately, linear extrapolation excludes known risk factors and is inconsistent with long-standing demographic patterns, such as the smoothness of age profiles. Modern statistical methods typically outperform even the best qualitative judgments in these contexts. We show how to use such methods, enabling researchers to forecast using far more information, such as the known risk factors of smoking and obesity and known demographic patterns. Including this extra information makes a substantial difference. For example, by improving only mortality forecasting methods, we predict three fewer years of net surplus, $730 billion less in Social Security Trust Funds, and program costs that are 0.66% greater for projected taxable payroll by 2031 compared with SSA projections. More important than specific numerical estimates are the advantages of transparency, replicability, reduction of uncertainty, and what may be the resulting lower vulnerability to the politicization of program forecasts. In addition, by offering with this article software and detailed replication information, we hope to marshal the efforts of the research community to include ever more informative inputs and to continue to reduce uncertainties in Social Security forecasts. PMID:22592944
Addition lens alleviates reading-induced ocular stress.
Choy, Camus K; Siu, Andrew W; Lam, Frankie Y; Tse, Jesse T; Lau, Sylvania Y
2000-01-01
BACKGROUND: Near tasks have been associated with binocular stress to induce myopia. The aim of this study was to investigate the effects of accommodation on reading-induced near heterophoria. METHODS: We measured the near heterophoria of 22 young adults before and after 30 minutes of reading. The reading task comprised a column of a local English newsletter read monocularly at 33 cm. One of three addition lenses (that is, 0.00 D, +1.50 D and +3.00 D) was randomly incorporated into the optical prescription. The difference in near heterophoria between the pre- and post-reading task was recorded. The experiment was completed on separate days for the other lens powers. RESULTS: Reading for 30 minutes with a plano lens addition (control) increased the near heterophoria by 3.81 +/- 0.95 prism dioptres (SEM) toward exo-deviation (p < 0.002). Addition of a +3.00 D lens significantly decreased the reading-induced exophoric shift to 1.36 +/- 0.55 prism dioptres (SEM). Similarly, a +1.50 D lens reduced the exophoric shift to 3.14 +/- 0.85 prism dioptres (SEM), but the difference was not statistically significant when compared with the control. CONCLUSIONS: The results showed that close work might cause eye strain via the extraocular muscles. Incorporation of a plus lens into the optical correction caused a power-dependent reduction in the stress, that is, a smaller exophoric shift. Whether binocular stress contributes to myopia and its response to addition lens therapy deserve further investigation.
NASA Astrophysics Data System (ADS)
Kim, Younggwan; Suh, Youngjoo; Kim, Hoirin
2011-12-01
The role of the statistical model-based voice activity detector (SMVAD) is to detect speech regions in input signals using statistical models of noise and noisy speech. The decision rule of the SMVAD is based on the likelihood ratio test (LRT). The LRT-based decision rule may cause detection errors because of the statistical properties of noise and speech signals. In this article, we first analyze the reasons why these detection errors occur and then propose two modified decision rules using reliable likelihood ratios (LRs). We also propose an effective weighting scheme that considers the spectral characteristics of noise and speech signals. In the experiments conducted in this study, the proposed methods show significant performance improvement in various noise conditions with almost no additional computation. Experimental results also show that the proposed weighting scheme provides additional performance improvement over the two proposed SMVADs.
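The baseline LRT decision rule that SMVADs build on can be sketched under assumed zero-mean complex Gaussian spectral models for noise and noisy speech (the article's modified rules and weighting scheme are not reproduced here; the variances, threshold, and function name below are illustrative assumptions):

```python
import math

def frame_is_speech(powers, noise_var, speech_var, threshold=0.0):
    """LRT voice activity decision for one frame.

    powers[k] = |X_k|^2, the observed spectral power in frequency bin k.
    H0 (noise only):   X_k ~ CN(0, noise_var[k])
    H1 (noisy speech): X_k ~ CN(0, noise_var[k] + speech_var[k])
    The frame is declared speech when the average log-likelihood ratio
    over the bins exceeds `threshold`.
    """
    total = 0.0
    for p, ln, ls in zip(powers, noise_var, speech_var):
        l1 = ln + ls
        # log LR for a complex Gaussian spectral coefficient:
        # log(ln / l1) + p * (1/ln - 1/l1)
        total += math.log(ln / l1) + p * (1.0 / ln - 1.0 / l1)
    return total / len(powers) > threshold

loud = frame_is_speech([6.0] * 4, [1.0] * 4, [4.0] * 4)   # high-power frame
quiet = frame_is_speech([0.8] * 4, [1.0] * 4, [4.0] * 4)  # noise-level frame
```

Averaging the per-bin log-LRs weights every bin equally; the weighting scheme the article proposes would instead emphasize bins whose LRs are more reliable given the spectral characteristics of noise and speech.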
Takatsuka, Kazuo; Matsumoto, Kentaro
2016-01-21
We present a basic theory to study real-time dynamics embedded in a large environment that is treated using a statistical method. In light of great progress in the molecular-level studies on time-resolved spectroscopies, chemical reaction dynamics, and so on, not only in the gas phase but also in condensed phases like liquid solvents and even in crowded environments in living cells, we need to bridge over a gap between statistical mechanics and microscopic real-time dynamics. For instance, an analogy to gas-phase dynamics in which molecules are driven by the gradient of the potential energy hyper-surfaces (PESs) suggests that particles in condensed phases should run on the free energy surface instead. The question is whether this anticipation is correct. To answer it, we here propose a mixed dynamics and statistical representation to treat chemical dynamics embedded in a statistical ensemble. We first define the entropy functional, which is a function of the phase-space position of the dynamical subsystem, being dressed with statistical weights from the statistical counterpart. We then consider the functionals of temperature, free energy, and chemical potential as their extensions in statistical mechanics, through which one can clarify the relationship between real-time microscopic dynamics and statistical quantities. As an illustrative example we show that molecules in the dynamical subsystem should run on the free-energy functional surface, if and only if the spatial gradients of the temperature functional are all zero. Otherwise, additional forces emerge from the gradient of the temperature functional. Numerical demonstrations are presented at the very basic level of this theory of molecular dissociation in atomic cluster solvents. PMID:26674298
Glacial marine sedimentation: Paleoclimatic significance
Anderson, J.B.; Ashley, G.M.
1991-01-01
This publication resulted from a symposium held during the 1988 Annual Meeting of the Geological Society of America. Many, but not all, contributors to the symposium have papers in this volume. This Special Paper consists of 14 chapters and a subject/geographic index. Each chapter has its own list of references. The papers cover a wide range of modern climate/ocean environments, including papers on glacial marine sediments from Antarctica, the fiords of Alaska, and sediments from the Canadian High Arctic. In addition, three papers discuss "old" glacial marine records (i.e., pre-Tertiary), and one paper discusses the Yakataga Formation of the Gulf of Alaska, a Miocene-to-late-Pleistocene sequence. The last chapter in the book includes a survey and summary of the evidence for the paleoclimatic significance of glacial marine sediments by the two editors, John Anderson and Gail Ashley. It is worth noting that Anderson and Domack state in the Foreword that there is considerable variation in terminology; hence they employ a series of definitions which they urge the other authors to adopt. They define and explain what they mean by "polar ice cap," "polar tundra (subpolar)," and "temperate oceanic and boreal" in terms of the dominant glacial and glacial marine processes. Although one might quarrel with the terminology, the broad differences between these three glaciological regimes are indeed fundamental and need to be sought in the geological record. The flavor of the volume can be judged by some of the chapter titles. Contributions on Antarctica include a paper by Anderson and others entitled "Sedimentary facies associated with Antarctica's floating ice masses" and a companion paper by Anderson and Domack which deals with the extremely complex glacial marine facies (13 facies are delimited) in McMurdo Sound, Antarctica.
Statistical and computational challenges in physical mapping
Nelson, D.O.; Speed, T.P.
1994-06-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like Huntington's disease, cystic fibrosis, and myotonic dystrophy. Instrumental in these efforts has been the construction of so-called "physical maps" of large regions of human chromosomes. Constructing a physical map of a chromosome presents a number of interesting challenges to the computational statistician. In addition to the general ill-posedness of the problem, complications include the size of the data sets, computational complexity, and the pervasiveness of experimental error. The nature of the problem and the presence of many levels of experimental uncertainty make statistical approaches to map construction appealing. Simultaneously, however, the size and combinatorial complexity of the problem make such approaches computationally demanding. In this paper we discuss what physical maps are and describe three different kinds of physical maps, outlining issues which arise in constructing them. In addition, we describe our experience with powerful, interactive statistical computing environments. We found that the ability to create high-level specifications of proposed algorithms which could then be directly executed provided a flexible rapid prototyping facility for developing new statistical models and methods. The ability to check the implementation of an algorithm by comparing its results to that of an executable specification enabled us to rapidly debug both specification and implementation in an environment of changing needs.
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1982-01-01
Based on 1349 hydrogen alpha flares with X-ray counterparts, the relationship of rise time, decay time, duration, latitude, hydrogen alpha importance, and X-ray class to the 2800 MHz radio flux (F2800) was investigated. An important finding is that during 1980 both the number of hydrogen alpha importance class 1 flares and the number of X-ray class M (and M+X) flares appeared to be rather strongly related to F2800, in a positive sense; i.e., the number of class 1 and class M events increased as F2800 increased.
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance
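The epistatic cancellation of additive variance described above can be illustrated with a toy two-locus genotype-phenotype map (hypothetical, not the authors' data): an XOR-like pattern gives each locus a zero marginal (additive) effect even though the pair of loci fully determines the phenotype, so single-locus GWA scans see nothing while a pairwise scan succeeds.

```python
from itertools import product

def additive_effect(phenotype, locus):
    # Marginal (additive) effect of one biallelic locus: the difference
    # between the mean phenotypes of its two alleles, averaged over the
    # genotypes at the other locus (equal genotype frequencies assumed).
    means = []
    for allele in (0, 1):
        vals = [phenotype[g] for g in phenotype if g[locus] == allele]
        means.append(sum(vals) / len(vals))
    return means[1] - means[0]

# Hypothetical two-locus map with pure epistasis (XOR pattern):
# the phenotype depends only on the allele combination.
phenotype = {g: float(g[0] ^ g[1]) for g in product((0, 1), repeat=2)}
```

Here `additive_effect(phenotype, 0)` and `additive_effect(phenotype, 1)` are both zero, yet the phenotype varies across genotypes: all of the genetic variance is epistatic, mirroring the paper's observation of near-zero narrow-sense but substantial broad-sense heritability.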
Predicting typology of landslide occurrences by statistical GIS analysis
NASA Astrophysics Data System (ADS)
Mancini, Francesco; Ceppi, Claudia; Ritrovato, Giuliano
2010-05-01
This study aims at landslide susceptibility mapping by multivariate statistical methods, with the additional capability to distinguish among typologies of landslide occurrence. The methodology is being tested in a hilly area of the Daunia region (Apulia, southern Italy), where small settlements are historically threatened by landslide phenomena. In the multivariate statistical analysis, all variables were managed in a GIS together with the landslide inventory, whose geometric and descriptive properties had to be implemented in a suitable data structure in order to relate the independent set of variables to the typology of landslide occurrence. The independent variables selected as possible triggering factors of slope instability phenomena are: elevation, slope, aspect, planform and profile curvature, drained area, lithology, land use, and distance from the road and river networks. Implementing the landslide inventory was more demanding than in a usual multivariate analysis, such as multiple regression analysis, where only the presence/absence status of occurrences is required. Following the classification proposed by Cruden and Varnes, three main landslide typologies were included in the inventory after recognition by geomorphological survey: (a) intermediate to deep-seated compound landslides with failure-surface depth > 30 m; (b) mudslides with shallow to intermediate sliding-surface depth; (c) deep-seated to intermediate-depth rotational landslides with sliding-surface depth < 30 m. The inventory implementation constitutes a significant effort supported, through several areas of expertise, by the project "Landslide risk assessment for the planning of small urban settlements within chain areas: the case of Daunia". The outcomes of the analysis provide the proneness to landsliding, as a predicted level of probability, additionally accounting for the failure mechanism recorded in the landslide inventory. A map of landslide susceptibility along
Kontos, Despina; Megalooikonomou, Vasileios; Gee, James C.
2009-01-01
Objective: We evaluate the feasibility of applying dynamic recursive partitioning (DRP), an image analysis technique, to morphometric analysis. We apply DRP to detect and characterize discriminative morphometric characteristics between anatomical brain structures from different groups of subjects. Our method reduces the number of statistical tests commonly required by pixel-wise statistics, alleviating the effect of the multiple comparison problem. Methods and materials: The main idea of DRP is to partition the two-dimensional (2D) image adaptively into progressively smaller sub-regions until statistically significant discriminative regions are detected. The partitioning process is guided by statistical tests applied to groups of pixels. By performing statistical tests on groups of pixels rather than on individual pixels, the number of statistical tests is effectively reduced, which restricts the effect of the multiple comparison problem (i.e., type-I error). We demonstrate an application of DRP to detecting gender-related morphometric differentiation of the corpus callosum: DRP was applied to template deformation fields computed from registered magnetic resonance images of the corpus callosum in order to detect regions of significant expansion or contraction between female and male subjects. Results: DRP was able to detect regions comparable to those of pixel-wise analysis while reducing the number of required statistical tests by up to almost 50%. The detected regions were in agreement with findings previously reported in the literature: statistically significant discriminative morphological variability was detected in the posterior corpus callosum, the isthmus, and the anterior corpus callosum. In addition, by operating on groups of pixels, DRP appears less prone to flagging spatially diffused, isolated outlier pixels as significant. Conclusion: DRP can be a viable approach for detecting discriminative
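The partitioning idea can be sketched as follows (illustrative only; the published DRP has its own split rule and significance test, and the fixed critical value |t| > 2 here is a rough stand-in for a proper p-value): each region is screened with a two-sample t statistic on per-subject region means, and only regions showing a group difference are subdivided, which is what cuts the number of tests.

```python
import math

def t_statistic(a, b):
    # Welch two-sample t statistic on two lists of region means.
    # The tiny epsilon guards against zero variance in toy data.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb + 1e-12)

def region_mean(img, r0, r1, c0, c1):
    vals = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def drp(group_a, group_b, r0, r1, c0, c1, t_crit=2.0, min_size=2, found=None):
    # Dynamic recursive partitioning sketch: test a region as a whole,
    # prune it if the groups do not differ, and otherwise split it into
    # quadrants until a minimum region size is reached.
    if found is None:
        found = []
    a = [region_mean(img, r0, r1, c0, c1) for img in group_a]
    b = [region_mean(img, r0, r1, c0, c1) for img in group_b]
    if abs(t_statistic(a, b)) < t_crit:
        return found  # no evidence here; pruning saves further tests
    if r1 - r0 <= min_size and c1 - c0 <= min_size:
        found.append((r0, r1, c0, c1))
        return found
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rr in ((r0, rm), (rm, r1)):
        for cc in ((c0, cm), (cm, c1)):
            if rr[0] < rr[1] and cc[0] < cc[1]:
                drp(group_a, group_b, rr[0], rr[1], cc[0], cc[1],
                    t_crit, min_size, found)
    return found
```

On a synthetic example where one group carries an elevated top-left quadrant, only the path of regions containing that quadrant is tested and subdivided; the other quadrants are pruned after a single test each, instead of one test per pixel.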
Clinical statistics: five key statistical concepts for clinicians.
Choi, Yong-Geun
2013-10-01
Statistics is the science of data. As the foundation of scientific knowledge, data refers to evidentiary facts obtained from nature by human action, observation, or experiment. Clinicians should be aware of what makes data good enough to support the validity of clinical modalities when reading scientific articles, one of the resources for revising or updating their clinical knowledge and skills. The cause-effect link between a clinical modality and an outcome is ascertained as a pattern statistic. The uniformity of nature guarantees the recurrence of data as basic scientific evidence. Variation statistics are examined for patterns of recurrence; this provides information on the probability of recurrence of the cause-effect phenomenon. The multiple causal factors of a natural phenomenon require a counterproof of absence in the form of a control group. A pattern of relation between a causal factor and an effect becomes recognizable and should then be estimated as a relation statistic; the type and meaning of each relation statistic should be well understood. A study of a sample drawn from a widely varying population requires clinicians to be aware of error statistics due to random chance. Incomplete human senses, coarse measurement instruments, and preconceived ideas held as hypotheses all tend to bias research, which gives rise to the necessity of a keen, critical, independent mind with regard to reported data.
The faulty statistics of complementary alternative medicine (CAM).
Pandolfi, Maurizio; Carreras, Giulia
2014-09-01
The authors illustrate the difficulties involved in obtaining a valid statistical significance in clinical studies, especially when the prior probability of the hypothesis under scrutiny is low. Since the prior probability of a research hypothesis is directly related to its scientific plausibility, the commonly used frequentist statistics, which do not take this probability into account, are particularly unsuitable for studies exploring matters in various degrees disconnected from science, such as complementary and alternative medicine (CAM) interventions. Any statistical significance obtained in this field should be considered with great caution and may be better attributed to more plausible hypotheses (such as a placebo effect) than to the one examined, which is usually the specific efficacy of the intervention. Since achieving meaningful statistical significance is an essential step in the validation of medical interventions, CAM practices, producing only outcomes inherently resistant to statistical validation, appear not to belong to modern evidence-based medicine.
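The dependence of a "significant" result's evidential value on prior plausibility can be made concrete with Bayes' rule (a standard textbook calculation, not taken from the paper): given a significance level alpha and statistical power, the probability that a significant finding reflects a true hypothesis is the ratio of true positives to all positives.

```python
def positive_predictive_value(prior, alpha=0.05, power=0.8):
    # P(hypothesis true | significant result), by Bayes' rule:
    # true positives / (true positives + false positives).
    tp = power * prior
    fp = alpha * (1.0 - prior)
    return tp / (tp + fp)
```

With a prior of 0.5 the result is trustworthy (PPV about 0.94), but with a prior of 0.01, as might suit a scientifically implausible CAM mechanism, the PPV drops below 0.14: most significant findings would then be false positives, which is exactly the authors' point.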
Fresh stirrings among statisticians: statistical commentary.
Godfrey, Keith
2016-05-01
For some years there has been unrest in the statistical world regarding the use of the p-value. Its significance has been called into question, which undermines confidence in it as a measure of the strength of evidence. This paper examines the use and misuse of the p-value and recommends care in its application. PMID:27468598
Indigenous family violence: a statistical challenge.
Cripps, Kyllie
2008-12-01
The issue of family violence and sexual abuse in Indigenous communities across Australia has attracted much attention throughout 2007, including significant intervention by the federal government into communities deemed to be in crisis. This paper critically examines the reporting and recording of Indigenous violence in Australia and reflects on what 'statistics' can offer as we grapple with how to respond appropriately to a problem defined as a 'national emergency'. PMID:19130914
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
Necessary and sufficient conditions are given for a surjective bounded linear operator T from a Banach space X to a Banach space Y to be a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population have the property that the sample mean is a sufficient statistic.
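The sample-mean result can be illustrated, in the simplest finite-dimensional setting, with the Fisher-Neyman factorization criterion (a standard result, not specific to the Banach-space framework of the paper): for $n$ i.i.d. observations from $N(\theta, \sigma^2)$ with known $\sigma^2$, the joint density factors through $\bar{x}$,

```latex
f_\theta(x) \;=\; \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}
  \, e^{-(x_i-\theta)^2/2\sigma^2}
\;=\;
\underbrace{e^{\,n\left(\bar{x}\theta - \theta^2/2\right)/\sigma^2}}_{g(\bar{x},\,\theta)}
\cdot
\underbrace{(2\pi\sigma^2)^{-n/2}\, e^{-\sum_i x_i^2/2\sigma^2}}_{h(x)},
```

so $T(x) = \bar{x}$ (a bounded linear map from $\mathbb{R}^n$ onto $\mathbb{R}$) is sufficient for $\theta$: the factor depending on $\theta$ depends on the data only through $\bar{x}$.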
An introduction to statistical finance
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2002-10-01
We summarize recent research in a rapidly growing field, that of statistical finance, also called ‘econophysics’. There are three main themes in this activity: (i) empirical studies and the discovery of interesting universal features in the statistical texture of financial time series, (ii) the use of these empirical results to devise better models of risk and derivative pricing, of direct interest for the financial industry, and (iii) the study of ‘agent-based models’ in order to unveil the basic mechanisms that are responsible for the statistical ‘anomalies’ observed in financial time series. We give a brief overview of some of the results in these three directions.
Osborn, C.T.; Llacuna, F.; Linsenbigler, M.
1992-11-01
The United States Department of Agriculture accepted approximately 33.9 million acres of cropland into the Conservation Reserve Program (CRP) during 1986-89. These acres were enrolled in nine separate signups under the authority of the Food Security Act of 1985. Enrollment was extended through 1995 by the Food, Agriculture, Conservation, and Trade Act of 1990. Two signup periods were held in 1991. The 10th signup was for fiscal year 1991 retirement of land from production, and the 11th signup was for fiscal year 1992. Approximately 1.5 million additional acres were enrolled in these two signup periods under significantly revised program rules. The report presents statistics covering all CRP cropland enrolled in signup periods 1-11 and CRP cropland newly retired in fiscal years 1990-92.
Trend Analysis of Tropical Ozone From the Southern Hemisphere Additional Ozonesondes (SHADOZ) Data
NASA Astrophysics Data System (ADS)
Morioka, H.; Fujiwara, M.; Shiotani, M.; Thompson, A. M.; Witte, J. C.; Oltmans, S. J.
2007-12-01
Linear trends of ozone for 1998-2007 are estimated for the troposphere through the lower stratosphere at ten tropical ozonesonde stations participating in the Southern Hemisphere Additional Ozonesondes (SHADOZ) project. Most stations cover the period from early 1998 to the end of 2006, but some stations have a shorter or longer record. Soundings are made once to four times per month, varying by station and year, but cover essentially all seasons. The total number of soundings ranges from 102 at Malindi to 429 at Ascension Island. Trends are calculated for vertically averaged values in each 1-km bin from 0-1 km to 30-31 km and expressed as percent per year; statistical significance tests are also performed. Around the tropopause, between 15 and 20 km, negative trends are seen for most stations. At San Cristobal (in the eastern Pacific) at 16-17 km, the trend is -4.3 ± 3.0 percent per year, and at Watukosek (in Indonesia) at 17-18 km, it is -4.8 ± 3.9 percent per year, both statistically significant. However, at Ascension (in the Atlantic) and at Natal (in South America), the tropopause trend is near zero and not statistically significant. At Natal at 12-13 km, the trend is +3.7 ± 3.0 percent per year, and at Malindi (in Africa) at 11-12 km, it is +5.0 ± 4.6 percent per year, both statistically significant. Generally in the free troposphere, positive trends are seen but are not statistically significant for most regions. In the planetary boundary layer, statistically significant positive trends are seen at Kuala Lumpur (in Southeast Asia) and at Fiji (in the southwestern Pacific), and a statistically significant negative trend is seen at Paramaribo (in South America). The trend analysis is also performed for four different seasons. Around the tropopause, seasonality in the trend is small for all stations. In the upper troposphere, at Fiji and at Samoa, negative trends are seen in SON, but positive trends are seen in DJF.
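A per-bin trend estimate of this kind can be sketched as an ordinary least-squares slope with its standard error, expressed as percent of the series mean per unit time; the significance screen used here (|slope| > 2 SE, roughly a 95% criterion) is an illustrative assumption, not necessarily the exact SHADOZ test.

```python
import math

def trend_percent_per_year(times, values):
    # OLS slope and its standard error for a single altitude bin,
    # expressed as percent of the series mean per unit time.
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    sxx = sum((t - tm) ** 2 for t in times)
    slope = sum((t - tm) * (v - vm) for t, v in zip(times, values)) / sxx
    resid = [v - vm - slope * (t - tm) for t, v in zip(times, values)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    # "Significant" here means the slope exceeds twice its standard error.
    return 100.0 * slope / vm, 100.0 * se / vm, abs(slope) > 2 * se
```

Applied to a bin's annual-mean series, this returns the trend and its uncertainty in the same "percent per year ± SE" form quoted in the abstract.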
Topological Cacti: Visualizing Contour-based Statistics
Weber, Gunther H.; Bremer, Peer-Timo; Pascucci, Valerio
2011-05-26
Contours, the connected components of level sets, play an important role in understanding the global structure of a scalar field. In particular, their nesting behavior and topology, often represented in the form of a contour tree, have been used extensively for visualization and analysis. However, traditional contour trees only encode structural properties like the number of contours or the nesting of contours, but little quantitative information such as volume or other statistics. Here we use the segmentation implied by a contour tree to compute a large number of per-contour (interval-based) statistics of both the function defining the contour tree as well as other co-located functions. We introduce a new visual metaphor for contour trees, called topological cacti, that extends the traditional toporrery display of a contour tree to display additional quantitative information as the width of the cactus trunk and the length of its spikes. We apply the new technique to scalar fields of varying dimension and different measures to demonstrate the effectiveness of the approach.
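The per-contour statistics can be sketched on a grid by extracting connected components of a superlevel set (a simplified stand-in for the full contour-tree segmentation, which tracks components across all levels) and accumulating an area count plus the mean of a co-located field per component:

```python
from collections import deque

def contour_stats(field, level, other=None):
    # Connected components (4-neighborhood) of the superlevel set
    # {f >= level}; per component, report cell count and, optionally,
    # the mean of a co-located field -- the kind of per-contour
    # statistic a topological cactus would display as trunk width.
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    stats = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or field[r][c] < level:
                continue
            area, total = 0, 0.0
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:  # breadth-first flood fill of one component
                i, j = queue.popleft()
                area += 1
                if other is not None:
                    total += other[i][j]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and not seen[ni][nj] and field[ni][nj] >= level):
                        seen[ni][nj] = True
                        queue.append((ni, nj))
            stats.append((area, total / area if other is not None else None))
    return stats
```

Sweeping `level` over the data range yields the per-interval statistics along each arc that the cactus glyph encodes geometrically.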
Prognostic significance of astrocyte elevated gene-1 in human astrocytomas
He, Zhangxiu; He, Meihui; Wang, Chao; Xu, Baozhan; Tong, Liping; He, Junming; Sun, Bowen; Wei, Lanlan; Chu, Ming
2014-01-01
Astrocyte Elevated Gene-1 (AEG-1) has been proposed as a biomarker for a variety of cancers. This study aimed to investigate the expression of AEG-1 in human astrocytomas and the correlation between AEG-1 expression and clinicopathologic variables of astrocytomas. AEG-1 expression in four pairs of matched astrocytoma tissues and 5 cell lines was detected by immunohistochemical and Western blot analysis. In addition, AEG-1 protein expression was examined by immunohistochemical staining in 204 cases, including 32 normal brain tissues, 80 low-malignancy astrocytomas (LMAs) and 92 high-malignancy astrocytomas (HMAs). AEG-1 expression in 31 LMA and 29 HMA samples was detected by RT-PCR and Western blot analysis. We detected AEG-1 expression in normal neurons and the glioma cell lines U87, U251 and M059K, but not in normal glial cells. Immunohistochemical analysis showed that 128 of 172 (74.4%) paraffin-embedded archival astrocytoma samples exhibited positive AEG-1 expression. Statistical analysis suggested that a higher AEG-1 level was significantly correlated with the histological grade of astrocytomas. In addition, AEG-1 mRNA and protein expression was higher in HMAs than in LMAs. AEG-1 expression had no correlation with the gender or age of astrocytoma patients. In summary, our data suggest that AEG-1 may represent a novel prognostic marker for astrocytomas. PMID:25197376
The Effect Size Statistic: Overview of Various Choices.
ERIC Educational Resources Information Center
Mahadevan, Lakshmi
Over the years, methodologists have been recommending that researchers use magnitude of effect estimates in result interpretation to highlight the distinction between statistical and practical significance (cf. R. Kirk, 1996). A magnitude of effect statistic (i.e., effect size) tells to what degree the dependent variable can be controlled,…
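One widely recommended magnitude-of-effect statistic is Cohen's d, the standardized mean difference with a pooled standard deviation; a minimal implementation (illustrative; the article surveys many other choices, such as eta-squared and omega-squared):

```python
import math

def cohens_d(a, b):
    # Effect size: difference of group means divided by the pooled
    # standard deviation, so the result is in standard-deviation units.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

Unlike a p-value, d does not shrink toward triviality as sample size grows, which is why it is used to judge practical rather than merely statistical significance.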
What's Funny about Statistics? A Technique for Reducing Student Anxiety.
ERIC Educational Resources Information Center
Schacht, Steven; Stewart, Brad J.
1990-01-01
Studied the use of humorous cartoons to reduce the anxiety levels of students in statistics classes. Used the Mathematics Anxiety Rating Scale (MARS) to measure the level of student anxiety before and after a statistics course. Found that there was a significant reduction in levels of mathematics anxiety after the course. (SLM)